Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Zorp: Nockchain's ZK Proof-of-Useful-Work Consensus - Logan Allen
Episode Date: August 9, 2025

Inspired by Urbit's minimal assembly language, Nockchain fuses Urbit's vision of sovereign computing with a novel proof-of-useful-work consensus mechanism, creating a blockchain where every computation fuels progress and scaling. The crypto-economics behind Nockchain's zkVM incentivise competition between zero-knowledge provers, ultimately bootstrapping ZKPs as a new computational commodity.

Topics covered in this episode:
- Logan's background
- Urbit's values
- Nock, Urbit's minimal assembly language
- Use cases for zk proofs
- Nockchain's zkVM efficiency
- Useful proof-of-work
- Launching Nockchain
- Future roadmap for Nockchain
- Building apps on Nockchain
- Store of value vs. revenue generation
- The impact of quantum computing

Episode links:
- Logan Allen on X
- Zorp on X
- Nockchain on X
- Urbit on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Brian Fabian Crain.
Transcript
Back when we were thinking of Nockchain in 2023, we said, we want to solve distribution.
We want to get more people doing proofs with the Nock ZKVM than anything else
as fast as possible in a self-reinforcing process.
And so we built a zero-knowledge proof-of-work competition, a ZK proof of work.
In Nockchain, the proof-of-work competition is to solve and build ZK proofs.
We wanted to use the consensus mechanism to drive and
incentivize the production at industrial scale of zero-knowledge proofs.
The core economic center of gravity of Nockchain is around a scarce value-storage instrument.
And so all of the revenue generation capabilities that are going to be built over time for data availability,
for programmability, are about increasing the monetary velocity and the usefulness of that digital gold.
So the framing that I kind of used and that I like is that Nockchain is
a programmable sound money that scales.
Welcome to Epicenter, the show which talks about the technologies, projects,
and people driving decentralization and the blockchain revolution.
I'm Brian Crain, and today I'm speaking with Logan Allen,
who is the CEO of Zorp.
Zorp is a company that's been working on ZK technology,
and they have also launched a new proof-of-work ZK chain called Nockchain, very recently.
So I'm really excited to talk with
Logan today. Just before we get started, we'll share a few words from our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One. More than 150,000
delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their
assets. They support over 50 blockchains and are leaders in governance on networks like Cosmos,
ensuring your stake is responsibly managed. Thanks to their advanced MEV research, you can also
earn the highest staking rewards.
You can stake directly from your preferred wallet,
set up a white-label node,
restake your assets on EigenLayer or Symbiotic,
or use their SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
Hey guys.
I want to tell you about Gnosis,
a collective of builders creating real tools
for real people on the open internet.
Gnosis has been around since 2015.
In fact, it started as one of Ethereum's very first projects.
And today, it's grown to a whole ecosystem
designed to make open finance actually work for everyday people.
At the center of it all is Gnosis Chain.
It's a low-cost, highly decentralized layer one that's compatible with Ethereum and secured by
over 300,000 validators.
So whether you're building a dApp, experimenting with DeFi, or working on autonomous agents,
Gnosis Chain gives you a solid, neutral foundation to build on.
But Gnosis is more than just infrastructure.
It's also tools that people can actually use.
Like Circles, for example, which lets anyone issue their own digital currency through networks of
trust, not banks. And then there's Metri. It's their smart contract wallet that makes it easy
to access Circles, manage group currencies, and even spend anywhere Visa is accepted, thanks to their
integration with Gnosis Pay. All this is governed by GnosisDAO, where anyone can propose, vote,
and help guide the network. And if you want to get involved, running a validator is super easy. All you need
is one GNO and some basic hardware. To learn more and start building on the open internet, head to gnosis.io.
Gnosis, building the open internet one block at a time.
Cool. Well, thanks so much for coming on, Logan.
Happy to be here.
Yeah, it's been, I think, a long time coming.
I mean, I've been aware of Zorp since the very, very beginning.
But maybe you can start here.
Like, what was your journey?
Like, how did you get into, well, crypto, I guess, or Urbit?
And sort of, yeah, tell us a little bit about
your journey. Yeah, absolutely. So I have a software engineering background, was programming since I was
12, went to Georgia Tech for computer science, and then I started getting really into Bitcoin and the whole
cryptocurrency thing in 2016 or so. And by 2017, I was in class thinking, what am I doing
here, learning more math? Why am I doing this instead of just going out and working in
industry, trying to get Bitcoin? And so,
I was about to start my senior year and I decided I was going to drop out and just go move to
San Francisco, work in the bay, grab a job and get started, just trying to accumulate, basically.
And so I've spent time at Uber doing product engineering during some of their growth periods,
spent time at Snap.
And then eventually I found myself actually working more directly adjacent to crypto at Tlon,
which was the research company that built Urbit.
And so I spent some time there, ended up as a tech lead on their product team.
And there were a lot of fun things going on at Tlon at that time.
It's hard to even describe the energy.
But Tlon had this very ambitious vision of building a new operating system
and a new decentralized internet to complement blockchains.
And I really, once I started interacting a lot with the technology there,
I really saw something very transformative because of how minimal and how tightly defined everything was.
And in a world in which we're increasingly dependent on computing and these trustless, decentralized
systems for really our everyday life for being able to send money around, for being able to store value,
and in a world where centralized banking and centralized information stores are increasingly compromised at an ever-increasing rate,
I really find that vision of trying to get more of our data stored in decentralized and
really secure, encrypted ways, with really minimal security assumptions, to be very compelling.
And so I spent a few years there and then I worked on a product studio for a little while.
And eventually in 2022, I founded Zorp.
And initially, the idea with Zorp was we're just going to do a bunch of research
into how we can take one of those technologies from Urbit,
the Nock instruction set,
how we can take that and make a really, really performant ZKVM,
a zero knowledge virtual machine that uses that instruction set.
And I was able to bring on some amazing math PhDs,
former professors and things like this to work on it with me.
And then we made some really great research results.
We recently published a paper, actually, in June that's kind of the encapsulation of all the research we've done over the past two years into this.
We published an eprint.
And then, anyway, in 2023, we looked at what we had built and we said, how are we going to go to market with this?
Right.
And so we decided to build Nockchain.
I've been in and around crypto for quite a while, and frankly, you know, not everyone feels the same way about this, but I've always really preferred the no-pre-mine proof-of-work ethos to the more pre-mined proof-of-stake ethos, just because I think of these economic protocols as having these like core economic engines and incentives that they bring to bear in addition to the actual technological features.
And I think that there's been an emphasis, let's say,
on particular incentives around basically going to market with pre-mined proof-of-stake protocols
because they're easier to underwrite by private placement.
And I think that what we've seen over the past seven years or so since that emphasis really
took hold is that there's now a lot of blockchains that all have the exact same incentives
and aren't really doing any innovation on the crypto economics, but they all have these like
different technical flavors, basically. You know, there's the AI blockchain, the fast blockchain,
the storage blockchain, you know, there's so many flavors all marketing to kind of the same
group of developers, all trying to be the next Ethereum, but they all have the same
incentives. And so we wanted to do something different. We wanted to
build something that we could really get behind for a very long time.
And so what we decided to do with Nockchain is build it as a fair-launch proof-of-work project
and try to use the proof of work incentives in a novel way to incentivize real-world behavior
that we wanted to see. And so, of course, we had this ZKVM. And so I imagine you're familiar
with VHS versus Betamax. Often, when there's a new technology, the one that wins is not
necessarily the best one. It's the one that gets the best distribution. Now, of course, we love
the Nock ZKVM. We think it is the best. We released this wonderful paper last month showing formal security
soundness bounds and showing that we see performance about an order of magnitude better than
RISC-V VMs, which is rather compelling. But back when we were thinking
of Nockchain in 2023, we said, we want to solve distribution.
We want to get more people doing proofs with the Nock ZKVM than anything else as fast as possible in a self-reinforcing process.
And to do that, you need to actually use incentives.
You can't just have these like top-down grants or anything like that.
And so we built a zero-knowledge proof-of-work competition, a ZK proof of work.
And so in Nockchain, the proof-of-work competition is to solve and
build ZK proofs, not to compute hashes like you see in standard Bitcoin.
And so that was where Nockchain was born: we wanted to use the consensus mechanism
to drive and incentivize the production at industrial scale of zero-knowledge proofs.
I'm going to pause because you asked a very simple question and I really, I really got into it.
But that's the story. That's how we got here.
Yeah, I think there's a lot of things here that I want to dive a little bit deeper into.
but maybe we can start with Urbit a little bit,
because I mean,
Urbit is something I've been involved in for a long time as well.
And, you know, we've done a bunch of Urbit podcasts here.
Maybe two questions here.
How would you describe Urbit?
What inspired you about it?
And then also, I'm curious,
what are your biggest learnings from how you've seen the Urbit ecosystem evolve
that you wanted to sort of apply when it comes to Nockchain?
Yeah, absolutely.
So the thing that appealed to me the most about Urbit
was the idea that really this crypto ethos
of self-sovereignty and decentralization
and this idea that you should kind of be in charge of your own destiny,
you should custody your own funds,
you should be the one that's ultimately making the decision
about whether the funds move or not, you don't need custodians.
This concept for me, I think applies to a lot more than just money.
And when I joined Urbit, the thing that really appealed to me the most was that they had the most expansive vision of how to apply those principles across the rest of computing.
And I think that ambition to spread the crypto ethos to a larger set
of applicable principles, let's say to make it easier to run, say, your own energy grid at home,
right? Like do solar and batteries, run your own energy, don't be dependent. That kind of almost
homesteading vision with computing where, you know, you're growing your own food, you've got
some automated systems helping you grow your own food, you're producing your own energy,
you've got some automated systems doing that. And you just have these protocols that run
that don't make you more dependent on everyone else,
but make you less dependent.
And in making you less dependent,
make you more free and more agentic.
That was what really appealed to me about Urbit.
And I still find that vision very appealing,
even now that I'm not working on the project directly.
And to me, it's still about how do we bring those same principles to bear
regardless of whether it's in that form factor.
And what are the biggest things you feel like you want to do differently from
Urbit, or your biggest sort of lessons that you see?
Can we swear on this podcast?
For sure, go ahead.
Okay, okay. So, Urbit needed to ship a product to users
that basically showed why they would need the infrastructure
that they were producing the vision for.
Okay, so let's kind of talk about this from first principles. If you're building a growth company
and you're selling a big vision, you need to be able to provide a compelling first use case for that vision, a step towards the
vision. Okay. As a justification for why, and also as a tool to show where you can go.
So when Elon wants to make better battery technology, he tells everybody, I'm going to give you the
best car. And then he gets to go invest in battery technology. When Elon wants to make better
rockets, he tells everyone, Mars is really cool. Wouldn't you love to live there? Wouldn't that be so
cool if we had this new place that we could go colonize and live? And then he gets to
take that vision and turn it into action of building tangible goods, right, of building and making
rockets better. Right. So you start with selling speculative vision. Then you show that you're
making real steps towards capturing that energy and actually turning it in something real.
And so the thing that I think Urbit has done the worst job of overall is they have an amazing speculative vision of what can happen with computing.
And they've done a really poor job of showing that they can actually take real steps towards making that real.
Yeah.
Certainly that has been a challenge.
I agree with you.
Now, let's talk about Nock, right?
Because I think Nock is basically the kind of assembly language of
Urbit. And in the beginning, I don't think ZK proofs were something that was really
a consideration there. But my understanding is that Curtis spent, you know, five years or
something in the beginning just on Nock, to try to make the most simple and elegant
definition of a computer. And, you know, in the Urbit space, you'd have people then
print out the whole Nock definition on a t-shirt, and
that was kind of one of the products.
Look, it's so simple.
It fits on a T-shirt.
Like, what's your,
tell us more about, like,
how do you feel about Nock?
Why are you so excited about Nock?
Yeah.
So, we love Nock so much that we
named the whole blockchain after it.
We literally named the currency of the blockchain Nock, okay?
We are Nock maximalists over here at Zorp and with
Nockchain.
And the reason for this is that
Nock is a minimal, executable specification of computing, in which you can do any practical thing that you want to do with a computer and specify it in terms of Nock instructions.
It has built-in capabilities for extension instructions.
So for complex arithmetic or cryptography, you can just call out to an extension instruction in the same way that your CPU calls out to its ALU, its arithmetic logic unit, for fast addition.
Like, you're not manually calculating every arithmetic instruction in the CPU.
It's basically calling out to little specialized subsets on the chip.
So, Nock is built in a way that mirrors the way that CPUs are built,
where you have a generic flow of logic that's minimal and can do anything a Turing-complete computer can do.
And then you can call out to instructions that are extensions.
And Nock is agnostic to how many extensions you have or what they do, but only imposes one requirement: that each is a pure function.
So this is a really, really key component here, which is you have a minimal Turing complete computer,
and then you can call out to extension instructions for any complex arithmetic, et cetera, that you need to go really fast.
And so this lets you do hardware acceleration of any complex logic, but have an extremely consistent
specification for normal computing operations.
And so consistency is a really, really powerful tool when building systems.
Because consistency and having a really, really minimal surface area of possible things that can occur
allows you to build systems that you can actually understand,
and in understanding them,
you can cover the security holes and vulnerabilities,
you can hold a mental model of what's happening in your head,
and you can build and work a lot more effectively.
And so the real things about Nock that are distinct,
that are really unique, is one, it's extremely minimal.
And the second one is that it only uses a single data structure.
Okay, so these components are going to really, really matter when we start talking about ZKVMs, okay?
And the extension instruction pieces, too.
That's something that is rather unique to Nock, but that most ZKVMs now use as well.
But when Nock was introduced in 2008, this idea of including extension instructions was very, very unique;
no one else was doing this. So the key point around this is: Nock is really minimal, very
few instructions, but Turing-complete. And second, Nock is built around a single data structure,
the binary tree. And this single data structure, it turns out, is, well, first off,
one of the most fundamental data structures in computer science; many, many things in computer science
are binary trees. Efficient databases, dictionaries, etc.
And second, because everything is one data structure, and there's a really minimal set of instructions,
it allows you to build a very efficient ZKVM circuit for the Nock ZKVM.
And so that was the initial intuition that I had in 2022: this is really minimal.
Binary trees are a very well-studied data structure with strong mathematical properties.
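To make the single-data-structure point concrete, here is a toy sketch in Python (illustrative only, not Zorp's implementation): a Nock noun is either an atom (a natural number) or a cell (an ordered pair of nouns), modeled here as ints and 2-tuples, with Nock's tree-addressing ("slot") rule on top.

```python
# Toy sketch of Nock's single data structure (not Zorp's code):
# a noun is either an atom (a natural number) or a cell (a pair
# of nouns), modeled here as Python ints and 2-tuples.

def slot(axis, noun):
    """Nock's tree-addressing ("slot") operator: axis 1 is the
    whole noun, axis 2 its head, axis 3 its tail, and larger
    axes recurse: even axes take the head, odd axes the tail."""
    if axis == 1:
        return noun
    if axis == 2:
        return noun[0]  # head of the cell
    if axis == 3:
        return noun[1]  # tail of the cell
    inner = slot(axis // 2, noun)
    return inner[0] if axis % 2 == 0 else inner[1]

tree = ((4, 5), (6, (14, 15)))
print(slot(2, tree))  # (4, 5) -- the head
print(slot(7, tree))  # (14, 15) -- the tail of the tail
```

Because every Nock program and every piece of Nock data is a noun addressed this one way, the circuit only ever has to constrain one uniform shape.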
And so I thought, we can probably build a really efficient ZKVM around this because ZKVMs, as a concept, are a way to express computation in terms of essentially middle school arithmetic.
You probably remember from intermediate school, right?
You have these polynomials, right?
F of X is equal to X squared plus three or whatever.
And so a ZKVM is a way to take a computation and express it as a relationship between polynomials.
So the idea is that you constrain the results of what is going to be computed using the polynomials.
And in practice, what you do is you ensure that for any given pair of rows, so for any given pair of states in the circuit, you're going to evaluate a polynomial
that must evaluate to zero. So, this is getting a little particular. But in practice, the way it works is:
you've got a computation, you record all the steps that you do, you put them in this big table, and then you apply these
polynomial constraints. And the polynomials have to all be zero; otherwise, you did the computation wrong.
And ZK proofs give you a way to use these kinds of little primitives to give you a really, really small proof that the computation was done correctly,
that can be verified extremely quickly, regardless of how big the computation was. And the proof's tiny.
So let's say I've got some super huge computation I want to do.
I can do it, make a ZK proof of it, send it to you, and you can verify it on your phone in like 20 milliseconds,
no matter how big the computation was.
This is a really powerful primitive.
It's kind of like, so, hash functions let you take
a fingerprint of data, no matter how big the data is, and compress it into a little bitty piece
that lets you verify that the data is what it's supposed to be. And so ZK proofs are kind of like a hash
function, but for computation instead of data. So it lets you commit to the computation.
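The hash-function analogy is easy to see in code. A minimal sketch using Python's standard `hashlib`: the fingerprint stays a fixed, tiny size no matter how large the input is, which is the property ZK proofs provide for computations rather than data.

```python
# The analogy in code: a hash gives a fixed-size fingerprint of data
# no matter how large the input is; a ZK proof gives the analogous
# fixed-size, fast-to-check commitment for a computation.
import hashlib

small = b"hello"
large = b"x" * 10_000_000  # roughly 10 MB

for data in (small, large):
    digest = hashlib.sha256(data).hexdigest()
    print(len(digest))  # 64 hex characters either way
```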
What do you think are the implications of ZK proofs? Like, what are the use cases you're most excited
about, and how do you think ZK proofs are going to change the world in the long term?
Yeah, okay. So the long term, ZK proofs I think are going to totally transform finance, compliance,
medicine, privacy, and probably things like voting also. And we could get into the kind of like speculative
idea of how these things can impact. But the general heuristic is anytime that you want to be
able to do something privately, but have everyone else be able to verify that it was done correctly,
a ZK proof is your best tool.
So we don't really want our credit scores being leaked all over the internet every time Equifax gets hacked.
But they do.
And so that type of really sensitive data, that's the type of thing where ZK proofs would be really, really useful.
Or similarly, your medical records, or how you voted.
It would be really, really nice if every time we have an election,
everyone's not all pointing fingers and saying you cheated, you cheated, da-da-da.
It'd be really nice if we had a public, transparently verifiable
mathematical representation that says everyone voted once,
everyone that voted was supposed to be able to vote,
and we can all verify that.
But we don't know who voted for what.
These types of features are uniquely enabled by ZK in a really efficient way.
Now, that's kind of the long-term societal implications.
I guess there's one more, which is we're in a world where hacking and cyber warfare is increasingly relevant.
Zero-day attacks are kind of the new thing in terms of warfare, whether it's the actual security zero-day attack where they're hacking your information,
or whether it's them dropping like a cargo crate full of drones next to your base, and then the drones come out and bomb everything.
These very sudden, very, frankly, sophisticated attacks that involve technology
are the cutting edge of warfare.
And it really is in many ways impacting critical infrastructure and critical infrastructure threat modeling.
So the power grid, right?
And water treatment facilities.
How do we get clean water?
How do we have, you know, good food?
How do we have power?
These types of questions are obviously in some ways security
concerns; you actually have to protect these things. But these systems are increasingly
digital. And so securing these systems and allowing for introspection into these systems,
verifiability that everything is going correctly, this is becoming increasingly important.
And zero knowledge proofs are a great way to be able to get that verification component.
So that's all the long-term stuff. In terms of the short-term stuff, zero-knowledge proofs are
extremely good today for making blockchains scale. Really, really, really good. And that's
one of the things that we're using heavily for Nockchain. So one of the things
I'd say is unfortunate about blockchains is the way that blockchains achieve verifiability
today. Like, you know, if we are running a Uniswap
smart contract on our Ethereum nodes, you guys run a lot of Ethereum nodes, I know. If you're
running Uniswap on it, you have to be running that smart contract on every single node you're
running in order to actually make the next state transition. If you want to check the next block,
everybody has to run the same computation and verify it. You get verifiability
through replication of execution. You just all execute the same code. That's how you know you all
got to the same answer. Well, that's really inefficient. How many Ethereum nodes
are running today, do you know? You probably do.
I don't know the exact number off the top of my head, but it's many, yes.
Yes, it's a lot, right? They're all running the same computation, right?
They all have to run the exact same thing every, you know, every new block.
So, you know, you've got, I don't know, however many, 30,000 computers, all running the
exact same thing every block. That's kind of inefficient.
So the way we see it is, and I mean, Justin Drake and these guys are all starting to pitch some
of this stuff too and, you know, talking about how Ethereum is going to transform over time.
I think they've got some five-year vision. Well, anyway, the future of blockchains is point-blank,
you're going to be running the actual computation on your computer, and you're going to be
verifying what you did on the blockchain. You're not going to be actually doing your
execution on the blockchain. That doesn't make very much sense. What makes a lot of sense is for you
to verify a proof that you did the execution on the blockchain. And so off-chain execution,
on-chain verification. So that no matter how much computation you did, no matter whether you were
running AI models or really sophisticated high-frequency trading algorithms or crazy MEV protection
or super-sophisticated loan credit checks or whatever it is you're doing, whether you're running a
video game, you can be doing it on your computer, and then the blockchain's just verifying
that what you did was done correctly and just settling.
So you mentioned that the ZKVM that you guys wrote based on Nock is much more efficient than the other ones.
Yeah.
Is that particularly relevant because of the cost of generating proofs?
Or like what's the most important consideration for efficiency when it comes to ZKVMs?
All right.
No, yeah, that's a big question.
So, yeah, ZKVMs are really good for the scalability of blockchains, and they're also really good for privacy.
That's the other big use case.
So in terms of efficiency considerations for ZKVMs, first off, you have to understand that ZKVMs are really two parts.
All right.
So there are two parts.
The first part is you can think of it as which circuit are you running, right?
Are you running the NOx circuit?
Are you running the RISC-V circuit?
Are you running the Cairo circuit from StarkWare?
And of course, different ZKVMs will make different technical decisions about things.
But the technical term here that's generic for all of this
is the interactive oracle proof. Okay. So you're going to have your circuit,
which is going to be modeling some interactive oracle proof for expressing some particular computation.
So you've got your first part, which is which circuit you are running.
And for STARKs, that's going to be a randomized arithmetic intermediate representation with preprocessing,
which is a crazy acronym, but they shorten it to RAP.
So with a STARK, you've got your RAP, and that's kind of the front end to your ZKVM.
So it's which circuit you're running.
Then on the back end, you're going to feed that circuit.
into the ZKVM backend, which is your polynomial commitment scheme.
So you've probably heard about STARKs.
You've probably heard about SNARKs.
You've probably seen the term trusted setup.
All right.
So when people are talking about that kind of thing,
they're talking about the back end.
They're talking about the polynomial commitment scheme.
And so there's really two areas of optimization.
There's the front end and the back end of the ZKVM.
And almost all the research has gone into optimizing the back end, how to commit to these polynomials.
And that's where most of the tradeoffs come in.
So do you have a trusted setup?
If you do, then you can get O(1) verification.
You can get these itty-bitty proofs that have O(1) verification.
Really tiny, really, really efficient.
But you have to trust that the setup was done correctly.
Otherwise, they can prove arbitrary statements.
Or you can have a transparent commitment.
In other words, there's no trusted setup.
You're only trusting pure math.
And in that case, the most commonly used thing on the market is FRI,
which is what STARKs use.
And so that's going to be a transparent commitment scheme.
You don't have to trust anybody.
It's pure math.
And with those, you have larger proof sizes,
depending on how large the computation is and how large the circuit is.
So to answer concretely: there are trade-offs, it depends.
But the smaller your circuit is, the more efficiently you're going to be able to do the computation,
which is going to make proving faster and is going to make this proof smaller.
And so those benefits of having smaller circuits and of having better asymptotics for building them are going to matter
no matter what, no matter what back end you're going into.
So the work that we've done on making
the Nock ZKVM, the Nock circuit, really, really efficient
is kind of timeless in a way, because it's like the core cryptography
that we can then, that we can use as a module
and put into whatever back end we want.
So let's say they make this, you know, super new, amazing proof back end.
You know, let's say Ligerito. It's a
funny thing from Bain Capital, where they made
Ligero, which is another one, and then
they made it really small, so they call it Ligerito.
Like, let's say that's the best thing, I don't know.
If that's the best thing, we can port our circuit right into it.
And we'd get all the same efficiency
speedups that we get right now, but in the new back end.
And so we spent all of our time working on
this, on the actual circuit definition.
Okay.
One thing you mentioned as well,
which is worth diving into, I think, is that, you know,
Nockchain is a proof-of-work chain, but the work is ZK proofs.
Now, of course, that is an interesting idea because in the end,
the idea of useful proof of work has been around for a long time, right?
Like early on, people were like, oh, you know, Bitcoin's very cool,
but all the miners do is create these hashes,
and these hashes really don't have any function except, like, mining Bitcoin blocks.
And so, of course, the idea was like, well, what if all these miners have to do some work,
but that work has some other external benefit and value besides just mining blocks?
So can you explain a little bit?
The ZK proofs that are produced by Nockchain miners,
how can they be useful?
Yeah, that's a great question.
So it's really hard to design a useful proof-of-work puzzle,
which is why we haven't seen many attempts at it, really.
So the fundamental traits that you need for a proof of work puzzle
to be a secure proof of work puzzle are, first off,
you have to be able to verify that the puzzle was completed way faster
than you can make the puzzle in the first place.
Okay.
The reason for this is that you want it to be hard to spam
invalid puzzles.
So you need to be able to check that the puzzle was done properly
really, really fast.
That's the first thing.
Then the second trait is you need it to be amortization resistant. Okay, that's a more complicated phrase, but what it means is that you don't want to be able to reuse work between attempts. Each time that you do an attempt at the proof of work puzzle, you basically have to start over. And sometimes you can't get all the way to amortization resistance, but you want to be as amortization resistant as you can.
And so, for instance, when the AsicBoost vulnerability was published in Bitcoin, that was an amortization exploit: they were able to reuse some of the work that they were doing between attempts. And so Bitmain was able to go way faster than they should have been able to relative to using the standard algorithm.
So you have to be able to verify the puzzle really fast, and you have to be amortization resistant, at least enough. And that's what you need for a proof of work puzzle.
So the fact that you have to satisfy these traits just to make a valid proof of work puzzle at all, and then, of course, on top of that you need to try to make it useful, is one of the reasons why it's been so difficult for people to do generalizable work in proof of work.
Now, fortunately for us, ZK proofs actually satisfy a lot of these traits.
Okay.
So it's a lot more expensive to make a proof than it is to verify a proof.
Okay.
So that's one of the first things that make it viable to make a ZK proof of work protocol at all.
And then the second thing is that if you work really hard, you can constrain down the circuit of your zkVM enough so that there's only one valid witness for any given computation, which means that starting from a computation, you can only make one proof with it. Okay? And if you can only make one proof per computation, it means that you have to start over again if your attempt at the puzzle fails. Okay, so that gives you that amortization resistance. All right. So ZK proofs can be made into valid proof of work puzzles.
And so then the question becomes, which is exactly what you're saying, can we make that useful for something other than just the proof of work competition?
Well, luckily, the answer is yes. You can make that useful. So as I mentioned, Nock is a Turing-complete VM. You can compute anything with it, right? And so the important point here is that because you can compute anything with it, and the algorithm for verifying the proof of work puzzle is just verifying the proof, you would theoretically be able to provide any type of verifiable work as a proof of work puzzle result. Now, in Nockchain today, in the first version that we launched, we tongue-in-cheek called it dumbnet because it was kind of like a minimal shippable protocol. But it is a useless proof of work puzzle right now. It's only used to
incentivize increased prover capacity and the global performance competition around optimizing the Nock zkVM, which I think is very useful. It's hardware-aligned around incentivizing people to make ZK proofs faster, which is useful for the whole industry. But in terms of making the actual proofs individually useful, it's actually not that big of an upgrade. Because currently, they're making one proof per attempt, but they're making a proof of basically a fixed computation.
Now, you can imagine that it sure would be nice
if instead, say, they were data availability sampling proofs.
Like, let's say they're providing some useful service of data availability sampling,
or let's say they're providing a proof of transaction inclusion in the blockchain.
Well, luckily, there's a whole area of research
by a by a wonderful
Cryptography PhD
Akikadas
researching exactly this.
How can you make ZK proof of work
useful
and how can you understand and bound
the security characteristics of it?
And so there's some wonderful papers on this
that he's published.
We were really pleased to collaborate with him, actually, on getting our research paper on the Nock zkVM published last month. But he's spent a lot of time over the past, you know, many years publishing papers on exactly this: how can you make ZK proof of work useful? And so he's got a proof of necessary work paper that describes how, once you have a ZK proof of work that's secure in a Nakamoto consensus model, you can actually use it to provide proofs of transaction inclusion and actually use that to power the chain itself.
So you could imagine in that model, every proof that you're doing as a part of the proof of work is actually a proof that you included a transaction in a block. And so the transaction processing is the thing being proven in the proof of work.
And so the idea behind proof of necessary work is that you actually scale the chain with the proof of work competition.
So the more compute power that's going into the chain for securing it, it's also powering transaction processing.
So of course, you know how much compute power has gone into Bitcoin.
That's a lot of compute.
That's a lot of energy.
Imagine if you were able to take that energy, and the speed and throughput of your chain was proportional to the amount of energy going into the proof of work competition.
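As a rough illustration of the "useful work" being discussed (a hypothetical sketch; in the proof-of-necessary-work design the miner would prove this inside a ZK proof rather than reveal the path directly): a miner could demonstrate a transaction's inclusion in a block via a Merkle path, and verifying that path is cheap.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(txs):
    """Merkle root over transaction hashes (toy: duplicate last node on odd levels)."""
    level = [h(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_path(txs, index):
    """Sibling hashes proving txs[index] is in the tree; each entry is (sibling, node_is_right)."""
    level = [h(tx) for tx in txs]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(root, tx, path):
    """Cheap check: O(log n) hashes to confirm the transaction is under the root."""
    node = h(tx)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)
proof = inclusion_path(txs, 2)
assert verify_inclusion(root, b"tx-c", proof)
```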
So I guess I can see different avenues where these ZK proofs could be used. One, of course, would be to basically say, hey, look, in the blockchain space there are people using ZK proofs kind of all over the place. So you could go to them like, hey, you should use Nock ZK proofs, because then you can basically earn some revenues in the form of Nock tokens, and, you know, maybe they're also more efficient and faster and stuff like that, but especially you have a sort of economic interest in adopting Nock ZK proofs. So I guess that's one. The other one would be more focused on ZK proofs to power Nockchain itself. If both of those directions exist, are you more bullish on one versus the other?
Yeah, so I would say I'm more bullish on using the ZK proofs to actually power specific capabilities of Nockchain itself. So, for instance, proofs of transaction inclusion so that transaction processing scales up with the security budget, proofs of data availability so that you can provide basically data availability sampling at scale through the proof of work competition, for something like a temporary blob store like you'd see from Ethereum. I'm really bullish on these use cases. And I think that the ideal situation is that you end up where the competition to generate Nock proofs from the ZK proof of work has generated such a massive amount of prover capacity that individual Nock proofs are extremely cheap and efficient. And so there would be no reason to even pay the protocol for the Nock proofs. You see, and so the proofs are just powering the protocol, and the service you're really paying for is settlement, data availability, et cetera.
So you guys launched Nockchain in May. How did the launch go?
Yeah, launch was crazy, man.
So let's see. Yeah, that was such a crazy time. So many sleepless nights. So yeah, we launched Nockchain. We wanted to get it out the door as fast as we could. We'd been trying to get it out the door for, I don't know, like a year. And so we finally had tested it enough that we were like, look, the whole thing works. We just got to get this shipped, get it out the door. We can keep iterating on this forever if we want to, but we're just going to get it out and do it for real. And what we intended to do, as I mentioned, is we didn't do a pre-mine.
So we launched it to the public, and we want Nockchain to stimulate a global performance competition around optimizing ZK proofs.
And so the way that we kicked this off was we had published in multiple of our pieces,
hey, you know, the first Bitcoin reference clients, they weren't optimized either.
People quickly came on the scene with GPUs.
People did this optimization privately, and there became this big competition, almost like a war, around optimizing and doing better in the proof of work competition.
And so, of course, it's 2025 now, right?
Very, very different from when Bitcoin launched.
When Bitcoin launched, only a few people even knew what hashcash was, right?
And now everybody knows what cryptocurrencies are.
Everybody knows what mining is.
And there's actually entire, like, massive server farms that all they do is they just wait for new proof of work coins to launch, and then they just go mine the heck out of them and then dump everything, right?
And so we thought to ourselves,
how can we make the fairest proof of work competition
that we possibly can in 2025
when there's all this, like, hostile, sophisticated compute ready to be deployed and just, you know, be mercenary and take everything and dump it, right?
How can we do this, right?
And so what we decided to do is we decided to open source a few weeks ahead of launch.
And we published and talked about this in Twitter Spaces for, I don't know, like a year before we launched. So we'd been talking about this for a year, and we'd put out there what our business model was going to be, everything.
So what we ended up doing is we launched a slow reference client.
So it's like a faithful implementation of all the algorithms.
It's like, you know, if you take this reference client and you optimize it and you make the code go faster, it will mine you a bunch of Nock.
And so we published this a couple weeks in advance of launch.
And we said, hey, guys, start optimizing this.
And then we published a blog post and said, listen, just to be super explicit, because we didn't do a pre-mine, our business model is we're doing acceleration on this. Like, we're making this go faster, and we're going to be mining this with a fast client from day one.
So if you want to get tokens, if you want to get Nock, you need to optimize yours too so you can be competitive.
And so then we launched on May 21, and we had this insane flood of users. I mean, there were 10,000 nodes that joined the network in like 30 minutes. It was crazy.
And what we discovered very, very quickly was that there's a massive community, particularly in Southeast Asia, of miners that try to join, you know, proof of work projects, particularly fair launch projects that have a lot of interest in them, and that our communication around our strategy had not gotten to them, either through the language barrier or because they mostly were listening to YouTube tutorials about how to set up the node or whatever. They basically weren't engaging with our material. And so they had no idea that they needed to optimize the miner to be competitive.
So we were kind of floored because we got like this massive burst of attention just in the week or two coming up to launch.
And we kind of had no idea that it was going to be the way it was.
So we got out in front of it to the best of our ability.
We said, hey, guys, listen, if you're just running the slow code, you're not going to mine any blocks. You should, like, go get some Rust guys and write some faster code so you can be competitive.
And honestly, it pissed a lot of those people off.
But the strategy worked.
So we got a bunch of really amazing developers, and a bunch of really dedicated and interested guys from early Bittensor. We got some people who kind of came over and started a company who were ex-Urbit people. We got these various groups of people who kind of, like, came to the call to adventure, if you will, and they optimized their miner and they got competitive really fast.
The first block mined by a third-party miner was block 1123.
And since then, you know, like right now on the network, we're only mining 30% of blocks.
And the other 70% are totally unaffiliated competitive miners.
And we're only like 40 days into the protocol.
So like basically it decentralized super fast.
And there's these different companies that are competing on the protocol.
And I've talked to a lot of them and a lot of them I guess I haven't talked to too.
But basically people optimized the code, and we were able to use this as a strategy to get a bunch of really values-aligned people to join, and basically direct all of the early token rewards to people that are actually going to work for it, and not people that are just kind of coming in trying to get an airdrop and then, like, you know, they don't care. They're just here because they think they're going to get rich quick and then they're going to jump out, you know.
And so yeah, launch was crazy, man.
I had no idea what to expect.
Yeah, it's definitely very cool how that ecosystem has emerged so quickly there, and how these different companies, and I know some of them as well, are involved there. So you mentioned that right now it's in this kind of dumbnet phase. The proofs of work are not useful yet.
What are the next stages in the evolution of the network?
Yeah, absolutely.
So, I mean, one of the big things is just getting an ETH bridge set up so that, you know, we can actually be connected to internet capital markets and, you know, get early price discovery. I mean, at the end of the day, right, there's kind of this old meta, I mean, it feels funny saying it's the old meta, but everybody in the past few years has just been doing this thing where they try to get super hyped up, super high private valuations, you know, the super high FDV private valuations pre-launch.
Then they launch and list on an exchange and then it's down forever.
You know, it's like they try to like keep liquidity low so they can like manipulate the market
and do all this like shady crap.
And that, I don't know.
Like I don't know why everybody's doing that.
It sucks.
Everybody's sick of it.
Nobody wants that.
And so we tried to do the exact opposite.
Basically, like, you know, fair launch, proof of work. And we want price discovery to start happening as fast as possible.
Because at the end of the day, like, you know, you live and die by the incentives.
You can't, like, cheat the incentives.
You know, if your protocol sucks, you know, no amount of, like, high FDV, low float shenanigans
is going to help.
And so, you know, we believe in what we're doing.
We're aligned around not chain long term.
And basically, we want to see what the community does, and we want to see what happens when you connect to broader internet capital markets. And so that's one of the first steps: just, you know, getting connected to the rest of the market.
And then from there, we're going to be adding hash time locks to support atomic swaps
rather shortly.
We're currently working on temporary blob storage so that we can have Nockchain start providing data availability services.
Ideally, what we want to do is, as I mentioned, we want to move towards
off-chain execution and on-chain verification
in an app roll-up model
where applications are issuing tokens on-chain
and they're able to perform logic
and do a lot of work off-chain
and use blob storage on the chain
and use the lock scripts
and composability through intents
to interact with other app roll-ups.
So you can have your game or your club or whatever
executing off-chain
and then interacting with assets on chain.
So that's where we're headed.
One way to think about this is, it basically doesn't really make sense to try to scale your blockchain by just centralizing and having it, like, do more and more replicated wasteful execution.
What makes the most sense is to have as much execution happening off the chain as possible
but have the chain acting as a central coordination layer
for all of that off-chain execution and providing composability between all of those off-chain executions.
And so we're doing that through intents.
And luckily, we're in the UTXO note model. And so that's how you do intents, basically: by having these individual notes be able to be interacted with independently and then be able to compose atomically.
And so what we're moving toward is providing data availability to the chain, starting to provide these very basic DeFi primitives like atomic swaps, and then moving towards programmability.
Okay, okay.
So this is another topic I wanted to talk about. So what is it going to look like to build applications on top of Nockchain? And how does it differ from, let's say, the Ethereum paradigm, where you create your Solidity smart contract and people can send transactions to interact with these smart contracts? How is it going to be different for Nockchain?
Yeah, absolutely. So as I mentioned, Nockchain uses the note model, so UTXOs. And so what that means is that every note has a lock on it. And so you can spend the note if you can unlock it. So the most common way to think of this is, if it's signed, then you can spend it if your key matches, right? That's the simplest possible lock script. Now another lock script is a time lock.
It's like, you can spend it after however many blocks. That's another kind of simple lock script.
But the idea behind intents is that you can build more complex and more semantically meaningful conditions for spending coins.
So for instance, I could say, I'm willing to spend these coins if you trade me
100 USDC for them.
That's a pretty complex condition.
And if you have these swap conditions, as an example, you know, I'm willing to swap these coins for 100 USDC, then you can have solvers going through all the notes on the chain and saying, wait a second, I can make money by unlocking these coins and giving these guys their 100 USDC, right? Like, I'll make that swap.
And so the idea, of course, here is that you can actually use the lock scripts as the contracts. And so the way to kind of understand how this relates to Nockchain is: Nockchain allows assets to compose with each other through lock scripts. So you can have assets interact with each other through lock scripts, but the execution is happening off-chain and being submitted to the chain.
And because Nockchain is a ZK-native chain, we expect that for all these complicated lock scripts, instead of having to execute these complex computations on chain, what you're going to be doing is you're going to be verifying a proof of the lock condition on chain. Does that make sense?
Yeah, it does make sense.
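A minimal sketch of the note-and-lock-script model described above (all names here are illustrative, not Nockchain's actual API; on the real chain the lock condition would be checked via a ZK proof rather than executed directly):

```python
from dataclasses import dataclass
from typing import Callable

# A "note" (UTXO) carries value and a lock: a predicate over the spend attempt.

@dataclass
class Note:
    value: int
    asset: str
    lock: Callable[[dict], bool]  # spend allowed iff lock(spend_context) is True

def signature_lock(owner: str) -> Callable[[dict], bool]:
    """Simplest lock: the spender must present the owner's key (a stand-in for a real signature check)."""
    return lambda ctx: ctx.get("key") == owner

def time_lock(unlock_height: int) -> Callable[[dict], bool]:
    """Spendable only at or after a given block height."""
    return lambda ctx: ctx.get("height", 0) >= unlock_height

def swap_lock(want_asset: str, want_amount: int) -> Callable[[dict], bool]:
    """Intent-style lock: spendable by anyone who pays the asked price."""
    return lambda ctx: (ctx.get("offered_asset") == want_asset
                        and ctx.get("offered_amount", 0) >= want_amount)

# "I'm willing to swap these coins for 100 USDC":
alice_note = Note(value=5, asset="NOCK", lock=swap_lock("USDC", 100))

# A solver scans notes and checks whether it can satisfy the lock:
assert alice_note.lock({"offered_asset": "USDC", "offered_amount": 100})
assert not alice_note.lock({"offered_asset": "USDC", "offered_amount": 50})
```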
I mean, one thing I'm curious about here is, in terms of the capabilities, on Ethereum you of course have, like, you know, lending markets, things like Uniswap, you have DAOs, you have a lot of different types of smart contract applications. Do you think this approach that Nockchain is taking is going to be, like, as powerful?
Yeah, it's going to be as powerful.
And what we're seeing a lot of today, particularly for complex applications, is that a lot of complex applications are actually moving onto their own custom stacks. And you and I both know that a lot of those custom stacks are just Cosmos. But look, a lot of people are moving over to app chains.
When people first started pitching app chains, app chains were not far enough along. Like, basically, app chains were hyped before app chains were ready. But app chains are how these large applications are going to scale, period.
And regardless of what chain you're talking about, whether it's pump.fun on Solana doing their own chain, or, you know, Robinhood deciding to do their own chain, whatever specific products are going to do really, really large amounts of transactions, large amounts of data moving through them, are going to be executing on their own. Whether we call them a chain, whether we call them an app, it doesn't really matter. They're not going to be executing in the main chain state machine.
And so what we're doing with Nockchain is, we have Nock apps. Nock apps execute off chain. If you want a central limit order book, you're going to run it as a Nock app. If you want your AMM, you're going to run it as a Nock app. If you want a lending protocol, you're going to run it as a Nock app. It's going to execute off chain, but it's going to have locks on assets on chain.
And so you're going to basically post proofs to the chain,
and those proofs can unlock and move funds around.
And these different apps are going to compose on chain.
Okay, so all of the actual assets are on chain.
And so as apps post proofs, they're going to be interacting with each other through the chain as a central coordinator.
But the chain's not doing the execution. It's just coordinating and composing the intent matching.
Very cool. Yeah, I think that is a very powerful approach.
You mentioned that you guys are building a, I think, decentralized exchange.
Are there any other products that you guys are planning on building?
Well, so far we've seen the community's been building a bunch of products. So I'm aware of another company, Southwest Pool Supply; I love the name, it's so funny. But Southwest Pool Supply, they're making a mining pool on Nock. They call it Nockpool. And they've made an explorer. It's beautiful. You should look at it: nockblocks.com. I got to say, it's got amazing metrics on it, showing miner decentralization, showing the supply schedule. I mean, these guys probably did a better job than I would have done. I mean, it's beautiful. It looks great.
We've been seeing a massive amount of work starting to go in from companies that just sprang up, basically. Like, you know, we're not paying these guys. These guys are just doing the work because they believe in the vision and they want to participate. And the proof of work protocol incentivizes them to get their hands dirty and actually work to create value.
And so for now, we're focused on building up the protocol. We're focused on, you know, building bridging, and we're going to be working on doing a decentralized exchange, probably starting in the first half of 2026. And yeah, we're not going to try to do, like, every possible product all at once or something. We want to, basically, what is it, pick our shots, you know what I'm saying?
Yeah, yeah.
You sent me a document where you talked a bit about L1 tokens and what makes them valuable. And you sort of put them into two categories: one is the store of value asset, the other is the revenue generating asset. You know, I guess Ethereum would be one that, I mean, kind of maybe fits into both buckets, but it has that revenue generating component too. So where do you see Nock fitting in in this framework?
Yeah, absolutely.
So I've been writing about this and thinking about this for a while now, about how to understand and create a valuation model for L1 assets. And Placeholder conveniently scooped me a little bit. They published something yesterday where the thrust of their essay is a little bit different than what I've been thinking about, but they make the same dichotomy that I've been looking at. And so I was like, damn, I got to publish this then. You know, I got to get this out of here. Like, this is crazy.
So the idea is that you can understand blockchain protocols as being either primarily
value storage protocols or revenue generation protocols.
And it's not that you can't be both; it's that often you're optimized more toward one than the other.
And so the way to think about this is a value storage protocol is a digital gold.
It's like Bitcoin.
And the idea is it's a neutral store of value.
You're a credibly neutral protocol.
You're not doing things like reversing hacks and giving people their money back.
You're very censorship resistant.
It's very difficult to change your social consensus. You have an immutable supply schedule, and if someone buys the asset, it's provably scarce, and they know what they're getting into.
And so they know that they can buy it, and that basically it's going to be a hard store of
value.
It may fluctuate and be volatile, but it's going to be scarce forever, period.
And so Bitcoin is the best example of the digital gold that we have today.
And of course, we see a lot of the narrative around sound money resonating and pitching this exact thing.
And so Bitcoin is kind of the preeminent value storage protocol today.
And one of the things that makes this dichotomy really useful is, well, the way that you would value something like a digital gold or a sound money is just fundamentally different than the way that you would value, say, like, Tesla stock.
Okay, like, Tesla stock is valuable, right? Or, I don't know, Google stock or OpenAI stock. These things are valuable. We agree they're valuable, everybody thinks they're valuable, people want them. But it's not a store of value. Just because something's valuable doesn't mean it's a store of value. Okay, so this is where I kind of bring in the differentiation around a revenue generation protocol. So on crypto Twitter we see a lot of talk about the revenue meta
and this idea that we should kind of value protocols in terms of their ability to generate revenue through protocol services.
And so the idea here is that there's kind of this dichotomy of two different centers of gravity that protocols are naturally attracted to: whether they're primarily a value storage protocol, or whether they're primarily just providing services as a protocol that they're generating revenue through.
So as you mentioned, Ethereum, right, it does do some of both.
It serves as a medium of exchange and a unit of account for the L2s and for the applications that use it, and does rather well for it; or I should say, the valuation of Ethereum is rather high as a multiple of the revenue it generates. And part of this is because of the network effects around the way that it's used as a medium of exchange and a unit of account. We can look at an alternative example of a revenue generation protocol: Celestia. If we look at Celestia as a revenue generation protocol, well, they really don't have any value storage capability.
They only really are valued in terms of their revenue generation. There's not some big network
of applications that are built on top of Celestia and using Tia, their token, as a medium of exchange,
or otherwise treating it as a store of value. People basically only value Celestia in terms of the revenue that it generates.
And so you can see the actual value of Celestia and the value of Ethereum
make a lot more sense when you start to understand the difference
between valuing something in terms of revenue generation
versus valuing it in terms of value storage capability.
So as you mentioned, Ethereum does have some of both.
It does have some value storage, but it is primarily revenue generation
through its data availability services, through its smart contract execution,
etc. And of course, it was the first programmable coin, and so as a result it was able to develop a wonderful network effect, and it's widely considered the number two asset. So the idea behind this, of course, is not to say that either value storage or revenue generation is good or bad; it's to have a mental framework for being able to value these assets appropriately. So I'm going to pause. Does that make sense?
Yeah, absolutely.
Nice.
So in terms of Nockchain, Nockchain is primarily a value storage protocol. Nockchain had no pre-mine. Nockchain has an immutable supply schedule. Nockchain is scarce. The only way to get it, at least right now, is through mining it and taking part in this hard competition. And so Nockchain is going to market not as yet another general purpose application layer where the whole focus is on bringing developers onto the ecosystem, right? Nockchain is going to market as a store of value. Nockchain is going to market as a digital gold.
And so the way to understand this is not that we are against revenue generation.
As I mentioned, we're building out data availability services.
We want you to be able to do programmability.
but the way to understand it is that the frame for the core economic center of gravity of Nockchain is around a scarce value storage instrument.
And so all of the revenue generation capabilities that are going to be built over time
for data availability, for programmability, are about increasing the monetary velocity and
the usefulness of that digital gold.
So the framing that I've kind of used and that I like is that Nockchain is programmable sound money that scales.
So Bitcoin has practically no revenue generation capability whatsoever. It's only a value storage instrument. The blockchain fees from moving transactions on Bitcoin are so minuscule, as a percentage or even as just any yield, that of course people hypothesize at times about the idea that if Bitcoin's price doesn't go up at a fast enough rate, eventually the block rewards will go so low that they won't actually serve and justify securing the protocol, because it doesn't have any revenue generation capability as a protocol.
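The security-budget concern here follows directly from Bitcoin's emission schedule: a 50 BTC initial subsidy that halves every 210,000 blocks. A quick back-of-the-envelope (ignoring the real client's satoshi-level integer rounding):

```python
# Bitcoin's block subsidy: starts at 50 BTC, halves every 210,000 blocks.
INITIAL_SUBSIDY = 50.0
HALVING_INTERVAL = 210_000

def subsidy_at(height: int) -> float:
    """Block subsidy in BTC at a given height (toy model, no satoshi rounding)."""
    return INITIAL_SUBSIDY / (2 ** (height // HALVING_INTERVAL))

for era in range(0, 12, 2):
    height = era * HALVING_INTERVAL
    print(f"height {height:>9}: {subsidy_at(height):.6f} BTC per block")

# After 10 halvings the subsidy is 50/1024 ≈ 0.0488 BTC per block, so unless
# price rises or fee revenue grows, the security budget in real terms shrinks.
```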
And so the way to understand this, I think, is that all the protocols that have been launched over the past seven or eight years have, as I mentioned, been really focused on being proof of stake, pre-mined coins marketing to developers, saying we're a better Ethereum; they're all primarily revenue generation protocols.
And so Bitcoin is really the value storage protocol today.
And I think it's silly to think that there can't be others,
particularly that have differentiated characteristics.
And so Nockchain is a value storage protocol into which we intend to build revenue generation capabilities over time.
Okay, cool.
I have one more question here.
So quantum computing is something that's coming at some point, and there's some concern about it; I mean, it's expected to break a lot of encryption. What do you think is going to be the effect of quantum computing on ZK, and maybe on Nockchain in particular?
Yeah, so quantum computing is an interesting topic because it's one of those things where
everybody wants to be safe against quantum computers, kind of like how you want to be safe
against natural disasters and earthquakes.
But there is an open question of, you know, let's say you don't live near any center of geological activity, how likely is it that you're going to be struck by an earthquake, right? Like, probably pretty unlikely.
So in a similar way, I think quantum computing is becoming more practical over time. I think that over time, it's probably going to be able to do more stuff.
Some of the breakthroughs in being able to use
quantum topological techniques to get more stable configurations
of qubits are rather interesting.
But we're pretty far from being able to implement Shor's algorithm and actually get a sped-up, you know, practical implementation of any of these attacks.
That being said, of course, it takes time to upgrade protocols, and it makes sense to plan in advance just in case, right?
So STARKs, depending on the hash function that you're using for your random oracle, are already plausibly post-quantum secure.
And algebraic hashes vary in their security against these types of attacks. And algebraic hashes are the ones commonly used for securing and making zkVMs go fast.
So the reason is that because they're algebraic,
you can model the relationships between the hash functions
more easily in terms of polynomials, et cetera.
So STARKs are plausibly post-quantum secure, depending on the hash function you use. And of course, you can switch out the hash function reasonably easily for something like BLAKE3 if you really need to.
So there's a pretty easy path to taking Nockchain to be post-quantum secure, because we have built on STARKs. The main piece to think about would be the signature scheme. So signatures, and making sure that we have an upgrade path to a post-quantum signature scheme.
So Nockchain is not currently secure against quantum attacks if something just popped onto the market.
But we'd be able to upgrade.
We're not using any trusted setup. If we were using a trusted setup, as an example, it would be a lot harder; it would be an open research question as to how to make it secure. But we're not.
We're in a transparent scheme using very battle-tested cryptography, and so the path to being totally post-quantum is a lot clearer, particularly on the timelines that we would need to be thinking about, which is like a decade.
Cool.
Well, thank you so much for coming on, Logan.
That was super fascinating.
I do think you guys have launched one of the most original and unusual networks, with a lot of, you know, radical design decisions you guys have made, and it's certainly one of the most novel things in crypto right now.
So I'm really excited about seeing the Nockchain ecosystem evolve.
And it feels like it's off to a great start.
So thank you so much for coming on.
Thanks for the time, Brian. I appreciate it.
