Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Tal Moran: Spacemesh – The Space-Time Consensus Blockchain
Episode Date: June 11, 2019

We're joined by Tal Moran, Chief Scientist at Spacemesh. This new consensus protocol is designed to run on home desktop PCs, filling free space on users' hard drives to create a Proof of Space-Time. The goal of this new blockchain protocol is to solve the issues with Proof of Work and Proof of Stake, that is, energy inefficiency on the one hand, and possible centralized plutocracy of rich validators on the other.

Topics covered in this episode:
- Tal's background as an academic and researcher
- The problems with Proof of Stake and Proof of Work
- What Proof of Space-Time is and how it works
- How miners use their hard drive space to establish proofs
- How randomness is generated in Spacemesh
- Spacemesh's DAG architecture and how blocks are added to the chain
- The tortoise and hare protocols proposed by Spacemesh
- The Spacemesh team and recent funding round
- The project's business model and roadmap

Episode links:
- Spacemesh
- Spacemesh white paper
- Tal's CESC18 talk in San Francisco
- Spacemesh on GitHub
- Spacemesh on Twitter
- Spacemesh on Medium
- Interchain Conversations Berlin event – use code EPICENTER for discounted tickets
- Cosmos HackAtom Berlin

Sponsors:
- Vaultoro: Trade gold to Bitcoin instantly and securely starting at just 1mg - http://vaultoro.com
- Cosmos: Join the most interoperable ecosystem of connected blockchains - http://cosmos.network/epicenter

This episode is hosted by Sebastien Couture & Friederike Ernst. Show notes and listening options: epicenter.tv/291
Transcript
Discussion (0)
This is Epicenter, Episode 291 with guest, Tal Moran.
This episode of Epicenter is brought to you by Vaultoro, the gold hedging platform for the crypto community.
Trade gold to Bitcoin instantly and securely, starting at just one milligram.
Go to Vaultoro.gold slash Epicenter to get early access to their V2 platform and to start trading.
And by Cosmos. Cosmos is building the internet of blockchains, an ecosystem where thousands of
blockchains can interoperate, creating the foundation for a new token economy.
If you have an idea for a dApp, visit cosmos.network slash epicenter to learn more and to get in touch
with the cosmos team.
Hi, welcome to Epicenter.
My name is Sebastien Couture.
And my name is Friederike Ernst.
Today we're speaking with Tal Moran.
Tal is the chief scientist at Spacemesh, and he's also a professor of computer science
at the Interdisciplinary Center in Herzliya,
in Israel. And Spacemesh is a project that's been on my mind for a while, because I met one of the co-founders at Devcon last year and he was so excited, because he was a big fan of the podcast. And, you know, he obviously wanted to be on the show and everything.
Or wanted Spacemesh to be on the show. And it was really early at that time. They were just kind of starting to, you know, put the ideas together. But since then, it has, you know, transformed into a project that soon will be on testnet.
And it's actually really interesting because it proposes a new way to do consensus that addresses a lot of the issues that people see in proof of work and also in proof of stake.
So we talked to Tal about a lot of the technical intricacies of their proof of space time protocol.
And yeah, what did you think, Friederike?
I thought it was really interesting how he brings together this proof of space-time, which in and of itself
is not completely new, with this directed acyclic graph topology.
And I thought it makes for a super interesting project.
And I am not surprised at all that they attracted massive investments from top-tier venture
capital funds in this space earlier this year.
Yeah, also they just have a really impressive team of like super smart academic cryptographers
that are making this happen.
I think that a good complement to the show is a talk that Tal gave at San Francisco Blockchain Week in November.
That talk will be in the show notes because if you're like me and you're more of a visual person,
you might want to look at the slides that he uses to sort of describe the protocol and how it works.
The other thing to mention is that this week, we are in Berlin at the Cosmos
Interchain Conversations event.
So that is happening on the 13th and 14th.
And we've been mentioning this for a couple weeks.
Now let me just give you links for that.
So if you want to sign up,
because there's still time as this is being released to sign up,
to the Interchain Conversations event happening at FullNode,
you can go to Epicenter.rocks slash Interchain Berlin.
And if you use the code Epicenter,
you'll get $65 off the tickets.
if there are still places left on that discount code,
because I believe we had some limited access codes there.
And then there's a hackathon happening on the weekend.
So on the 15th and 16th, again, at FullNode,
Cosmos Hackathon.
So if you want to work on a project and you have an idea,
and you want to come build it at the hackathon, to sign up,
go to Epicenter.rocks slash Cosmos Hackathon Berlin.
All those links and details will be in the show notes.
So without further delay, here's our interview with Tal Moran.
Hi, so we're here with Tal Moran.
Tal is the chief scientist at Spacemesh.
He is also a professor of computer science at the Interdisciplinary Center in Herzliya, in Israel.
And he joins us today from Israel to talk about Spacemesh and a new form of consensus mechanism that we'll learn all about in this episode.
Thanks for joining us, Tal.
Thank you.
I'm glad to be here.
So tell us a bit about your background.
And so you come from academia.
How did you come into the blockchain space?
I am an academic cryptographer by training.
And I was always interested in sort of protocols that have like new ideas and potentially
practical uses.
And basically I started working on this a few years ago.
I heard about, you know, Bitcoin.
And I have to say, in the beginning, I didn't understand what the excitement was about.
But once I started looking at the protocol,
I found some really nice, interesting theoretical questions there,
and this is how I got started: trying to solve the theoretical problems that come out of Bitcoin.
So you did a PhD with Moni Naor, who also worked on eCash with David Chaum.
So what was your PhD on?
So my PhD was fairly wide-ranging, but I think some of the more interesting parts,
definitely for a lay audience, are protocols that people can do.
I think my thesis was called Cryptography by the People, for the People.
So things like voting protocols where you want to make sure that your vote was counted
but still keep the vote secret.
And on the other hand, you don't trust computers.
So what can you do?
So these are also actually things that are based on some really, really nice ideas by David Chaum
that we extended; we have several papers in that area.
And also things like how you can use everyday objects to do cryptographic protocols.
Like we had cryptographic protocols with scratch-off cards and things like that.
So do the things you studied in your PhD exist in the real world, like in practical applications?
The voting protocols, yes. People are actually trying to put these into practice.
And there are several different ones.
There was actually a project that we did at the IDC called Wombat Voting that had cryptographic verification.
There were bigger projects.
David Chaum was involved in one in Maryland, I think Takoma Park.
I don't remember the exact name, where they actually did like a citywide election using verifiable voting.
So people are working on these things.
And I think now that voting fraud and attacks on the
voting system have become more talked about, these things might have a resurgence.
And what about your postdoc at Harvard? Can you tell us a bit about that? Yeah, so I worked with
Salil Vadhan there. And actually, some of the things that we did there, say, we had the first
protocol for proofs of sequential work when it still wasn't fashionable. We came up with the protocol.
This is together with Mohammad Mahmoody.
And it was a very complicated protocol.
I have to say, it was not really practical.
But last year, Bram Cohen and Krzysztof Pietrzak actually came up with a much, much simpler version of this protocol that won the Best Paper Award at Eurocrypt.
And this is the protocol we're actually using in Spacemesh.
Super interesting.
Bram Cohen, he's also involved with Chia, right?
He is, and also Krzysztof.
They're both involved.
Maybe we'll talk about this a bit later.
But you've been, for a couple of years,
you've been a professor in Herzliya,
and you started working on Spacemesh there.
Yes, that's correct.
So I actually started with an earlier cryptocurrency first,
called MeshCash, which had many of the same ideas,
but it was based on proofs of work.
And the original motivation was always to try to replace proofs of work with something else.
But there's just a lot of details there.
And proofs of work have this one giant advantage: cryptographically,
they're very easy to work with.
And they're self-contained.
They really are very nice in terms of proving their security.
So the first version said, let's do things one step at a time.
The first version just took the first step of solving the scalability problems
and some of the incentive compatibility problems that these blockchain protocols have
by going from a chain to a mesh.
And then the second step was taking the ideas from MeshCash and replacing the proofs of work
with proof of space time, which is basically replacing the resource that I'm using.
Instead of CPU, I'm going to use disk space.
And this adds a lot of challenges in terms of how we get things to stay secure and guarantee
consensus.
And that's what we solved, basically, when we wrote Spacemesh.
Really interesting.
So maybe let's talk about the proof of work and proof of stake and proof of space-time
part first, and then we can move on to the mesh part. Is that okay with you?
Sure, yes.
Fantastic. So, I mean, there are different issues with proof of work, one being that it's extremely energy
intensive. But you also feel that proof of stake isn't a worthy successor,
or isn't a good successor, for many reasons. Why do you think that is? Yeah, so I wouldn't say
that I'm like categorically against proof of stake. I think there are some very nice
protocols that use proof of stake, but proof of stake does have some major disadvantages
compared to proof of work. So one of them is this sort of circularity, right? Proof of work
is totally self-contained: I have a resource, I prove to you that I used the resource. In proof of
stake, I'm proving that I spent money that's in the system. But first of all, there's a sort
of circularity, because this is actually a resource only if the money is worth something.
But the money is only worth something if the system is secure.
So we have something that seems a little bit fishy.
It doesn't mean that it can't work in practice, but it already, it's like the foundations
are a little bit shaky.
The second thing is, because the whole system sort of certifies itself,
then you have these problems with how you prove security
or basically what your security assumptions are.
So again, here I'm talking about things that actually prove security,
which maybe we'll talk a little bit later,
why I think that's a critical thing to do
when you're designing cryptocurrency protocols.
But in proof of stake,
if you want to prevent this sort of alternate history attack
where suppose I wake up now after 100 years,
and I'm trying to find out what the current history is of the system.
Then if I have a proof of work, then there's an easy solution.
I can see how much work has been spent on each branch of the system,
and now I know what's the true history.
But in proof of stake, if, say, somebody manages to steal old keys,
so somebody manages to steal all of the keys at some very far back point in time,
then they can now fork the system and create a new history that looks completely valid
because the history only depends on who has which keys.
And so they can sort of fast forward this history to the current day.
And now I have two histories that I simply cannot distinguish between.
And so I have to trust somebody.
And this is a problem when what you want is totally decentralized
and the trust should also be decentralized.
And the way they solve this in the proofs of proof of stake,
they either need to add these trust assumptions like checkpoints,
like, okay, you know, we just know that, you know,
at this point in time, this was the right branch,
and we just will never switch to another branch.
Or stronger assumptions like things like you can erase your memory securely.
So if the honest users always completely delete their old keys and
switch keys all the time, so nobody can steal them after the fact, then this can help you prove
security of such a system. But these secure erasure assumptions are arguably not so reasonable
because it's very hard to actually securely erase data. And honest users, it's not clear that they have
an incentive to erase data because you cannot prove that you erased it. So, you know, we have
security assumptions that are more iffy. And then there's the final thing, which is this
sort of permissionless property.
So one nice thing about proof of work is I can start doing a proof of work without asking
anybody for permission.
I just start my CPU and it works.
In a proof of stake system, in order to join the system, I have to get stake.
And I can only get stake if somebody gives me stake.
So I need somebody's permission to give me stake.
So you can say, okay, this is just a technical, you know, definitional property.
But it's actually not completely technical because there's an attack that's
related to this. Suppose the adversary somehow manages to get a majority of the stake at some
point in time. It has 51% of the total stake in the system. It can now refuse to sell the
stake to anyone. And from now on till forever, basically, the adversary controls a majority of the
stake. And this is a problem, especially when systems are starting up. Because,
say, if the economy is worth $100 trillion,
getting 51% of the stake is a very expensive attack.
But if the economy starts now and it's just worth $10 million or $50 million,
then getting 51% of the stake might not be such an unreasonable attack,
especially if you're talking about things that might be of interest
to sort of nation state level actors.
The other problem is that this is undetectable.
So the fact that I have 51% of the stake,
it doesn't appear on the blockchain as, you know, TAL has 51% of the stake.
I can pretend to be many different people and they can still sell to each other.
So you don't see this like giant block of stake staying there.
It looks like the system is fine.
But at any point in the future now, I can decide to crash the system.
Right.
So this is something, again, is this an actual reasonable attack?
I'm not sure.
But I think it's enough of a worry that I wouldn't want, say, all of the world's economy
to be based just on a proof of stake system.
Interesting.
I had sort of heard of these attack vectors before, specifically this last one.
But the other two, so the trust assumptions that you have to make around checkpoints.
So you mentioned that in order for this to work, one would have to steal basically all the keys
from a particular point in the past.
So this is like the worst case attack, right?
There are weaker attacks that don't steal all the keys.
There are various ones; this is just an example of something you could do where, in order to
prevent it, you'd need some stronger assumption.
Okay.
But how likely do you think this attack is to happen?
I don't know.
I don't know.
And, you know, it might not be that likely, but part of the thing is that these are things
that are very hard to quantify.
And, you know, if you're basing your economy on it, I'd like not to have all my eggs in
one basket.
That's understandable.
I'm much more confident, say, about mathematical assumptions such as, you know, it's hard to find a collision in SHA-256,
than in this kind of assumption about how people behave when the system is starting,
because we have much less understanding of these types of things.
If you're holding a significant portion of your net worth in crypto, you're probably waiting for your portfolio to moon at any time.
But holding crypto doesn't mean you should be irresponsible in the face of volatility risk.
That's where Vaultoro comes in.
Vaultoro is the leading gold hedging solution for the crypto community.
And as a stable asset, trusted for millennia, gold is the perfect long-term hedging solution.
And at Epicenter, we've been using Vaultoro since 2014 to protect a portion of our company's assets against volatility.
Now, you might ask, why not use a stable coin, Seb?
Which is a great question. And don't get me wrong, stablecoins are great and a real benefit for crypto adoption.
But algorithmic stablecoins are still a very new and experimental asset type.
And some asset-backed stablecoins have been scrutinized for being under-reserved.
With Vaultoro, your gold is 100% insured and secured in vaults deep in the Swiss mountains, protected by Brink's.
Every single gram of gold is audited and holdings are made transparently available on their website for anyone to verify.
And most importantly, it's quite literally your gold.
You can choose to have it delivered to you at any time.
To learn more and to get access to Vaultoro's brand new V2 platform, which includes an interface overhaul and trading in Dash, Litecoin, Ether, and silver,
go to Vaultoro.gold slash epicenter.
That's V-A-U-L-T-O-R-O dot gold slash epicenter.
We'd like to thank Vaultoro for their support of the podcast.
So what you posit as a new mechanism is the proof of space time.
Can you explain what that is?
Yes.
So a proof of space time is basically a real physical resource.
Ideally, what you're going to be using is disk space.
So I'm going to be sort of filling up my disk
and then not using it for anything else for a period of time.
And the time here is important because you think of, say,
if I want to rent disk space, right, I pay by megabyte per month,
not just by megabyte.
So it's not enough to say that, you know, I have this much space
because if I can reuse the space again and again in short periods of time,
then in some sense my resource is not limited.
So I want to be storing space over time.
Unfortunately, we can't quite prove that that's what you're doing.
So formally, what we're actually proving is that either I'm storing space over time
or I'm doing a lot of work, of CPU work.
So why isn't this just a proof of work?
It's because in terms of incentives, it's actually a lot cheaper to store than it is to do the CPU work.
And we ensure that's true by making the CPU work sort of hard enough so that it
always is cheaper. And if somehow storage becomes more expensive, then we just make the CPU work
more expensive. And so honest parties will always just store the data because, you know,
it costs you one cent to store things, but it'll cost you $2 to recreate it. And the adversary,
our assumption about the resources is that this combination of, you know, CPU plus disk space
is still a minority. So in the white paper, I believe,
it's the white paper, or at least one of the places where you describe Spacemesh. It
talks about the assumption that it is unprofitable for a participant to sort of mine blocks
on like a cloud instance or on even a dedicated computer at home and that one can only be
profitable if they're using it on a computer that's being used for other things presumably. Why is
that? Yeah, I'm not sure. Okay, if the statement is that it's not profitable, that's a bit too strong. I think
the word used is unprofitable. Yeah. Okay. So maybe we should be more precise. We can't guarantee
that it's not profitable, right? Because profitable or not will depend on, you know, what the price
of the currency is and how fast it goes up and all sorts of things that we can't control and they depend
on all these economic factors that like they're not part of the technical design or the parameters
of the protocol. What we hope, right, the way the protocol is designed is that it should still be
profitable for you to do this if you have a home computer and disk that you already own for a
different reason.
Right.
So if you were talking about marginal cost, right, buying a disk for a home user is maybe actually
higher cost than buying a disk for somebody like Amazon or Google, or some large whale
who can have economies of scale and buy disks more cheaply.
And this is, you know, a
standard thing. So with any type of resource, usually if you have economies of scale, it gets
cheaper as you get more per resource. However, if you already have a disk for another reason,
and here we're using the fact that a lot of people already have disks. So for instance, I have
already for other reasons way before I started Space Mesh, a fairly strong computer that's
always connected to the internet, and it has four terabytes of disk, and I probably use two
terabytes.
And there are two terabytes there because when I bought the disk, I didn't know how much
I'd need.
And I think a lot of people are in the same situation.
So my marginal cost for using these two terabytes is basically zero.
It doesn't cost me anything.
I'm not buying a new disk in order to use space mesh.
And the idea is that because the marginal cost is low for a lot of people, then they'll be
able to join the mining system.
And what we'd like to see is not that there are no
whales, but that there's a long tail. So there may be a few whales, but there are also a lot of people
that are small, and the long tail means that if we take sort of the sum of all the storage,
it actually comes out to be a large fraction of the total system. And again, this is nothing
that we can guarantee, but this is like how we hope the system will develop, and we're
doing our best in terms of the design to help that happen.
So you're making the assumption here that people have computers with presumably a lot of unused disk space
and that these computers are on and connected to the internet all the time?
I mean, it's not an assumption.
But yeah, for this long tail to work, you need a lot of people who
are willing to have, or maybe already have, a computer connected to the internet and a lot of unused disk space.
Okay.
But so there's a sort of trend that, you know, most people, now I guess like professionals,
I mean, I don't know anybody who has a desktop computer.
Like most people that I know and myself, you know, have had laptops for years that are
on parts of the day, but mostly like, you know, off or in a backpack or like, you know,
shut down or even in sort of emerging economies, most people don't even have a computer.
They're using a mobile.
You know, as this trend continues and fewer and fewer people are
using desktop computers, there might still be like a, you know, a significant
margin of people who are using desktops. But as this trend continues, would Spacemesh continue
to work then? Yeah. So, like, you know, in the worst case, suppose there are only a thousand whales,
right? Spacemesh still works. It's not that, you know, Spacemesh will crash. Our security
assumption is that there's an honest majority, right? Bitcoin right now is like super centralized.
And it looks like it's still working.
The reason you want this highly...
We want it really for two reasons.
One is sort of this general fairness,
where we do want, you know, the everyday people
to be able to join the system.
But the second one, sort of the,
I think the most critical one for any cryptocurrency,
even if you don't care about fairness,
is you want...
Your security depends on having an honest majority.
And the more centralized you are,
the less reasonable this assumption becomes.
So if we have, you know,
800,000 miners, of which, you know, 400,000, or maybe, you know, 700,000, have a majority,
then colluding between 700,000 people is going to be really, really hard.
So this honest majority assumption becomes a lot more reasonable.
If you have 10 miners and they hold, you know, the whole system, then it's a lot more iffy.
But there are lots of, you know, in-between cases.
If you have a thousand miners, they're all very big, then maybe it's already fine in terms of like your trust in the system.
It's not something technical; there's no reason Spacemesh can't work with two miners, right?
The same reason Bitcoin can work with two miners.
It's just, do you trust that, you know, if there are only two miners, the system actually has an honest majority?
Okay, I see.
So, walk me through the process.
So I actually have two terabytes of unused disk space on a computer that is permanently connected
to the internet. What do I do now? So again, you will be doing this once our testnet launches,
right, or mainnet, depending on, you know, how adventurous you are. So you download the Spacemesh client.
And the first part is going to be filling your disk. So this part actually requires a proof of work.
I think the exact parameters aren't set yet, but think of something like, you know, two days of your
computer working and using your GPU and actually solving proofs of work to fill your disk.
And then once these two days are up, the Spacemesh miner goes into the background.
And it just listens to the gossip network and acts sort of like a full node on Bitcoin.
And once every two weeks, your node is going to create a proof of space-time,
or actually a version of it that we call a NIPST, a non-interactive proof of space-time.
And it's going to do this by basically reading your whole disk.
So once every two weeks, you have to read two terabytes.
So that could take you, I don't know, half an hour once every two weeks.
And then it publishes something that we call an activation transaction, which contains this proof.
This is, you know, everybody receives this.
It becomes part of the block mesh.
And at the time you publish this activation transaction,
it makes you active for the two weeks after.
So we divide time into periods of two weeks that we call epochs.
And so you publish something today.
It means you're active in the next epoch.
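The timing rule just described can be modeled in a few lines. The two-week epoch length is the provisional value from the conversation, and the function names are illustrative:

```python
# Toy model of the epoch rule: an activation transaction (ATX) published
# during epoch e makes the miner active in epoch e + 1, deterministically.
# The two-week epoch length is a provisional parameter, not a final one.
EPOCH_SECONDS = 14 * 24 * 3600

def epoch_of(timestamp: int) -> int:
    """Map a Unix timestamp to its epoch number."""
    return timestamp // EPOCH_SECONDS

def first_active_epoch(publish_time: int) -> int:
    """An ATX takes effect in the epoch after the one it was published in."""
    return epoch_of(publish_time) + 1
```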
And one of the interesting things here is that
when you publish your activation transaction,
you basically already know when you're going to be eligible
to generate blocks in the next epoch. So it's deterministic. It's not a lottery at all, right?
It's not like, you know,
you might get lucky and solve it. If you stored something for two weeks, then you can generate
an activation transaction. If you generated one and published it, then you will generate at least
one block in the next epoch. Interesting. So that is completely unlike, say, the Bitcoin or the
Ethereum network, where miners band together into mining pools in order
for people to actually generate some reward every now and then, right?
Right.
So I think this is a very interesting property and actually also really helps with this decentralization
because it's no longer rational to join a mining pool, basically, right?
Because people usually join mining pools because they want to get rewards
more often and with lower variance.
So it's a steadier schedule.
But now you're basically guaranteed.
You get rewards once every two weeks.
And there's no probability about it.
The probability comes in when within these two weeks
you're going to be generating a block.
So there is some randomization, but the fact that you're going to be generating a block
and that will be accepted is guaranteed.
So am I going to be allowed to generate
one block regardless of how much disk space I commit, or is it linear?
Okay.
So there are sort of, you know, two answers to this question.
It is linear.
So basically, the way we think of it is every unit of disk space, which I think
we're setting, at least initially, to be something like 250 gigabytes.
But, you know, these various system parameters are, you know, tweakable.
And part of the reason we're doing a test net
is so we can play around with them
and see what works best in various situations.
But let's think of it right now as, say, 250 gigabytes.
So if you have 2 terabytes, you have 4 units.
And each unit basically behaves like a virtual miner.
So you get four times, four blocks in every epoch.
But we do have optimizations.
So instead of generating four different activation transactions with four different proofs, you only need one.
If you generate multiple blocks in the same time period, then you can sort of in terms of the communication,
in terms of the computational complexity, you can just use one and just say that this actually represents four.
So there are various optimizations to make this more reasonable.
But in terms of the reward you get, if you have four times the disk space, you get four times the reward.
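A minimal sketch of that linear rule, assuming a fixed unit size; the 250 GB figure is the provisional value mentioned in the conversation, and the function name is illustrative:

```python
# Sketch of the linear eligibility rule: committed space is divided into
# fixed-size units, and each full unit behaves like a virtual miner that
# is guaranteed at least one block per epoch. 250 GB is the provisional
# unit size mentioned in the episode, not a finalized parameter.
UNIT_BYTES = 250 * 10**9

def blocks_per_epoch(committed_bytes: int) -> int:
    """Each full unit of committed space earns one guaranteed block per epoch."""
    return committed_bytes // UNIT_BYTES

# Four times the space earns four times the blocks -- deterministically,
# with no lottery, which removes the incentive to join a mining pool.
assert blocks_per_epoch(4 * UNIT_BYTES) == 4 * blocks_per_epoch(UNIT_BYTES)
```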
Okay, I see.
So let's go into, so basically, as it's becoming already apparent, there will be a lot of blocks generated, which kind of does not go together well with the notion of a standard blockchain.
And we'll get to that in a bit.
But let me ask you first.
So how does the protocol make sure that I've actually saved the data on my disk for this amount of time?
So if I say I've saved this for two weeks, how does it know
I've actually saved it for two weeks and that I didn't just put it there yesterday?
Okay, that's a great question.
So there are actually two parts of this.
The first part is what we call the proof of space time, which proves that I'm still storing the data.
So basically, a proof of space time has sort of two phases.
The first phase, which we call initialization, is what we said before about the proofs of work.
So you fill your disk with proofs of work, right?
This is the first phase.
This is something you only do once.
So even though there is a proof of work involved, if you keep running the same system for many years, you only did the proof of work once and the rest of the time is just storage.
Now, the second part of the proof of space time is what we call the execution phase.
So every two weeks, you're going to get a challenge.
We'll speak in a second about how this challenge is computed.
But it's supposed to be something that you couldn't predict.
So you don't know in advance what this challenge is going to be.
And in order to answer this challenge,
you either have to have the proofs of work on your disk
or you have to work again to recreate them.
But the thing is,
even though you could do everything with proof of work,
it's going to be a lot cheaper to store things, right?
So honest users will basically just store things
and prove that they still have them.
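The two-phase structure just described can be sketched as follows, with iterated SHA-256 labels standing in for the actual proofs of work; all names are illustrative, and the real Spacemesh construction is considerably more involved:

```python
import hashlib
import os

def initialize(seed: bytes, num_labels: int) -> list[bytes]:
    """Initialization phase (done once): fill the disk with labels that
    are expensive to recompute -- stand-ins for the proofs of work."""
    return [hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
            for i in range(num_labels)]

def execute(storage: list[bytes], challenge: bytes) -> bytes:
    """Execution phase (every epoch): answer an unpredictable challenge
    by reading the stored labels; without the data on disk, you would
    have to redo the initialization work instead."""
    h = hashlib.sha256(challenge)
    for label in storage:  # corresponds to reading the whole disk
        h.update(label)
    return h.digest()

disk = initialize(b"miner-id", 1000)       # done once, via proofs of work
response = execute(disk, os.urandom(32))   # repeated once per epoch
```

The honest strategy is to keep `disk` around, since reading it is far cheaper than re-running `initialize`, which mirrors the store-versus-recompute trade-off described above.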
And how do you make sure that I can't generate this data ex ante?
So how do you make sure that this time has actually lapsed?
Right.
So this is an excellent question.
This brings us to the second part.
So how do I know that I didn't just, you know, get challenges ahead of time or who generates the challenge, right?
So if I know that this challenge I couldn't have predicted it two weeks ago and, you know, two weeks have passed and now I have it, then it proves that I've, you know, stored this data for two weeks or recreated it, but again, this is going to cost more than storing it for two weeks.
So the way we generate the challenge is we use a proof of sequential work.
And basically, a proof of sequential work is our proxy
for elapsed time. So we also call it a proof of elapsed time. The reason is that,
you know, even though with a lot of hardware, you can speed up parallel work, if you have to do
sequential work, then the cost of speeding it up is much, much higher. And there's some things we
just don't know how to speed up that well. So what you do is you do enough sequential work
that it should take you about two weeks. And the idea is that you use, you know,
the previous proof of space-time as your challenge to this proof of sequential work.
And then you have to work two weeks.
And then the output of this sequential work is the challenge to the execution phase.
And because it takes me two weeks, then I couldn't have predicted it two weeks ago.
I had to have stored the data for two weeks.
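The chaining described here can be sketched as an iterated hash. Note this toy version is only a stand-in: a real proof of sequential work also supports succinct verification without re-running the whole computation, which this loop does not.

```python
import hashlib

def proof_of_sequential_work(seed: bytes, steps: int) -> bytes:
    """Toy proxy for elapsed time: iterate a hash `steps` times.
    Each step depends on the previous output, so the work cannot be
    parallelized away; speeding it up means faster sequential hashing."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# Chaining: the previous epoch's proof of space-time seeds the sequential
# work, and its output becomes the next execution-phase challenge.
prev_post_output = b"previous proof of space-time"
next_challenge = proof_of_sequential_work(prev_post_output, steps=100_000)
```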
So this proof of sequential work takes me two weeks to produce, but it runs at a fairly low intensity?
Or how is it better than conventional proof of work?
The other thing I wanted to say is this also seems to bring us back to proof of work, right?
So if everybody's running a proof of sequential work, everybody's running a proof of work.
So the nice thing here is that we don't care who generates this proof of sequential work.
Because you're not proving that you did work.
You're just proving that time elapsed.
And so it's enough that somebody in the whole network does a proof of sequential work,
one person can do it for everyone.
So instead of having everybody run their own proof of sequential work,
what we're planning is to have servers on the network provide this as a service. And because it
scales, right? It costs you, you know, one CPU for two weeks, not that big a deal. If 100,000
people are all paying, they can each pay less than a cent and you recover your cost. So the more
people join this server, the cheaper it is.
Basically this means you don't have to trust the server because the proof is self-certifying,
but there don't have to be many servers.
So we don't think that people will run their own servers.
There will be very few people who run this server.
They have like a fast computer, and maybe you'll use two of them just in case.
But the idea is that you're just going to get this proof of sequential work.
There will be one proof, or maybe a small number of proofs, for the entire
network.
Okay.
So I have this proof of sequential work now together with my proof of space time.
And that I can submit to be allowed to mine a block.
Is that correct?
Yeah.
So this, you submit in order to create your activation transaction.
And now once you have an activation transaction, we know that you are active, right,
in the next epoch.
And there's an extra thing there.
You have sort of this public key that you publish as part of this activation transaction.
And part of this public key is a key for a VRF, a verifiable random function.
And this together with the sort of the epoch number and some other details tells you
when in the next epoch you're going to be eligible to create a block.
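A toy sketch of that eligibility idea follows. A real VRF output is publicly verifiable against the published public key, which the keyed hash used here is not; all names and parameters are invented for illustration.

```python
import hashlib
import hmac

def eligible_slots(vrf_key: bytes, epoch: int, layers_per_epoch: int, blocks_owed: int) -> list:
    """For each block this miner is owed in the epoch, derive which layer
    it should be published in from a keyed hash of (epoch, counter).
    (Stand-in for a VRF: same derivation shape, but not publicly verifiable.)"""
    slots = []
    for j in range(blocks_owed):
        digest = hmac.new(vrf_key, f"{epoch}:{j}".encode(), hashlib.sha256).digest()
        slots.append(int.from_bytes(digest[:8], "big") % layers_per_epoch)
    return sorted(slots)

# Which layers of epoch 7 this miner creates its two blocks in
# (4032 five-minute layers make up a two-week epoch):
print(eligible_slots(b"miner-vrf-secret", epoch=7, layers_per_epoch=4032, blocks_owed=2))
```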
I'd like to come back to this idea that most honest users will store the data on their hard drive rather than do the work.
Now, if the coin itself, if the asset becomes incredibly valuable, and so economically, if it starts making sense for people to do the work rather than store the data on their hard
drive, you know, is that possible and is there a way to fix that? Otherwise, we just get back
to proof of work. You're right. That's a great question. So, first of all, you know, in terms of
CPU versus storage, right, the cost of the coin doesn't really come into it, right? Because
if the storage is cheaper, right, it's not like I get more coins by doing CPU versus storage, right? So
I always want to have the lowest cost to get the same reward.
So the reward is, you know, every activation transaction gets X reward.
And now if CPU costs me $10 and storage costs me $1, I'd rather use storage.
And if the reward goes up 10 times, I'd still rather use storage.
I'd just get more reward.
But it could be that the cost of CPU goes down, right?
Or the cost of storage goes up.
Or like so many people are mining Spacemesh that, you know, there's a shortage of storage in the whole world and everything becomes more expensive.
It could also be that there's like a competing coin that also uses a similar protocol and that, you know, you would want to be able to sort of mine both coins.
And so one, you would do the actual, like, storing the data on your hard drive and the other you would do the proof of work.
Okay, but again, this is, right, the proof of work is more expensive.
So it's always, you know, it's going to be cheaper if we build things correctly to do it, to use storage rather than work.
If you want to do two things, just get more storage.
It's still cheaper than doing more work.
But the way we handle the cases where the storage actually gets more expensive than the CPU,
remember, we're talking about like a two-week period.
And there's also something that I haven't said.
So we said there's a two-week period.
Every miner gets a block.
So what if there are, you know, seven million miners?
Suddenly, there are so many miners, the communication costs balloon.
So once we get, I think, above around 800,000 miners,
then either we have to let the communication costs go up,
or we can increase the epoch size.
So we can say instead of a two-week epoch, we'll have a one-month epoch.
Now we can handle twice as many miners with the same communication costs.
But now the storage costs twice as much,
because you need to store something for a month instead of for two weeks.
And so if before the storage maybe was less expensive, storing something for two weeks than doing the work, now the storage is twice as expensive.
You're storing it for twice as long.
So maybe suddenly it's not less expensive.
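The scaling trade-off is simple arithmetic; here is a sketch with illustrative numbers (the 800,000-miner threshold is the figure mentioned above, everything else is made up):

```python
def scale_epoch(miners: int, epoch_weeks: int, max_miners_per_epoch: int = 800_000) -> int:
    """Double the epoch length until the miner count fits: each doubling
    keeps per-layer communication constant and handles twice as many
    miners, at the price of each proof covering twice the storage time."""
    while miners > max_miners_per_epoch:
        epoch_weeks *= 2
        max_miners_per_epoch *= 2
    return epoch_weeks

print(scale_epoch(1_500_000, epoch_weeks=2))  # -> 4 (a two-week epoch becomes one month)
```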
So this is the spacetime version of difficulty.
Exactly.
So in this proof of space time, there is a difficulty.
What I said is we fill your disk actually with proofs of work.
So we have a very easy knob to turn:
we can make these proofs of work just more difficult.
And this says how hard it is to recreate the table.
And one nice property that we have,
so there are these sort of competing,
older versions called proofs of space.
The difference between proofs of space and proofs of space-time
is this technical definitional thing,
and if you want to know why I think the proof of space-time definition is the right one,
read the paper that's on ePrint.
But in terms of the constructions,
their constructions actually are also proofs of space-time in some sense,
but they don't have this difficulty adjustment.
So they have some other advantages, but they have this big disadvantage: if I want to
make the initialization harder, to basically make sure it's still rational to
store rather than to use CPU, then in their case you also have to make the verification harder,
or store more; you have to do something, and it doesn't work quite as well.
And in our case, we can actually just go over the proofs of work and make them harder.
This episode of Epicenter is brought to you by Cosmos, the internet of blockchains.
Cosmos is live and we couldn't be more excited to see so many projects already building on it.
Blockchain technologies are evolving fast, and development shouldn't be one-size-fits-all.
As a DApp developer, you need the tools that will allow your data to scale, grow, and evolve over time.
The Cosmos SDK is a user-friendly modular framework which allows you to customize your DApp to best suit your needs.
It's powered by Tendermint Core, an advanced implementation of the BFT proof-of-stake protocol.
Cosmos takes care of networking and consensus and allows you to focus on building your application in your language of choice.
Ethereum smart contracts will be supported soon, and the SDK makes it simple for you to connect to other blockchains in the Cosmos network.
If you have an idea for a DApp and would like to learn more about the Cosmos SDK, or if you'd
like to connect your existing DApp to Cosmos, visit cosmos.network/epicenter. For Epicenter listeners,
the Cosmos team will reach out to answer your questions and help you get started. We'd like to
thank Cosmos for their support of Epicenter. I had another question here with regards to possibly
attacking the network. And of course, this kind of maps on to the idea of 51% attack, but I don't
know if it makes sense here. So enlighten me on this. So in terms of Bitcoin, at some
point, it doesn't make any economic sense to use old hardware. So that's why people are
constantly upgrading their hardware because the old hardware is not energy efficient. And the newer
hardware also produces more hashes. But with storage, you know, it's just storage. So someone could
literally just buy up, you know, old hard drives that are being, you know, discarded from, say,
like institutions or like school systems. There's like an abundance of old,
cheap, like near zero-cost hard drives out there that one could amass and build like a massive
RAID with all these hard drives and just keep adding hard drives at basically no
cost without having to buy new storage space. How would one prevent like a sort of 51% attack
on the network? First of all, it's not no cost, right? There are two costs. One is that the
initialization actually requires proof of work. So adding
this space does cost you in initial work, which means that it's going to be pretty hard to add
a huge amount of space very quickly. Although, again, it could be doable. The second thing is that
when you say zero costs old drives, there actually is a cost. I think we did some like,
you know, back of the envelope calculations. And in terms of like actually getting things in
production and doing something that works, it's not clear that it's actually cheaper to use old
drives, because they keep failing and there are a lot of operational costs around using old drives.
Again, I'm not saying that it doesn't work, but it is a big cost.
It's definitely not zero.
And the second thing is, could it work?
Somebody with a large enough budget can always attack any of these systems, right?
If you have a large enough CPU budget or just, you know, cash, you can buy enough ASICs, you can attack Bitcoin.
Right. So right now for Bitcoin, this budget needs to be enormous, because Bitcoin already has a lot of budget in operation.
If you're talking about a smaller cryptocurrency, right, no matter what kind of cryptocurrency it is, because you can trade resources, money for resources, right?
If it's proof of stake, if it's proof of work, if you have a large enough budget, you can buy 51%.
So is it reasonable that you can buy 51% of the space time in the system?
I don't know. It depends on
how big it is. But I'm sure in the beginning, it's not so hard.
Right. So if we just start, I don't know, it's $20 million.
Yeah, if you have $20 million, you can, you know, take over the system.
One of the nice properties of proof of space time versus proof of stake is that even if you did
capture the system, over time as the storage grows, right?
So suppose you captured it now. You have $20 million and you captured
90% of the storage.
But in 10 years it will be, you know,
$1 trillion.
If you didn't continue investing,
and you didn't invest, you know,
$500 billion in those 10 years,
you no longer have 51%.
Right?
You've been diluted.
And there's nothing you can do to prevent that
except, you know,
continually buy more resources.
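That dilution argument is just arithmetic; a toy calculation with invented growth numbers makes the point:

```python
# A one-time attacker who stops investing gets diluted as honest storage grows.
attacker_storage = 0.90 * 20_000_000   # captured 90% of a $20M network, then stopped
honest_storage = 0.10 * 20_000_000
for year in range(10):
    honest_storage *= 3.0              # hypothetical yearly growth of honest storage
attacker_share = attacker_storage / (attacker_storage + honest_storage)
print(f"attacker share after 10 years: {attacker_share:.5%}")
```

With these (made-up) numbers, the attacker's once-dominant share shrinks to a tiny fraction of the total.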
And right, in that case,
there's nothing we can do,
nothing that I know of that any cryptocurrency
can do. If you have enough resources to always have 51%, then this is the basic assumption
of security for all of these systems. Okay, I have one last question about the security of this
proof of space time before we move on to the mesh, just in very practical terms. So basically
you say I have those two terabytes on my hard drive and I fill them with your proofs of work.
And then basically this proof of elapsed time happens. And depending on that, I get
asked questions of what's in position 1,713,
and what's in position 5, and what's in position 5,800,000, right?
So, and basically which positions I'm being asked about
depends on this proof of elapsed time
or the proof of consecutive work.
And it has an element of randomness, right?
And actually making randomness on a
digital system is incredibly hard.
So how do you go about that problem?
This randomness is not so hard to generate,
because here we don't need like true uniform randomness.
We just need unpredictability.
And the proof of sequential work basically guarantees us unpredictability.
Because if you could predict what the result of this proof of sequential work was,
you could solve it faster than two weeks.
Right.
So just by the fact that it is a proof of sequential work,
it means that you cannot, you know,
guess the result before two weeks are up. And we use this together with a hash function, so in our
case, say, SHA-256, which in terms of our theoretical analysis, we pretend this hash function
behaves like a completely random function. So obviously it doesn't behave like a random function,
right? It's, it has these very, you know, structural properties. But this is a very common assumption
in practical cryptographic protocols. And even though
in theory you can build these toy protocols that will work with a random function
but will not work with any actual hash function.
In practice, we don't know how to break any of the protocols that are secure in what we call
the random oracle model, when you take the random oracle and replace it with SHA-256.
So again, it's not a total proof of security, because maybe somebody will find some break
in SHA-256,
but if they do that, they will break so many other protocols
that this will be the least of our worries.
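Deriving the queried positions from the sequential-work output might look like this sketch, treating SHA-256 as the random oracle; the function name and parameters are illustrative, not Spacemesh's.

```python
import hashlib

def query_positions(posw_output: bytes, table_size: int, k: int) -> list:
    """Hash the (unpredictable) sequential-work output with a counter to
    pick k positions in the stored table, treating SHA-256 as if it were
    a random function."""
    return [
        int.from_bytes(
            hashlib.sha256(posw_output + i.to_bytes(4, "big")).digest(), "big"
        ) % table_size
        for i in range(k)
    ]

# Nobody could have known these indices before the two weeks of sequential work elapsed:
print(query_positions(b"posw-output-after-two-weeks", table_size=5_800_000, k=3))
```

Only unpredictability matters here, not uniformity: the prover can't precompute the answers because the indices depend on an output that takes two weeks to produce.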
Okay, thank you. That makes complete sense.
So let's move on to the mesh.
So as you alluded to earlier,
you actually end up with a lot more blocks than you would
on a typical blockchain.
How is this handled by the space mesh protocol?
Yeah, so we're aiming for,
we divide time into layers.
So we have a layer
say every five minutes. Again, the exact parameters might be tweaked, but this is like the current
starting set. And we'll have something like 200 blocks per layer. Again, this is randomized,
so there might not be exactly 200. It depends on who's actually online; maybe some
parties will not be online and won't create a block. And there's also this randomization of
when within the two-week period you generate blocks. It could be just randomly that some layers are
a little bit larger, some a little bit smaller. But this is sort of on average.
This 200 block per layer gives us some very big advantages and there are some disadvantages too.
So the big advantages are in terms of throughput, right, now we can handle a much higher throughput.
Because, I guess this comes from two things.
One is that because you're guaranteed that your block will get in, you don't have this huge disincentive to make your block larger, right?
In Bitcoin, for example, if your block gets larger, it takes longer for the block to be transmitted,
which means that if somebody else solves the proof of work at the same time, their block will get in first.
And so if you're in a race, then you want your block to be as small as possible.
And this creates this perverse incentive where you don't want to put transactions in blocks
because you want them to be first in the race.
And there's also these sort of limitations of the system where if you make the blocks too large,
it will just take them too long to be transmitted,
which means it will increase the chance of having multiple people
solve the proof of work at the same time or around the same time,
which means that the security of the system actually breaks.
So if the time between blocks is too short,
it's very short compared to the time it takes blocks to be transmitted over the network,
then you're no longer guaranteed that the longest chain rule will guarantee consensus.
Okay, so basically,
large blocks and long propagation times,
they foster a high uncle rate.
Exactly.
But if you actually mine many blocks at the same time,
how do you actually deal with conflicting transactions
that will invariably be incorporated into all of these blocks?
Right.
So this is where we come to the disadvantages.
So there are several disadvantages.
One of them is that it's actually much harder to prove that you can get consensus,
because now we have to agree about sort of which blocks are the right blocks, the blocks in consensus,
and which blocks are not. Say I published a block and pretended it happened, you know, 200 layers ago, right?
So like it happened last week or something.
And maybe it even looks valid, right, because I said, I'm guaranteed to generate a block in, say, layer 10, right?
Now several weeks have passed. I generated the block. I didn't publish it to anyone. And now I
publish it, you know, two weeks later. So it looks valid in terms of, you know, it was valid at
the time. Had I published it then, it would be okay. So now we need everybody to agree that
this is not part of our history, right? Because otherwise I could change history.
So the main sort of challenges in designing Spacemesh were exactly getting everybody to agree on
exactly which blocks are considered part of the history. So those are blocks that we call
contextually valid. A block is contextually valid if it's sort of part of the real history.
The transactions in this block are actually part of the history. And blocks that might be
what we call syntactically valid. So they were generated in some sense correctly,
but they weren't sent at the right time. Okay. So basically it's the distinction between
proof of existence and proof of availability, right? I'm not sure I understood that,
actually. Okay, so basically in the Ethereum space, that's also a very common problem.
Basically, there are two different, slightly different problems. So one is, I have something
and I want to be able to prove later that I had it at that point in time. So basically what I can do is
I can have a hash and I can post it to the blockchain, and then I can show you later that,
because I had the hash of this at this point in time, this is actually an incredibly
strong proof that I actually had the thing. Oh, yes. Okay, okay. So that's proof of existence.
and basically proof of availability is that not only did I have the thing,
but I also gave others access to this.
Yes.
Yes.
So exactly, this is the sort of questions we need to answer.
And you can think of it like in Bitcoin, which is simpler, right?
You have a block that, you know, solve the proof of work.
It's syntactically valid.
But only if it's on the longest chain, it's contextually valid.
Right.
So by looking at the block itself, you can't tell if it's part of history unless you also see which other blocks point to it.
So we have to do something similar in the mesh.
But let's put that aside for a second,
how we decide whether blocks are valid.
Let's pretend for a second that we actually know.
We have for each block,
everybody agrees whether it's valid or not.
We still have this case where people are publishing blocks at the same time,
right, like you said,
and they might each see different transactions
and the transaction might conflict.
So in a blockchain, we can sort of guarantee
that if I'm publishing a block,
the transactions in my block will never conflict
with something in a previous block or within themselves.
But in a mesh, you cannot guarantee that.
And so basically we need to sort of say that we allow conflicts, at least of some types, right?
If there's a conflict that happens because two blocks are in the same layer and they couldn't have known about each other, that's fine.
They can both be still contextually valid blocks.
But we have this sort of state evaluator
that decides which transactions are valid, or in the case of smart contracts,
what the current state of the system is after running the programs that are part of these transactions.
It runs over all these transactions.
If there are two conflicting transactions, say the first one will be valid and the second one won't.
This is, I think, maybe even easier to see in the smart contract type system, right?
So suppose you have a transaction that takes 100 coins out of an account
and another one that takes 100 coins out of an account,
and the account only has 100 coins in it.
So you can execute both of them, right?
The first one will take 100 coins out of the account.
The second one, everybody will agree
that it didn't manage to take the coins out of the account
because the account was already empty.
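A minimal sketch of such a state evaluator, assuming a plain balance-transfer model rather than Spacemesh's actual transaction format:

```python
def apply_layer(balances: dict, ordered_txs: list) -> dict:
    """Run an agreed-upon transaction order through a deterministic state
    evaluator: conflicting transactions may both sit in the mesh, but the
    first one succeeds and later ones simply fail as no-ops."""
    balances = dict(balances)
    for sender, receiver, amount in ordered_txs:
        if balances.get(sender, 0) >= amount:  # first spend wins
            balances[sender] -= amount
            balances[receiver] = balances.get(receiver, 0) + amount
        # else: everyone agrees the transaction failed; no fork needed
    return balances

# Two blocks in the same layer both try to spend Alice's 100 coins:
state = apply_layer({"alice": 100}, [("alice", "bob", 100), ("alice", "carol", 100)])
print(state)  # -> {'alice': 0, 'bob': 100}; the spend to carol fails deterministically
```

Because every node applies the same order to the same transactions, conflicts resolve identically everywhere without discarding any block.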
So how do you agree on the order of these blocks?
Once we agree which blocks are valid,
then actually agreeing on the order is very easy.
We just need to have a hard-coded mechanism.
And so one, like our initial mechanism is,
we'll just sort the blocks by their ID.
So everybody that sees a block knows its ID.
It's the hash of the block.
And so now we all agree on the order of blocks within a layer.
And we can then take, say, the transactions within each block, in the order of the blocks.
But, you know, there are various, like, subtle issues there with lotteries and things like that, where you might not want people to be able to make sure that their transaction
is first. And so this mechanism, how you decide order, we can switch it, right?
We can, as long as everybody agrees on it. This will be hard-coded in the system,
but as part of testnet, we might play around with different mechanisms.
And once we get to mainnet, there will be a fixed mechanism.
It could be the order of the blocks. It could be something that's a little bit more randomized,
that depends on exactly which transactions are there, and then we do some random ordering
based on that. But this is sort of an easy problem to solve when everybody agrees what's there.
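The initial ordering rule can be sketched like this; the block representation and hashing details are invented for illustration:

```python
import hashlib

def layer_order(blocks: list) -> list:
    """Toy version of the initial ordering rule: sort a layer's blocks by
    their ID (here, a hash of the block contents), then concatenate the
    transactions. Any node seeing the same blocks derives the same order,
    regardless of the order gossip delivered them in."""
    def block_id(block):
        return hashlib.sha256(repr(block).encode()).hexdigest()
    txs = []
    for block in sorted(blocks, key=block_id):
        txs.extend(block["txs"])
    return txs

layer = [{"miner": "a", "txs": ["tx1", "tx2"]}, {"miner": "b", "txs": ["tx3"]}]
assert layer_order(layer) == layer_order(list(reversed(layer)))  # delivery order irrelevant
```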
Okay. And do you get the block reward regardless of whether your block is actually included in the
final blockchain or not? No. So what is guaranteed is that if you're behaving honestly,
then your block will always be included. It will always be contextually valid. And so you will
you will get a block reward for every contextually valid block.
But the only way that your blocks will not get in is if you're behaving dishonestly.
So, so, you know, you're guaranteed a block reward.
So tell me, where do the hare and tortoise protocols, these consensus protocols that you
describe in the paper, how do they fit into this?
And, you know, at which point are you using the hare protocol and at which point
are you using the tortoise protocol?
Yeah.
So maybe the high level, the tortoise protocol basically gives us consensus and irreversibility.
So basically it makes sure that eventually everybody will agree on which blocks are valid.
And also that if we agree and we have high confidence that some block in the past is valid,
then it will stay that way, right?
History can't be changed.
And we call it the tortoise protocol because it's slow but steady.
It will guarantee eventually consensus for everyone.
It requires very few assumptions to work, but it takes a while.
So consensus might take many layers.
And here's where the hare protocol comes in. Basically, what the tortoise protocol guarantees,
I'll get into maybe in a minute how it does this,
but it guarantees that if all the honest parties start out agreeing,
then very quickly they come to a confident consensus.
And very quickly means within, you know, one layer if there's no attacks and on average two layers if there are attacks.
So we come to a confident consensus very quickly, but only if all the honest parties start out agreeing.
So why wouldn't they start out agreeing?
Because the adversary might publish blocks that are sort of on the threshold between being on time and not being on time.
So honest parties will always publish their blocks at the beginning of the layer.
They know ahead of time exactly which layer is supposed to be the layer which they publish.
They can create them ahead of time.
They can publish.
So it's always easy for them to work right.
And the layers are far enough apart that all the honest parties will receive all the honest blocks for sure from one layer to the next.
But if I'm trying to make consensus hard, then I can generate a block.
And then I'll publish it just half a minute before the end of the layer.
And now some of the honest parties will get it and some won't.
And maybe I can sort of fine-tune the exact timing so that, like, whatever fraction I want.
So half of the honest parties will think it's valid and half will think it's not.
Or, you know, one-third will think it's valid, and two-thirds will think it's not.
So when this occurs, then the tortoise will eventually guarantee that we have reached consensus, but it will take longer.
So we'll reach consensus quickly on honest blocks, but slower on dishonest blocks.
And what the hare protocol does is it's basically a protocol that we run off-chain.
So we run it on the gossip network, but it's not part of the sort of final history.
And it helps the honest parties agree right away about which blocks are valid.
So they're guaranteed that all the honest blocks will be considered valid.
And these blocks are in the middle, then they might be valid, they might not, but they will all agree on them.
And so now the tortoise protocol will guarantee consensus very quickly.
So you have instant finality on the clear-cut cases,
whereas the finality on not-so-clear-cut cases just takes longer.
Yes, except that because we want to have a programmable system with smart contracts,
it's not enough to just say, you know, this transaction is good.
It actually, what it does depends on whether another transaction came in or not before it, right?
So we actually need to have finality on an entire layer.
in order to say that we can now compute what the current state should be.
And what the hare protocol does is guarantee very fast finality on the entire layer,
even if there are some blocks that are sort of maliciously generated
and they're trying to prevent consensus.
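The tortoise's vote-counting idea, reduced to a toy rule for a single past block; the threshold and vote encoding are invented, and the real protocol is considerably more involved:

```python
def tortoise_verdict(votes: list, confidence_threshold: int) -> str:
    """Each later block casts +1 (saw the block in time) or -1 (didn't).
    Once the accumulated margin clears the threshold the verdict is held
    with high confidence; honest blocks get there within a layer or two,
    borderline ones keep collecting votes from later layers."""
    margin = sum(votes)
    if margin >= confidence_threshold:
        return "valid"
    if margin <= -confidence_threshold:
        return "invalid"
    return "undecided"

# 180 of 200 blocks in the next layer vouch for an honestly published block:
print(tortoise_verdict([+1] * 180 + [-1] * 20, confidence_threshold=100))  # -> valid
```

A block published half a minute before the layer boundary could split the honest vote near zero, which is exactly the case the hare protocol resolves off-chain.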
Okay, but that, so that was going to be my next question.
So basically, what kind of transactions does Spacemesh allow?
And you already said that you're aiming for smart contracts.
But does it mean that basically the reaction
time of the network is five minutes or whatever one layer is?
Yeah, so the latency of the network is not fixed.
It's probabilistic because it depends on how it's being attacked.
If it's not being attacked, which is probably going to be the case, or it's being attacked with a
very small percentage of the resources, then the finality is going to be around five minutes.
Again, it depends on the parameters.
So, you know, don't hold me to five minutes.
We'll play around with this.
But yeah, I can be pretty confident in the results.
And I think, again, we have this thing that's sort of like Bitcoin in the sense that, you know,
there's no absolute finality, right?
There's just levels of confidence.
So, you know, if you see something with one confirmation, then you know what the probability is
that an adversary with this much resources can reverse that.
If you see six, then the probability goes down, but it's never zero.
So we have the same type of guarantee, right?
But it goes down very, very quickly.
And because we have 200 blocks every time and we have the same type of analysis as Bitcoin, right?
The blocks are sort of voting for the previous layers.
Then we get it faster than Bitcoin, much faster.
What useful applications do you anticipate here?
Are you targeting a specific type of user or a specific type of applications for SpaceMesh?
No.
So we actually are a very general purpose infrastructure.
Obviously, you know, the initial use was probably going to be payments because this is sort of what the current use of the main use of cryptocurrencies is in general.
But our infrastructure is planned to support basically anything, you know, DAPs, registries, you know, whatever we can think of and especially whatever we cannot think of yet, but people will come up with.
Okay.
I mean, I think the main use for cryptocurrency at the moment is speculation, not payments.
Okay, well, the main use that people talk about, you know, is what it can do that right now we don't have good ways of doing without it.
Speculation, yeah.
Are you incorporating at least at the beginning or at some point are you thinking of having some sort of governance mechanism like on-chain governance?
or is this, so I know in the paper there's mention of this foundation,
you know, there's a foundation playing a role here.
So governance, it's a great question, and it's something that, you know,
we're actively working on.
We know that it's something that we should have in place, at least an initial version
of this when we launch Mainnet, but, you know, this is something that we're looking at
the options and what people are doing.
We're talking about it.
You know, if people have opinions about this, then, you know, we're very happy to hear
because things are not fixed yet at this point.
Okay.
And so I realize this is not necessarily your role,
but can you talk a little bit about, you know, the team
and, you know, the funding that was raised?
And also, you know, if there's a business model here,
you know, maybe not at the moment,
but sometime in the future, how do you anticipate making money as a company?
Okay.
So in terms of the team, so we're an open source project.
We have about 20 people
working full-time, of which I think the vast majority are developers writing code.
And we also have a research team, which is basically all
people in academia, so we all wear several hats.
So there's me, there's Ido Bentov, there's Barak Shani, who did his PhD in New Zealand,
worked on elliptic curves.
There's Julian Loss, who's finishing his PhD now in Bochum,
and there's Tal Malkin, who's a professor at Columbia.
So, you know, all people in academia,
and we're also, you know, this is a completely open project.
We're happy to work with anybody in terms of research.
Like I said, I mentioned, you know, Krzysztof and Bram's work.
In academia, we're not competitors, right?
We're all trying to get the best technology out there, and we're building on whatever we find that's the best that we can use.
In terms of the developers, they're an amazing development team.
I think one of the critical things in terms of actually constructing a working protocol is having a development team that can do this.
And I think it's quite rare to find people who can understand the theory deeply and are amazing
coders. And just as an example, they started working, I think, less than a year ago, and we
already have a working protocol, and testnet is starting in, like, I think, July. So, you know,
this is a very, very short time period for this kind of work, and it's definitely not a simple
protocol. So I'm, like, very impressed by the development team. I've worked with developers before,
and they really impressed me.
In terms of investments,
so we raised an initial seed round,
I think, I don't remember exactly when,
a year and something ago,
and about six months ago,
we had another $15 million investment
from sort of the top tier
blockchain and crypto funds,
like Polychain, Metastable, Slow Ventures,
Collaborative Fund,
and another seven additional top tier funds.
In terms of the business model, the idea is that the vast majority of the coins are going to go to miners,
but there will be, at least in the initial period, a small amount of pre-mining, less than 10%,
and, again for a limited time period, a tax on the block rewards that will go to compensate development and the investors.
Regarding this pre-mining, I suppose you will have some of those coins, so you're sort of betting on the future value of the coin.
And these transaction fees, or sorry, this transaction tax that you mentioned?
It's a reward tax, not on transactions but on the block rewards.
On the block rewards.
So once this runs out, what do you think is the future of the company?
I mean, the investors invested almost $20 million in your company, and I guess they're
betting not only on the future value of the token, but also on the company itself. What's the plan there?
So again, this is not really my area. I'm the chief scientist, in charge of the technical part,
the research, so I'm not 100% sure. As far as I know, the main return for the investors
is going to be the value of the coin; we're not planning on doing anything more complicated.
We're also ensuring that there isn't a large block of coins
that somebody holds ahead of time that could be used to manipulate markets and things like that.
We're dripping the coins out over time to help it stay decentralized.
Now, just before we wrap up, please tell us a bit about the roadmap.
I know you mentioned there's a TestNet coming up soon.
And also, where can people find Spacemesh, where can they read about it, learn more,
and maybe even get involved?
Yes.
So actually, we're happy for people to get involved.
Our website is spacemesh.io, and you'll find links there.
It's a very cool website, by the way.
Thank you.
Yes, I also liked it.
Our chief marketing officer is also amazing.
The website also has links to the TestNet.
We're going to be launching TestNet in July.
If you want to get involved there, then definitely follow the links.
There are various interesting things you can help with.
And if you're an amazing coder, we're actually hiring people in New York.
Our current client is built in Go, and we want to build a completely separate
implementation in Rust, so that we'll have something that can validate the spec
rather than relying on "the code is the spec."
In terms of the roadmap, TestNet is going to run for at least six months, I think,
and Mainnet we hope to get out by the end of the year.
But the whole point of TestNet is testing: finding bugs, finding problems.
Obviously, we're writing security proofs for everything we do;
we want to make sure the protocols are sound, but that's not enough to guarantee security.
So we're also going to have bounties for finding bugs,
we're going to run extensive testing,
and we're going to have code reviews.
We want to make sure the software is also secure.
Depending on what we find, this could obviously cause Mainnet to be delayed.
But ideally, by the end of the year we'll have an active currency running.
Great.
Sounds very exciting.
And yeah, it's a fascinating idea for a consensus protocol.
Give my regards to Aviv Eyal,
who is also on your team,
whom I've met before,
and I know he's a big fan of the show.
I will, thank you.
Cool. We look forward to the TestNet.
Thank you.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show
on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode
of the Epicenter podcast.
Go to epicenter.tv
for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter,
so you get new episodes in your inbox as they're released.
If you want to interact with us, the guest or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
