Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - IPFS, Filecoin and The Vision for a Decentralized Web (Part 1 of 2)
Episode Date: November 26, 2020

IPFS (InterPlanetary File System) is a fully decentralized distributed system for storing and accessing files, websites, applications, and data. Released just over 5 years ago by Protocol Labs, it has had a tremendous impact in the Web3 space as the standard for how blockchain projects store data. Filecoin is a complementary protocol to IPFS and was recently launched on mainnet. Filecoin is the economic layer which powers IPFS's decentralized file storage network. It enables users to store their files at hypercompetitive prices and verify that their files are being stored and replicated correctly. And it allows storage providers to sell their storage on an open market.

Juan Benet is the Founder & CEO of Protocol Labs, which has had a huge impact in the blockchain ecosystem as the organisation behind IPFS and Filecoin. Juan returns to the show after 5 years to give us an important update on the long-term vision to fund innovative technologies, how IPFS has evolved since it was created, and Filecoin as the foundation of a new decentralized cloud.

This is a 2-part series; in the next episode we deep dive into the technical aspects of Filecoin.

Topics covered in this episode:
- An update on Protocol Labs and how it has grown on the Bell Labs model
- How they bridge the gap between a research foundation (Bell Labs) and a company (Protocol Labs)
- The intersection between computer science and crypto
- How the Protocol Labs organisation is set up and how Juan has led it as a solo founder
- An overview of IPFS (InterPlanetary File System) and how it has evolved since our last interview
- An introduction to the Filecoin blockchain and its unique design
- How Filecoin is fundamentally different from other layer-1 blockchains
- How hash rates and storage fees affect the Filecoin blockchain consensus system
- The potential impacts of Filecoin on a global level

Episode links:
- Filecoin website
- Protocol Labs website
- IPFS website
- Episode #100 with Juan Benet
- Filecoin on Twitter
- Juan on Twitter

Sponsors:
- cPanel: cPanel's WordPress Toolkit is the all-in-one solution that makes hosting your website easier than it's ever been - https://epicenter.rocks/cpanel
- Algorand: Learn more about Algorand and how its unique design makes it easy for developers to build sophisticated applications - https://algorand.com/epicenter

This episode is hosted by Brian Fabian Crain & Friederike Ernst.

Show notes and listening options: epicenter.tv/367
Transcript
This is Epicenter, episode 367 with guest, Juan Benet.
Hi, I'm Sebastien Couture, and you're listening to Epicenter, the podcast where we interview
crypto founders, builders, and thought leaders.
On this show, we dive deep to learn how things work at a technical level, and we fly high
to understand visionary concepts and long-term trends.
If you like Epicenter, the best way to support us is to leave a review on Apple Podcasts.
And if you're on a Mac or iOS device, the easiest way to do that is to go to epicenter.rocks/apple. And if you're new to the show and not already subscribed, you can find
Epicenter on iTunes, Spotify, or wherever you get your podcasts. Today our guest is Juan Benet. He is
the founder and CEO of Protocol Labs. Of course, that's the organization that's behind
the IPFS project and Filecoin. Now, longtime listeners of the show will remember our interview
with Juan, which was about five years ago. It was on episode 100. And I remember recording that
and being so inspired by the idea of IPFS. First, by this concept of a fully decentralized,
always-available content and file storage platform, but also the notion of flipping the client-server
model on its head: rather than thinking of content as hosted on a server with an
address, you address that content directly by its hash. And of course, IPFS
also has built-in versioning. At the time, all these ideas were very new, and I think that they've
contributed a lot to how we think about decentralized storage, and in some ways, they've set standards
for decentralized cloud storage infrastructure and the functions that we expect from them.
Of course, IPFS has been immensely valuable to the ecosystem as one of the building blocks
for Web3 and DeFi, and is an integral part of many
projects in the space. But IPFS was just one part of the broader stack, and the economic
model had yet to be built. And that, of course, is Filecoin. Since we recorded our last
interview, there was, of course, the Filecoin ICO. That project was built out and is now live.
So Juan is back on the show to give us an update. And it was such an important conversation,
and there were so many things to cover that we're releasing it as a two-part episode.
Part one will focus on IPFS and the vision for a fully decentralized cloud storage infrastructure.
And part two, which is coming out next week, will focus more on Filecoin.
A little bit of housekeeping. As you know, I'm part of the Adan team.
That's the Association for the Development of Digital Assets.
We had the president, Simon Polrot, on a couple of weeks ago to talk about the upcoming MiCA regulation in Europe.
Adan is hosting a free webinar where you can learn all about the French
crypto regulatory framework.
France is a great place to live and it's a great place to run a business.
In fact, I live here.
I started companies here as well.
And the French regulatory framework is very favorable to crypto startups.
There are over 100 established crypto companies in France and many of them have benefited from
the ICO visa that is issued by the French financial markets authority.
So if you want to join that webinar and learn how to build a thriving crypto business in France,
it's happening on December 8th and you can register for free by going to epicenter.rocks/adan.
Speaking of Adan, we're actually building a brand new website on WordPress.
And what's perhaps been the most time consuming and frustrating is everything that relates to DevOps.
So I mean deployment, maintenance, backups, and database management.
Well, the WordPress Toolkit for cPanel is a tool that makes it easy for
developers to manage their WordPress infrastructure. I'll tell you a little bit more about that
during the interview. And a couple of weeks ago, our friends at Algorand hosted a great webinar
to help developers build sophisticated DeFi apps. I hope you enjoyed it. If you liked that,
I think you'll love their after-hours series where blockchain developers can meet with their team
and members of the community for informal conversations about Algorand. I'll also tell you a little
bit more about that later on. But for now, here's part one of our conversation with Juan Benet.
Hi, and we're here with Juan Benet.
He's the founder of Protocol Labs and of Filecoin and of IPFS.
And this is the second time we have him on.
He's actually been on before on episode 100.
So a long time ago, it's about five years ago.
And back then we spoke about IPFS and we spoke about Filecoin as well.
I mean, the white paper was out and the kind of vision of Filecoin existed.
And actually, I remember this was an awesome episode.
It was like one of my favorite episodes.
We ended up, at some point, being short an episode.
And then I think we rebroadcast that episode he was on.
So, but now we have him on again.
And, you know, long time has passed.
And Filecoin is actually live.
And there's been a huge amount of progress, and there's kind of this new burgeoning ecosystem arising there.
So, yeah, we're really excited to have Juan on and dive into Filecoin.
And, yeah, what's going on there?
So thanks so much for joining us.
Hey, thank you so much for having me. Last time was a blast. One of my favorite conversations ever.
So really excited for today and, yeah, looking forward to that.
Cool, awesome. One of the things I remember when we spoke last time, you spoke quite a bit about Bell Labs
and kind of like how that organization has inspired you in terms of like how you're approaching
Protocol Labs. And, you know, now a lot of time has passed and, you know, Protocol Labs has grown
a lot too. So I'm curious, like, how has that played out? And, you know, how have you kind of
continued pursuing this idea and this model of Bell Labs? You know, it's been a huge inspiration
for us and for a number of people that work at Protocol Labs to kind of create an organization
that can do research and development for foundational technology in the long term. It is
a thing to aspire to. I mean, you know, building something
of the nature of Bell Labs is like a multi-decade project.
That kind of institution takes 20, 30 years to build.
We think that we are on the path to creating a really important lab for the world,
and I think we've been doing pretty significant research and development across a variety of topics,
and we think that we're kind of on a good trajectory, but very, very, very far away from anything close to something as amazing as Bell Labs, of course.
But, you know, it's been a very important inspiration for how we've structured the organization
and how we think about hiring and how we think about goal setting and the structuring of problems.
And, you know, if you think about going from fundamental scientific development all the way to, you know,
prototyping, you know, getting to an important result, then taking that result and thinking about
what kind of technology can be built with it, to then prototyping the technology, getting
through multiple cycles of development
before you can reach something remotely close to a product
and then from there kind of building a thing
and shipping into the world.
And so innovating is a long kind of cycle
of going through and iterating through that pipeline
and we've structured Protocol Labs
as an organization to take that
from a first-principles perspective,
to think carefully about every stage
of development with every piece of technology
that we're working on,
and think about all the different artifacts that are made along the way,
and try to make those open source.
So think of things like the ideas that feed into the research,
documenting all of that and putting it into an easily accessible medium,
then the actual results, making sure to publish all of those.
Then from those results going to intermediate prototypes and so on,
open sourcing all the code for those,
and then once you actually get into building out a system,
making that a dependable piece of technology.
So that's something that we think we're doing
that definitely other labs from the past didn't do,
which is embrace open source fully and build things
where every single intermediate stage of research and development
is usable by other people.
One of the important things that any of these kinds of important labs
end up having to do is be able to innovate on multiple fronts at once.
Because the innovation cycles are quite long,
taking on the order of three to 10 years for each individual piece of technology,
you want to be able to innovate on multiple fronts at the same time.
And so that means you have to get extremely good at road mapping across all of these
different problems and think about resource allocation against that and who's going to work
on what problem and for what duration.
And then think about then what projects and problems you're building and kind of getting to
over time.
So all of that has been a phenomenal challenge for us.
And we've now worked on many projects that are kind of in these different stages of development. You know, we started with IPFS and then from there took pieces of IPFS and pulled them out and made them into their own projects, things like libp2p and IPLD and others.
And then built Filecoin, and on the road to that, ended up working on other projects like drand and so on.
So along the way, there's been research done as well on the primitives and the pieces of tech.
So that means a bunch of libraries got built, papers got written, a lot of collaborations with other groups.
So there's a lot of kind of broader ecosystem development in the entire pipeline that we've been super thrilled to be able to do.
And that's probably been the most fun part of all of this for many of us: getting to be able to do
all of those different pieces.
And I mean, the contributions that you guys have made with the ecosystem, I mean, they're foundational
to a large extent.
What I wonder is, how do you bridge the gap between running a research foundation and running
a company?
Because, I mean, those two are very different beasts, right?
So yes and no, in that I think most important labs of the type of Bell Labs, things
like Bell Labs itself, Xerox, SRI, and so on, and even today, things like X and
DeepMind and Google Brain and so on, a lot of these systems are corporate labs.
They're corporate research labs.
They're not academic labs.
And there's a very important difference that happens when you can do research in an organization
that can also do development and engineering, which is that you actually get to take the
innovation all the way from idea to building a thing and getting the thing used by people
and seeing all the problems you're going to run into and then do all the whole innovation cycle
in one institution. So much very important work gets done in academia that is
very difficult to translate into technology that people use, because it ends up being
kind of far removed from users. And so there's a lot of work in academia that, because
universities traditionally are very far removed, kind of intentionally set up that way, removed
from industry and removed from development, a lot of important work gets stuck just in the idea
phase and just in the kind of important result phase, but not actually getting to technology.
And so from that perspective, I think you have to get these corporate labs to work, and they have
to, in a sense, be straddling the world between research and development.
In the past it's been sort of a corporate structure. I don't know that a company per se is the best
structure for it, but I think neither is academia. And so there's a gap in between here. And there's
not that many institutions that have survived long enough to kind of create a class of entities for
them. And so it definitely is a struggle. I think one of the things you're getting at is
when you're building a company, you have to be very focused on kind of shorter term oriented goals.
thinking about product development and business goals and all that kind of stuff, and that can
come in significant conflict with long-term research-oriented goals. And so you have to get
extremely good at understanding the priorities and why you invest deeply in something and why
that longer-term-oriented view is going to be very successful for the organization long-term.
One of the things that helps us is that Protocol Labs itself as an institution, as a company,
is very mission-oriented, and there's a very strong mission. Our mission is to drive
breakthroughs in computer science, drive breakthroughs in computing technology in general. And that's a
thing that everybody who works at PL has a strong sense that that's what PL is about. And so that
helps everybody kind of straddle that boundary in being able to navigate that whole spectrum from
research to development to actually building a business. Because to really innovate and to
really push a breakthrough, it's not enough to build kind of a product that's marginal
and iterative; you have to do something fundamentally important, something that
pushes the envelope in some way.
And at the same time, it's not enough to come up with an idea and tell the world about it.
You actually, you know, a breakthrough in technology is not really a breakthrough until
it has actually gotten adopted by the broader world.
And so, you know, you can think about and theorize all kinds of things, but if you haven't
had a result that has transformed how
the rest of the world operates, it hasn't really been a breakthrough yet.
And it's funny because Bell Labs itself had a very strong kind of meme about this.
They used to describe it as you haven't innovated until you've sold it.
And the idea was an innovation, it's not really an innovation until the fundamental improvement
and result has been put into a product and sold into a market.
And if you haven't done that, then the innovation hasn't really happened yet.
And so having a very strong sense of that whole pipeline, like,
to really advance technology, to really push science forward, you have to think of the whole thing as an integrated system and not introduce a bunch of boundaries that are really fuzzy in reality, and to understand how to do the whole pipeline and get good at that. That's the sweet spot. And again, we are in the early days of building an institution like that. PL is about six years old now, and, you know, it's a baby relative to any of the important labs you'd
think about in the world. And so maybe we aspire to become a good institution like that someday.
And with a lot of luck and a lot of hard work, we'll get there. But yeah, we hope for that.
I've been building WordPress websites for over 10 years. And the most frustrating thing has always been
DevOps. I'm talking about deployment, maintenance, backups and database management. I've lost so many
hours of sleep doing WordPress infrastructure management. If you've been building websites for as long as I
have, you're definitely familiar with cPanel. They've been providing web hosting management software for
25 years. Well, they have a new product. It's called the WordPress Toolkit for cPanel, and I've been
given an opportunity to try it out. It's really cool. It makes managing your WordPress websites
really easy. You can manage multiple WordPress sites from one dashboard, and you can manage users
and databases too. And because all your websites are managed from a single interface, you'll be more
efficient. This is really useful if you're running multiple environments like staging and production.
The WordPress toolkit can also apply security settings and policies to all your sites at once
so you can harden and protect your company's website.
There's a free light version and a deluxe paid version that has added features like
website cloning and smart updates.
That's also great if you're running multiple environments.
Anyway, if you're doing anything with WordPress today, I would really encourage
you to check this out because it'll make your life so much easier.
To learn more about the WordPress Toolkit for cPanel and be informed when it comes out,
go to epicenter.rocks/cpanel. That's C-P-A-N-E-L.
We'd like to thank cPanel for their support of the podcast.
So you mentioned the mission and the way you phrased it was like breakthroughs in like computer science.
And of course in the in the crypto space, often people, you know, talk about mission and, you know,
related to ideas of like decentralization or maybe some sort of like sovereignty
for individuals.
So like, so how, like, how do you look at the Protocol Labs vision?
Like, to what extent, like, is it, or can you explain?
Like, is it breakthroughs in computer science?
Is there elsewhere?
Like, how is the intersection with the crypto space?
Yeah.
And, sorry, I misspoke a little bit.
It's driving breakthroughs in computer technology.
And the idea there is that it's both the science and the technology.
You have to do both the science of coming up with new results.
but also you have to go and build them into a thing.
And the way that this fits into the Web3 world for us is that in this time period,
in 2020 and probably for the last five to 10 years, Web 3.0,
what is now called Web 3.0, which includes
all the crypto space plus the kind of IPFS and dweb area of things.
This whole important development that's happening is a really critical part
of computing.
And from our perspective, driving
breakthroughs in this area is one of the
highest leverage things that we could be doing
in the world, because the
kinds of technologies that are being put in place
are upgrading
the computational fabric that we all use.
So if you think about the applications
that we use day to day, and you think about
the rights and properties that
these applications have and these systems
that we use and so on,
and you think about the utility
that crypto brings and the
entire Web3 world brings to the table, like that verifiability, making sure that when we enter
into a transaction or when a contract executes and so on, all of those interactions are verifiable
and correct and so on, and you're not just taking it on trust and on faith from other entities.
That's a fundamental computing technology improvement.
And I think in general, this gets misunderstood or sold short by kind of the broader computing
world. They tend to look at crypto and Web3
as, oh, it's weird and decentralization is weird and
crypto's weird and like, oh, it's just about money and whatnot.
But what's really going on is
the infrastructure layers
of the internet are changing: we're figuring out how to
introduce verifiability and rights into those layers
and putting in place systems that can
automatically scale with the right incentive structures.
And getting that right is
probably going to be one of the most important
transition periods for the internet and the web in the last 20 years and probably for the next
10 to 15. And so this is kind of why working in this area is kind of important for us. It's also a
little bit incidental: when I started Protocol Labs I was already working on IPFS and so on, so
it sort of followed that pathway, but we ended up discovering just so many different
super rich areas for computing. It's not just files and distribution
of information; there's all kinds of important primitives being explored here, from
new economies and new financial structures for organizing people and organizing work, to new legal
structures and new legal components for how you build groups of people,
like, yeah, how do you organize an entity? Well, the whole DAO space, for example, or, you know,
crypto-first, crypto-native organizations, is incredibly interesting.
And it tends to be talked about much more than experimented with.
And so I really think we need to experiment a lot more and try a lot more things,
probably more than we talk about it.
But I think it's one of the most important things going on.
It's really software eating the economy, software eating law.
And when you kind of put those things together,
and you think of the leverage that computing gives you, there's a bunch of
extremely powerful primitives being played with right now that are probably going to define
how humans operate for the next 15, 20 years. And a lot of that is getting kind of, yeah, tinkered
with today. So it's kind of like the personal computer tinkering that was happening, you know,
a decade or so before personal computers became kind of like the hot thing on the market.
And so for us, it's like, it's really kind of an amazing field to be in because it's so rich.
There's so many different areas, so many different threads to pull on, and so many
different possibilities that could turn out to be really important.
I think probably the hard part is some degree of prioritization and focus so that you can actually,
you know, work on in specific kind of discrete projects for the timeline that they need and you
don't get spread too thin, right?
So there's so much going on that it's very easy to kind of acquire a very large fringe of work and kind of make progress incrementally on all of it.
But it feels like you're going really slow because there's just your energy and effort is being divided across that large fringe.
So the way that we kind of navigate that is to plant specific flags on specific kinds of milestones, being able to achieve certain kinds of goals, and then working against those goals and focusing on them sequentially.
So at protocol labs, you're continuously pushing boundaries.
And you're doing that as a sole founder.
So basically, in my experience, it helps enormously to have someone to bounce ideas off of
and have someone equally committed to the same thing.
So being the visionary, it's actually pretty exhausting.
So having someone to run with makes it a lot easier because you don't
end up second-guessing yourself a lot, right?
So basically having someone who's equally crazy helps.
How has that worked out for you?
Because you're currently doing this alone.
I mean, not alone.
I mean, you have a team, but, yeah, as a sole founder.
Yeah, great question.
I mean, for me specifically, you know,
Protocol Labs as a team is incredibly brilliant and hardworking.
So I get to work with a ton of amazingly brilliant people
who are having a ton of ideas.
And in terms of vision for the long term,
maybe in the early days,
like in the first couple of years,
a lot of it was kind of shaped by me.
But now we've kind of set up the right structures
such that the range of projects that we're making
and the range of ideas that we're pursuing
and all that kind of stuff
are being driven by the team as a whole.
And so that's been phenomenal.
A really important transition for any organization
is to identify the
spots where you're relying on whoever started the organization or the leadership in general. You don't want to be
overly reliant on any one leader. And so you've got to spot those single points of failure and
like remove them. And so for us, it's been thinking of ways to help kind of foster projects
that other people are coming up with and other people are having ideas for. And so a lot of
Filecoin today, like the actual technology within it and the kind of side projects
that kind of helped build Filecoin,
is a product of a lot of other people.
And so really, you know, I get to work with
with super brilliant people who are kind of helping shape that vision.
Probably my role is shifting more towards,
and maybe not as much in the next couple of years,
but really kind of further out,
like maybe thinking five years ahead,
is going to turn much more into looking further ahead
and kind of thinking about ranges of fields
to work towards,
and how to build the organizational structures
to kind of yield that kind of really good R&D
across that range of things.
We also have a kind of like a creative structure here
where we think about individual projects.
Right now the way that we sort of organize ourselves
is that there are individual projects
and those projects have a project lead
and the project lead sort of defines the vision for that project in that year.
So there's like a larger stretch of work that we're going after
over, say, a decade, like larger scale goals like that.
And then project leads and the team in the project sort of get to figure out what are the important goals in that year to achieve and so on.
And that becomes kind of pretty scalable.
And it's a really good, really good setup.
But now I think part of what you're getting at is building systems like this, building companies, building larger projects, it's an extremely difficult endeavor.
And when people try to do it alone, you can get into all kinds of difficult challenges.
And it definitely helps a lot to have other people to kind of bounce ideas
around with and all that kind of stuff.
For me, I think it's been, because Protocol Labs is so, I don't know, unique is probably
the wrong word because everyone can easily say that their project is super unique.
It's more, Protocol Labs is so different in its goal set in terms of really trying to do something
very long-term oriented, very R&D focused, and at the same time have short-term-oriented
success with specific projects.
And it's quite difficult to kind of, I think, find
many people, or, you know, several people, that together could have stayed super aligned over,
you know, 10 years or so to kind of follow that path.
So I think for me, it's been an advantage to be a solo founder because I don't have
to argue with myself about, like, the kind of the trajectory there.
Though I think it definitely made a bunch of things challenging in certain periods of time, where I became very much a bottleneck or a point of failure for all kinds of important development for the organization.
And so, and this is advice for solo founders especially, but for really all founders and all leaders of organizations, I think it's about getting extremely good at spotting where you have your unique contributions
that maybe right now are really nice and really useful, but will quickly become bottlenecks
and problems and kind of trying to stay ahead of that so that you can then build that
into the rest of the organization and kind of either build systems against that or recruit for it.
Most of the time the challenge becomes: build a team that's better than you in every way.
So at every kind of scale of organization, like when you began as a single person or a small team
or even when you're like 50 or 100,
one of your biggest challenges is going to be,
how do you take that team and in, say, a year or two years,
make it dramatically better at everything.
Everything you're doing, you need to get better at it.
And so that means finding other people
who are better than you in a bunch of different vectors.
And so at that point is when you can start specializing, right?
So you might not get people that, you know,
maybe have as much breadth or are as generalist as maybe the smaller
team needs to be, but on the whole, the whole system kind of gets dramatically better at
tackling most challenges that you'll face. So that's kind of how we've dealt with it.
Cool. Yeah. You know, I'm excited to see what else you all will produce,
because, you know, as you both pointed out, there's been already quite a few different things.
Our friends over at Algorand are starting an office hours series. So every week or two,
Algorand will bring the team, partners, and community together for a live discussion
intended to provide you with all the answers and resources you need towards building useful,
meaningful blockchain applications. By joining office hours, you'll learn how to get started with
command line tools and use the SDKs and REST APIs to help you build applications for use cases
like crowdfunding, asset tokenization, supply chain management, and gaming applications.
Each office hour will start with a theme, for example, smart contracts or writing contracts
in Python, followed by an open Q&A and chat.
So if you're building on a blockchain protocol that has unfeasibly high or unpredictable
transaction fees and doesn't provide you the speed you need, or if you work at a large
enterprise or financial institution and are interested in learning how to build applications
that can integrate with your current technology stack, or whether you have no blockchain
experience at all and are just looking to take the first step into learning something new,
Algorand could be the right solution for you.
To learn more, visit algorand.com/epicenter
for developer resources and information
about their next office hours.
We'd like to thank Algorand for their support of the podcast.
I would say let's dive a bit into the meat of our conversation.
And, you know, we want to focus a lot on Filecoin.
Now, of course, Filecoin is very much also related to IPFS.
So, I don't know, if you have to give kind of like an introduction
of like what is Filecoin?
I don't know, maybe does it make sense to first start with like, what is IPFS?
Or like how do you tackle kind of that question?
We've gotten good at describing Filecoin without having to assume that you know what IPFS is.
But maybe I'll describe both, just for the audience here.
So IPFS, it stands for the InterPlanetary File System.
And it's a protocol to make the web peer to peer.
The goal there is to use content addressing to address and move around all the content in
the web.
So that means files, websites, all kinds of media, instead of using HTTP, which is a location-oriented
protocol, moving to using IPFS, which is a content-addressed-oriented protocol.
So that means hash-link everything or use name systems that map to hash links so you can
distribute everything with the same kind of integrity that the blockchains or Git have.
So you get to give individual websites the same kind of integrity as Git or blockchains and so on.
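The content-addressing idea being described is easy to sketch in code. This is a simplified illustration, not the actual IPFS CID scheme (real CIDs wrap the hash in multihash and multibase encodings), but it shows why a hash-based link carries its own integrity check:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Address content by the hash of its bytes, not by its location.
    # (Real IPFS CIDs use multihash/multibase; this is a toy stand-in.)
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    # Anyone who serves the bytes can be checked against the address,
    # so it doesn't matter *where* or *from whom* the content came.
    return content_address(data) == address

doc = b"some website or document bytes"
addr = content_address(doc)
assert verify(doc, addr)               # any honest host passes
assert not verify(b"tampered", addr)   # altered content is detected
```

This is the difference from a URL: the address commits to the bytes themselves, so trust in the host is no longer required.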
Right. And maybe I can briefly explain an example that might make this easy to understand, right?
Where we're seeing IPFS used here today. So, you know, as an example, in Cosmos you have on-chain governance systems, and people make proposals.
The voting takes place on the blockchain, but then often you link to some kind of document, right, that describes the proposal in more detail.
And that's almost always hosted on IPFS. And of course, that means, because
the link defines exactly the document, anybody can host it and anybody can kind of serve
that document. So you don't have a single point of failure there. And also the link that's on
chain, you know, exactly defines the document. So you don't have to trust someone. You know, if you put
a simple URL, you trust whoever's hosting that website, and they could potentially give you
something else, and then you're reading and voting on something else. So IPFS is like a perfect solution for this problem.
Yeah, and we think that the whole way that we move information around right now is pretty
broken because this location addressing makes information too dynamic and too liable to change
or disappear, right?
So if you see a URL, like a normal URL, you have no guarantee that what lies behind
that URL is the same thing the person who sent you the link intended you to look at.
And you have no guarantee that it's going to be stable at all.
And so having at least the ability to create immutable structures for information
is kind of a really critical component for computing.
And then beyond that, you can then build dynamic systems and dynamic applications
on top of that immutable log of versions.
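To make the "immutable log of versions" concrete, here is a tiny Git-like sketch in Python. It's an illustration only (IPFS/IPLD use their own block formats, and these function names are hypothetical): each version embeds the hash of its predecessor, so history cannot be silently rewritten.

```python
import hashlib
import json

def make_version(content: str, prev_hash):
    """Create a version record whose ID is the hash of its content
    plus a link to the previous version's ID (None for the first)."""
    record = {"content": content, "prev": prev_hash}
    vid = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return vid, record

v1_id, v1 = make_version("first draft", None)
v2_id, v2 = make_version("second draft", v1_id)

# Rewriting v1 would change its hash, so v2's "prev" link would no
# longer resolve: the chain makes tampering detectable, like Git.
assert v2["prev"] == v1_id
```

Dynamic applications then live on top: a mutable name simply points at the latest immutable version.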
So, yeah, again, it's very much Git-inspired and blockchain-inspired,
and that's kind of the way that IPFS as a project has come to
help out with all kinds of information distribution.
But one of the key parts of the protocol
that it didn't address, and this is an intentional division, is
how do you get people to store the content
in the long term, right? So
as a protocol, IPFS is pretty general.
It just says, if you are willing to move around the content,
then you do, and it sort of leaves it up to the user
and to a different layer to
decide why you're moving the content. And that's pretty important because people could have
tons of reasons for doing this. It could be altruistic. Or it could be, you know, I want you to look
at my data, so I'm totally willing to serve it to you. Or there's a community together forming around
keeping some important datasets or some important public data around. Or you could, you know,
pay people in euros or dollars or whatever. And, you know, that was important to separate out from
another really important component of a system like this, which is building a cryptocurrency-powered
storage network. And so that's where Filecoin comes in and connects. You know, now once you
have the content addressed by IPFS, then Filecoin is a protocol for, you know, incentivizing
the long-term storage and distribution of that content. So you can pay for it with a cryptocurrency
called Filecoin. And the protocol of the system
is meant to be this two-sided marketplace where on one side parties are bringing in storage
and distribution of the data.
And so these are the miners.
The miners are coming online and providing a lot of storage facilities and so on.
And on the other side are the clients that want to hire that storage and want to pay people
to distribute the content.
And so Filecoin solves this kind of market problem by using a currency as a medium of exchange
and a whole host of protocols to verify the integrity of the data long term,
get to high reliability,
create the kind of sets of incentive structures to achieve high reliability in the long term,
verify that miners are continuing to store all the data and all that kind of stuff.
So right now, for example, every 24 hours, all the data storage is proved.
So you can tell immediately if somebody stopped storing something,
and you can immediately recover data, right?
So people come online into the protocol, take their data, maybe store a few different copies,
and you can immediately, you know, overnight, at any point in time, detect
if people have stopped storing that data, and then kind of move to repair that missing copy
or store it with somebody else and so on.
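The detect-and-repair loop being described can be sketched roughly as follows. This is a toy model, not the real protocol (Filecoin's actual daily checks are on-chain Proof-of-Spacetime proofs, and every name here is hypothetical); it just shows the shape of the logic: each replica must show a fresh proof, and any replica that fails is re-created elsewhere.

```python
def check_and_repair(replicas, proofs, spare_miners):
    """Keep replicas whose storage proof checked out; for each replica
    that failed to prove, re-replicate the data onto a spare miner."""
    healthy = [m for m in replicas if proofs.get(m, False)]
    failed = [m for m in replicas if not proofs.get(m, False)]
    for _ in failed:
        if spare_miners:
            healthy.append(spare_miners.pop(0))
    return healthy

# minerB missed its proof window, so its copy is re-created on minerD.
replicas = ["minerA", "minerB", "minerC"]
proofs = {"minerA": True, "minerB": False, "minerC": True}
print(check_and_repair(replicas, proofs, ["minerD"]))  # ['minerA', 'minerC', 'minerD']
```

The interesting engineering in the real system is making the proofs cheap to verify and expensive to fake, which is what the next episode digs into.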
And, you know, the protocol itself has very strong incentive structures to, you know,
greatly incentivize miners to continue keeping all the data around in the long term and honoring those deals.
So you get into, this is kind of the verifiable property that I was talking about before.
If you today wanted to store data long-term in a network,
outside of the crypto space, you can hope for the best and hope that others are going to keep it.
Or you hope that you can set up an arrangement with one of the big cloud providers,
and that you're going to keep paying them, you know, hopefully, and that that link is going to survive.
But if either your bank account or your credit card stops paying,
or, you know, people stop wanting to store that data, that link will go away.
And so it's an important thing to establish that you can build a network
where you can set up very long-term-oriented deals for persisting super-valuable, super-important
data in the long term.
And that only requires cryptocurrency.
It doesn't require a bank account, doesn't require a credit card,
doesn't even require you to be a human, right?
So you can think of programs themselves hiring storage on their own, for whatever it is they're trying to do.
And so you can think of smart contracts as being able to use this storage platform.
So we'll talk about what happens under the hood of Filecoin in a bit, but basically, zooming out a little bit,
Filecoin is its own blockchain, very much like Ethereum, but it's kind of a single-use network for storing and distributing content.
So how does that change the architecture when looking at the differences between a general computation blockchain and something that is made for a very specific use case?
Yeah.
Yeah, great question.
I mean, I think today when you build a blockchain, you're building all of the consensus machinery and you're building transaction machinery and you have a currency and so on.
So it is definitely a superset of Bitcoin, right?
So you can use Filecoin for everything you could use Bitcoin for, and more.
But you can't do like general purpose contracts yet.
That is intended for the long term.
So it is pretty important.
It's pretty clear now that we need a pretty generic contract system,
even in the world of just kind of thinking about storage,
because you want to enable users to create many kinds of structures,
either on the client side when they're trying to hire
storage, or on the miner side, think of different
kinds of financial instruments being created
or economic arrangements between miners and
their communities and so on.
And also think about all the applications that people are going to
want to build and want to map against the storage.
So we're definitely thinking of the whole
application stack.
But we are much more connectivity-oriented than most
blockchains. I think right now we have
a lot of islands being formed that are sort of a consequence of the fact that, you know,
blockchain formats are different and consensus, there's a lot of experimentation with consensus,
and that ends up yielding different, all these different blockchains that don't speak to each other
that much. And that has yielded a world where you need to create an entirely different set
of projects and protocols to sort of interconnect them, right? It is a lot like what was going on in
the internet before the internet itself was coming online, which is people were experimenting
a lot with networks, and they were building different networks, and there were a lot of specific
protocols built, and a lot of different wires even, and different devices to speak to
each other and so on. And you ended up with, like, you know, this is about a decade of work
where tons of people ended up with many different computer networks around the world.
They didn't really talk to each other, and so people ended up having multiple terminals,
and then people had to go through the work of interconnecting the whole thing and creating the internet.
And we're about to kind of hit that stage, right?
There's already kind of a few projects that are doing this.
But what I think is more likely to happen here is that we're going to end up unbundling many of the blockchains themselves and separating the layers.
I think today it is surprisingly difficult to build an entire system that has very few assumptions and very little reliance on other systems,
and manages to build a solid crypto economy.
Meaning, you know, kind of the original intention for Filecoin was that it was just going to work on top of Ethereum. For a period of time, we explored doing this, where we could just write it entirely on top of Ethereum 1, have a set of contracts within the chain, and then separately have kind of a virtual chain on top.
But we ended up in a world of struggle because, A, the throughput we ended up with out of calculations just exceeded Ethereum's total bandwidth.
And that's, you know, one problem, kind of hard problem to deal with.
And then another problem was that it was very easy to come up with structures where, if you have one system riding on another system like that, you could start messing with the economy in one layer to affect the economy in the second layer.
And it was very difficult to create structures that were resistant against that.
And we didn't find really good ways to doing that.
So, you know, to put it in terms of today, imagine that you figure out a way to exploit a network, or an ERC-20 token or something, and then you take a huge flash loan with ETH to do that.
And then suddenly, like, it's very easy to manipulate the incentive structures and mechanisms within one network riding on another.
So you have, like, this crazy combinatoric explosion of mechanisms when everything is kind of in the same medium.
So, in order to kind of surmount these problems and look ahead, we ended up having to build our own entire chain, like our own layer-one chain, from scratch.
And that was a super useful thing for us because we ended up being able to optimize the hell out of all kinds of things for the use case.
But it's also super wasteful, right?
Because who needs another layer one blockchain, right?
Like there's way too many of them.
And especially when we think ahead to the future, like, there's going to be all kinds of
scalability improvements that are going to arrive, and you don't want 50 different teams building 50
different scalable blockchains. You want one or three teams building really robust systems and
getting a ton of input and help from a ton of other groups, and you want to arrive at a dramatically
better, you know, protocol that way. Another way to put it is, you don't want 50 different
internets; you want one internet, by definition. So I sort of expect that over time a lot of other
blockchains will unbundle and end up with like a different kind of layering. But that's probably
many, many years out because there's a lot of tech that needs to improve and a lot of protocols
that need to be built to address those problems I mentioned, which is how do you get scalability
across this wide array of systems? And how do you decouple the mechanisms such that you can
have high certainty that you can isolate the economic effects of one and you aren't accidentally
kind of bundling them into others. But in the meantime, that means there's a lot of work to do:
to get a whole contract platform into the
pipeline, which is kind of ongoing work that
will land in the future, plus build bridges to every other
major blockchain. We want to
make it easy for contracts on Ethereum to hire
Filecoin storage, like, directly and natively within Ethereum, without having
to think about having an application outside that
does anything complex, right? Like, ideally, you just kind of
call Filecoin within the EVM. And so there will be a set of
contracts within Ethereum that
set things up such that the Filecoin network can then operate on this.
And so that means there's a bridge there,
but now we have to do bridges across a whole bunch of different networks and blockchains.
And, yeah, super wasteful.
But I think in the long term, it's good now because it lets all the teams kind of explore
this very rich design space, figure out what the right structures are.
And there's going to be a very large period of convergence that's coming ahead where a lot
of these tech stacks are going to merge.
And I think, you know, there'll definitely be a lot of random
protocols that persist and continue.
Today, you can still find Gopher websites, right?
And who uses Gopher?
Almost nobody.
Everybody uses HTTP, right?
Those are two examples.
And even those two were late-stage hypertext systems.
There were a bunch of hypertext systems before
that kind of led to the development of the web.
And so likewise here, I think, you know,
we ended up building an entire blockchain from scratch.
There's a lot of important decisions that kind of led to it
and a lot of utility that we got out of it in the short term.
But in the long term, a lot of these systems will end up converging.
I don't know if I answered your question.
I guess your question was like what optimizations you get to,
and maybe I can describe some of that.
We get to really make sure that blocks are as full
with the key component proofs that we need as possible
and that we aren't rate limiting the growth of the network
because it's also carrying a bunch of transaction traffic
for a lot of other things.
But, you know, it does have, like, full ability to, you know, handle any transactions,
and Filecoin as a currency can be used. There's already wrapped Filecoin in
Ethereum, so you can now move it around in Ethereum as well and have it participate in a bunch of
other DeFi use cases there, and then make its way back to the Filecoin chain.
Yeah, I don't know if that's useful. One other thing, maybe, that is valuable is
we got to build this entire blockchain with kind of IPLD primitives from the get-go,
which means it's very easy to move around IPFS itself.
So you can take the entire Filecoin blockchain,
the state tree and the blocks and all of the artifacts,
and move them around IPFS and traverse them with IPLD as you would anything else.
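A rough sketch of what hash-linked primitives buy you (illustrative only; real IPLD uses CIDs and codecs like dag-cbor, and this toy store is an assumption of this example): once every node is addressed by its hash and links are just hashes, any structure, whether a chain's state tree or a file, can be stored and walked with the same generic code.

```python
import hashlib
import json

store = {}  # a toy content-addressed block store

def put(node):
    """Store a node under the hash of its bytes and return that hash."""
    cid = hashlib.sha256(
        json.dumps(node, sort_keys=True).encode()
    ).hexdigest()
    store[cid] = node
    return cid

def walk(cid):
    """Generically traverse any hash-linked DAG, whatever it represents."""
    node = store[cid]
    yield node["data"]
    for link in node.get("links", []):
        yield from walk(link)

# The same put/walk works for a block header pointing at state, a file
# split into chunks, or anything else expressed as a hash-linked DAG.
leaf = put({"data": "state-leaf", "links": []})
root = put({"data": "block-header", "links": [leaf]})
print(list(walk(root)))  # ['block-header', 'state-leaf']
```

That genericity is the "magic" referred to here: one transport and one traversal layer serve every data structure.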
That's a super, super powerful primitive.
A lot of magic that comes out of that,
that's probably not going to be apparent for a lot of people for a while,
but that's kind of like a super high utility thing
that comes from our work on IPFS
and our work of thinking through data structures there
and saying,
hey, when you're building a blockchain from scratch,
don't create a bunch of random formats
that are really hard to use and so on.
Try and make it definitely compact,
but web first and make it easy for compatibility
with a bunch of other systems.
And so that's been maybe something useful
that dropped out of that.
Cool.
And I think that was really cool to hear
like some of you thinking around this,
yeah, the FileCoin blockchain.
Now, a related question here is,
I mean, Filecoin, okay,
it's a layer-one chain, as you point out,
but it's also a very different layer-one chain, right?
Because you have these miners that are storing, you know, storing datasets and, like,
distributing, serving datasets.
But, you know, normally you have maybe a miner or a validator, and they're all kind of doing
the same thing.
But here, you know, you have different miners storing different datasets.
So, like, can you explain, what are the major differences in terms of how the
Filecoin blockchain is built and designed,
as a result of the file-storing function it performs, versus other layer-one blockchains?
Yeah, so I think maybe the biggest and most important piece is that the proof-of-work function
is a proof of useful storage function, as opposed to normal hashing, hash-rate-oriented work.
Yeah, this is like one of the highest-utility things here, and we really hope that other
blockchains start doing this: instead of using what is a super wasteful proof of work
of just kind of hashing a bunch of randomness to try and produce, you know,
the right value and kind of win the block and whatnot, instead try to do useful
work in that computation. And there's a ton of protocols that have tried to do this in the past.
It was kind of a big challenge for us to do it by having
that work function produce useful storage as a byproduct.
And that's one of the really important differentiators:
with this as a layer-one blockchain, all of the work and effort
that goes into maintaining the consensus of the protocol
has a bunch of useful storage backing it,
helping back that consensus up.
There's also elements of kind of proof of stake here,
where it's kind of like a little bit of a mix of both.
We ended up with kind of this sort of hybrid protocol
where the useful storage sort of
matches up over time, and for a section of the protocol is sort of the stake that miners have
and might lose for consensus attacks and so on. And this has been like a super,
super useful component because, you know, kind of what I think is the most important graph
in the entire crypto space is the hash rate of Bitcoin. I think like that is, and it's been
kind of like this astonishing graph over time. Like, I don't know what it is right now; I'm going to look
it up. You know, it's always kind
of astonishing to see just, like, the ridiculous growth in the hash rate. So this is, like,
insane. I don't know if this is true, but this graph claims that we're, like, close to 150
exahashes per second. It's an astonishing amount of hashes, an astonishing amount of electricity.
And, you know, one of my favorite things to do is, like, plot this graph over all time,
from, you know, 2008 to now, and just see, like, this insane exponential that is
totally relentless, right? One of the few things you can guarantee about, you know,
the entire crypto space is that the Bitcoin hash rate, you know, on a larger scale, over
the multiple-years timescale, is going to continue growing exponentially. And it is super crazy.
And, you know, I don't know what country it compares to now,
but, you know, last time I did this calculation, which was, I think, like, two or three years ago,
it was surpassing Australia. And, yeah, it's been growing exponentially
since then. So I don't know, maybe it's, like, getting close to China or something like that.
It's just an enormous amount of power that's going into one single process.
And the reason I think this graph is so important is that it shows the tremendous power of an incentive structure.
And in here, there's a very simple game where a lot of miners are competing with each other to win the next block.
And all they need to do is get a little bit more power than each other to increase their likelihood of winning the block.
And out of that very simple structure, you end up producing
this insanely huge and powerful computing network.
And so this is kind of one of the secret ingredients,
well, not secret, but, like, totally obvious ingredients, to Filecoin,
which is: you want to create this kind of an incentive structure
and couple it to the block reward.
And this is where kind of the proof of useful work consensus comes in.
You have to land all of that work in order to do this,
at least in kind of the original framings.
And you end up with, you can produce this kind of amazing exponential growth
in the adding of resources to a network, right?
And so this is what's kind of, like, behind Bitcoin.
In fact, Filecoin right now is, unfortunately,
rate limited by the blockchain bandwidth.
So even though it just launched,
we're already rate limited;
like, the growth of the capacity of the network
is now rate limited by the chain bandwidth.
And that kind of sucks.
We've got to get to scalability a lot sooner than expected.
I mean, it's a really great place to be. Like, we just passed an exabyte. And, like, that's a staggering amount of
scale. Yeah, going back to why this graph is so important:
when I first saw this graph and saw the power behind it, you know, you kind of realized that you
could use this kind of simple incentive structure to amass any kind of resource on a broader
network, and then use that resource to provide some useful service. And so that
means you can do this to storage, you can do this to computing, you can do this to bandwidth,
you can do this with a bunch of different things. And so for us, one of the key components of
Filecoin's design was, you have to have this right incentive, this correct incentive structure,
where miners are competing with each other to add significant capacity to the network.
Now, another whole other layer here was how do you turn that capacity into a very strong
incentive to store really valuable and useful data.
There's a whole world of other mechanisms I won't get into at the moment to really make sure
that that's actually useful valuable data versus kind of garbage data or something like that.
But yeah, this is maybe one of the important components that differentiates Filecoin as a layer-one
blockchain: the use of proof of useful storage to maintain the consensus and earn the
block reward, and to amass this really large capacity, a really large amount of storage,
that we can then use to store all the stuff that we want on Web3.
Funny thing about scale here, by our calculation, most of Web3 storage is a few petabytes.
And so, you know, we can store all of that hundreds of times over.
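A quick back-of-the-envelope check on that claim (the "few petabytes" figure is Juan's estimate; the exact numbers here are illustrative):

```python
# Byte-scale units, each a factor of 1000 apart (decimal convention).
PB = 10**15  # petabyte
EB = 10**18  # exabyte
ZB = 10**21  # zettabyte

# "Most of Web3 storage is a few petabytes" vs. the network's exabyte:
web3_estimate = 3 * PB     # assumed figure for illustration
network_capacity = 1 * EB  # the capacity milestone mentioned above
print(network_capacity // web3_estimate)  # 333: hundreds of times over
```

So an exabyte of capacity really does cover a few-petabyte Web3 hundreds of times, which is why the conversation turns to filling it with Web2-scale workloads.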
And so now we have this massive amount of capacity,
and now all of the interesting use cases can fill it up.
And we can turn that capacity to go after kind of more traditional Web2-oriented problems
and businesses and so on, where, you know, we can now go and make this a really useful network,
not just for the Web3 landscape, but for a bunch of applications that can now turn around
and use super useful, cheap storage.
Can I ask about the differences in consensus mechanism between proof-of-work
blockchains like Bitcoin and Ethereum 1, and Filecoin?
Because, I mean, so basically this is just a fairly random thought.
But you were talking about how the hash rate of Bitcoin,
has gone up exponentially.
And I mean, in part, that is because computation is getting cheaper.
But for the most part, it's actually because people have just added resources to this.
And if you look at storage, so basically, if you look at computation,
we're pretty much at the limit of what is possible.
I mean, obviously you can redesign circuits and so on.
But, you know, from a physics perspective, the MOSFET is probably as small as it's going to be.
Whereas if you look at storage,
you have so many orders of magnitude that, you know, you can still get to.
Building storage has gotten exponentially cheaper and will get exponentially cheaper for the foreseeable future.
Does this have any repercussions on the Filecoin blockchain consensus system?
Well, so there's a couple pieces here.
So one is the importance of the mechanism, I think, is that you can convince a lot of people to add resources to one-ne-
network. So I don't think that the Bitcoin hash rate has actually put a dent into accelerating
Moore's law, for example. Like I think, you know, Moore's already, you know, we're at the very
limits and now Morsella is about, or already, pseudo-Morsalana, really, Morsella, is about
increasing, continuing the cost reductions by getting, you know, many different chips to
talk to each other. Like many different kind of, we're going towards parallelism, not kind of
smaller and smaller transistor sizes.
And so the scalability will come from better low-level ways of computing
in these parallelizable systems, right?
So things like GPUs and TPUs and all that kind of stuff is, I think, kind of where
the frontier lies.
To your point, you know, when Bitcoin really kicked off into gear, its exponential
caught up with kind of the normal computing improvement rate,
and then after that just followed it. It didn't really push it forward, right?
So I don't think people working on Bitcoin are helping drive fundamental improvements.
What they may be doing is helping drive fundamental improvements for ASIC design for hash rates,
which is like not the most useful thing in the world.
You know, maybe useful, but like not that useful.
So that's a thing where, if you change the incentive structure here so that in order to get
an advantage over anybody else you
really do provide some vastly useful resource, that can be pretty
important. And that's one of the important design considerations. So, to your question around,
like, hey, is the work on Filecoin going to push on storage? And because storage is further away
from the theoretical limits, might we actually get miners and other groups pushing and advancing
the state of the art in storage technology? And would it be meaningful,
or could it potentially present a risk?
Like, you know, what happens if somebody does this in secret
and, like, you know, gets this breakthrough storage device
that can outpace everybody else?
I think right now, attacking any kind of consensus protocol
by trying to find a lot more resources this way
and then trying to turn that into a business,
it's not just the kind of resources you need,
the many billions of dollars of R&D,
but also the scale of manufacturing.
If you came up with a breakthrough device that could store things
at, you know, maybe a 10x or 100x better price point than the rest of the industry,
you'd probably have invested billions of dollars into doing that. And the only way you're
going to get your billions of dollars out safely is, well, you could probably end up mining
Filecoin with that, or you're probably just going to sell it to everybody else. What is most likely
is that you're going to set up a storage media company and you're going to sell it to everybody.
And the reason why I don't think any kind of
consensus attack is really viable here is that by the time you get into the tens of billions of dollars of investment
in something like this, marshaling that kind of resource, and then betting it all on the fact that a consensus attack might net you a benefit, is super sketchy.
Like, there are very few ways of trying to do a double spend with a large enough quantity of any cryptocurrency, say Bitcoin or anything like that, and trying to then somehow get away with it.
Getting away with it includes having to clear those transactions into other currencies that are not traceable where other people are not going to find you and so on.
So these kinds of consensus attacks at this scale are, I think, infeasible.
It's just straight-up infeasible for humanity to perform.
I think it is feasible when networks are a lot smaller.
If it only takes a few million dollars to mount these kinds of consensus attacks, totally.
At that point it becomes viable.
And then you can trade using DeFi rails into,
you know, Zcash or your favorite
private token and so on, and kind of
get away with it that way. But you really need to cut
down the cost of the attack to millions of dollars, or tens of
millions at most. Anything
larger than that, and your risk
profile just turns into, hey,
make a business. Like, become a useful miner
or, you know, sell it to other miners.
Like, why not? That's a much
safer pathway to
success. And you tend to find that
once you reach those higher amounts
of capital in the world, it tends to be
pretty short-term oriented and pretty rational, and if the rational incentives line up, then you won't get this irrational attack vector.
And so the investment amount required to attack needs to not get too
decoupled from the total value of the currency. Maybe that's a good way of thinking about it.
Sorry, it's a random tangent. This is a really cool exploration, and one question that kind of
comes up for me: I mean, it was interesting you brought up, you know, this useful proof of work, right?
Because, as you pointed out, in the past there were various ideas in that direction.
So this was like 2013 and even earlier.
There were ideas like, you know, Primecoin, where you would try to, you know, find new prime numbers.
And the idea was like, well, this has some kind of utility, as opposed to Bitcoin.
But of course, you still have the fact that, let's say Primecoin became very valuable,
then maybe a lot of people would expend a lot of energy into finding new prime numbers.
And, you know, there may be some utility, but from a social
perspective, it might be very much out of line. So I'm curious here, you know, already
Filecoin is actually very valuable, right? The price is very high. And if you took, like,
the total supply of Filecoin that will exist and, you know, the current price of Filecoin,
then it's, I think, like, number three on the market cap.
But that's sort of assuming. So Filecoin has a very slow release rate; it has, you know,
like, a half-life of six years, plus the baseline. So we won't even get to, you know,
half of that supply for the next six to eight years. Plus, you then have to think
about in six to eight years, where are all of those other currencies that you're kind of comparing
against? Sure, sure. Look back at Ethereum. What was Ethereum worth six years ago? Well,
zero. And it's worth, you know, X now. What is Ethereum's growth going to be?
And then map it to that. That's the fair comparison. And other people right now are just
taking the full supply because that's a very easy computation to do. Yeah, totally. Yeah, I totally accept
your point. Yeah. I mean, it's not so much my, like, I think the, the point I want to make or the
kind of question I have is, you know, is there going to be kind of like an alignment? Like,
I totally get like, FileCon will probably like increase the amount of investment potentially in like,
you know, how to store files efficiently and distribute them. And that seems to be like, you know,
a socially good thing. But I'm wondering, like, you know,
Do you think this, is there like a risk that, for example,
Fyrecoin would be so successful that there would be like so much invested in that
that is kind of like you're more than socially optimal?
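As a back-of-the-envelope illustration of the release schedule Juan mentions above (a sketch using a simple exponential-decay model with a six-year half-life; it deliberately ignores Filecoin's baseline minting and other details of the real schedule):

```python
# Cumulative fraction of rewards released under simple exponential decay
# with a 6-year half-life: fraction(t) = 1 - 2**(-t / 6).
# Illustrative model only, not the exact Filecoin vesting schedule.

def fraction_released(t_years: float, half_life_years: float = 6.0) -> float:
    """Fraction of the total reward pool released after t_years."""
    return 1.0 - 2.0 ** (-t_years / half_life_years)

for t in (1, 3, 6, 12):
    print(f"year {t:>2}: {fraction_released(t):.1%} released")
# At t = 6 years, exactly half the pool has been released.
```

Under this toy model it takes the full six-year half-life to reach 50% of the pool, which is the "won't get to half of that supply for six to eight years" point.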
You mean, what if the amount that's going into producing storage for the network
greatly outpaced humanity's need for that storage?
You're basically saying, hey, so what if
there's a lot of value here,
and then suddenly the utility of having all of this storage is
not actually useful to humanity?
And that's a market question.
And that's a really solid one, because, well, a couple of assumptions.
One is: what is the growth rate of human usage of data?
And that's growing exponentially, right?
The data generated is going to be in the zettabytes soon.
And we actually don't have enough storage to store it.
So most of the data generated in the world gets deleted.
You could maybe say most of the data generated in the world doesn't need to be kept around.
But it is kind of one of these classic business-school-style disruptive innovations,
where you greatly reduce the cost of something
and create a great capacity of something, and suddenly it starts getting used in a bunch of ways
that before were way too expensive. This happened to coal and a number of other
kinds of resources. And we're very far away from that: one exabyte is a lot,
and it's kind of amazing to hit one exabyte, because
that's suddenly competitive with the large cloud providers. We didn't
expect to be there so fast. We thought, hey, 100 petabytes
would be awesome, and we're already at an exabyte, and we're like, wow,
that's super fast. Awesome, that's a really great result. But we're still really far away
from a zettabyte. A zettabyte is a lot. And so we are very far away from matching
humanity's consumption, and humanity's consumption is growing way faster than our storage media is
capable of dealing with. So I don't think we'll ever really have as much storage media as we
really want to have. And storage media is not just about storing bits, right?
You can store bits in all kinds of systems that might be really cheap, like DNA and so on.
You want storage media that has a certain read-write cost profile and latency profile. And so
there are only so many hard drives and SSDs in the world. So part of what's going on with
storage media is that it's far away from its fundamental
limits, not because people are working really fast and haven't hit the limits,
but rather because we sort of stopped investing as much in hard drives.
So there's this really cool graph, similar to Moore's Law, called Kryder's Law,
which tracks the density of storage versus the cost reductions, right?
And you can sort of see where it goes.
And it tapered off in the last five or so years.
It's no longer growing exponentially.
And part of the reason for that is not that people stopped researching storage media
or hit fundamental limits;
it's that people are transitioning from hard drives to other media.
So now a lot of the R&D budget in the world is going to SSDs and flash
and NVMe and a bunch of other storage media that is faster in various ways,
where you have different tiers of storage for different kinds of applications,
and this gives you different performance profiles.
So unlike chips, where you can just focus on transistor count,
here you have multiple variables to optimize,
and so you end up with different tiers.
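To make the tapering concrete, here is a toy compounding calculation (the doubling periods below are illustrative assumptions, not measured Kryder's Law figures): capacity per dollar that doubles every N years grows by a factor of 2^(t/N) after t years.

```python
# Toy illustration of how a slower doubling period compounds over a decade.
# The doubling periods are assumptions for illustration, not real data.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Capacity-per-dollar multiplier after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

# Historical-style pace: doubling every 2 years -> 32x over 10 years.
print(growth_factor(10, 2))  # 32.0
# Tapered pace: doubling every 5 years -> only 4x over 10 years.
print(growth_factor(10, 5))  # 4.0
```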
And so I don't know when we'll hit the fundamental limits on any of them.
But for now, Filecoin is sort of geared towards the hard drive and SSD world.
A lot of miners are using SSDs, and a lot of miners are using hard drives.
And that's kind of a sweet spot.
In the future, we've already thought a lot about how to bring online
the ability to choose the storage media.
Like, when you're making a deal, you'd actually be
able to select: I want this on this type of storage media, so that you can get
guarantees about the latency and so on for retrieval and all that kind of stuff.
But that's kind of longer-term work.
It's hard enough already to get the incentive structures to produce this output.
Getting verifiability on those other kinds of storage media will require a bunch
of technological improvements.
But, you know, returning to your question: I don't think we'll ever really outpace
humanity's need for data storage.
The more data storage capacity we have,
the more we'll be able to cut into all the data that we delete today,
and maybe someday we'll come up with more uses for data storage.
I think Bitcoin ends up being a waste
because nobody can use the hash rate, right?
That hash rate is totally wasted,
and so when you look at the Bitcoin enterprise,
it's a lot of hashing machines doing nothing but dissipating energy,
and that is horribly wasteful.
But if you can turn that kind of mechanism
into producing a useful resource,
that becomes, I think, pretty interesting and valuable.
And, you know, like you're saying, as long as it doesn't outpace the human demand,
I think it's an important piece.
One other point here is that it's actually decoupled from the price of the token
in an important way, meaning it's a market.
At the end of the day, a Filecoin token doesn't guarantee you a certain amount of storage.
It's miners selling you storage at a price that they set,
and so the
Filecoin price kind of floats.
There's a coupling and a relation
when it comes to the fee structures.
There are a bunch of fee structures
in the network
that are denominated in Filecoin, and that
kind of ends up producing a link there.
But it's not about the
total amount of storage or something like that.
Cool. So you mentioned
a bunch of impacts, right, that
Filecoin has. So first of all,
this aspect that you have maybe a
different type of verifiability;
this idea that machines
could buy storage;
and then hopefully the impact of
Filecoin will be that storage
becomes much cheaper
and maybe more abundant.
So if you take those things together,
and let's assume that Filecoin
really succeeds at a huge scale
in the next decade,
how will that change the world?
What do you think are the most important impacts
it will have?
Yeah.
So maybe I'll describe this
in short, medium, and long term, right?
So in the short term, already with Filecoin today, with the capacity and the miner set that it has now, you can start taking all of these Web3 applications that right now link content to IPFS and so on, and back them all up in Filecoin.
So you now have a fully crypto-native, Web3-native way of storing and distributing those applications.
Today, a bunch of them end up using Amazon or Azure and other systems behind the scenes
in order to pin their IPFS data, right?
So maybe the front end is decentralized, but not fully.
There are a few applications out there that are fully decentralized,
where nobody is necessarily pinning other than community members.
And we'll probably jump in and talk about one in a moment.
And definitely there are other storage networks in crypto that couple with IPFS, and you can use them already.
But I think one of the important improvements here is that you can now address larger
amounts of data, really getting into petabytes of stuff, and use that in Web3 applications
in a way that you couldn't really do before, right? So today, you can build something like a
social network on tooling like Textile. I don't know if you're
familiar with Textile, but it's really useful tooling for building Web3 applications.
And you can build something like a social network or a video-oriented website or
something like that, a totally traditional consumer application, and deploy it in a
fully crypto-native, Web3-native way where the whole application, not just the front end
but the logic itself and all the data that users are going to generate, all of that
gets backed up and stored using crypto. And that opens the doors for a lot of
things that up until now weren't really possible, or weren't really done that much,
where there's all this kind of hybrid app stuff
where some of it is using blockchains
and a lot of it is still using the normal cloud.
And it creates these really wonky, lopsided structures,
because a bunch of the important facilities
are still happening in the kind of locked-down cloud,
and it creates this very large dependence
on those development teams.
Those development teams still have to run that infrastructure,
still have to exist, still have to have a bank account
connected to Amazon.
And ideally, you want a
system where the developer can build an application, give it to the world, and then users move
their application storage wherever they want, and they can control where that storage goes.
So you move your web experience to a world where developers don't have control over your individual
data and can't decrypt it, can't see it, and so on, but rather you control exactly
where it gets stored and you control the long-term outcome of that data,
and the application developers can move more towards building the UI,
developing it, and shipping it out there,
but not really banking on creating a data monopoly
that then becomes exploitative of the data.
It's really about breaking that paradigm,
and you can make it possible, because a lot of application developers
never really wanted to go into storing people's data,
or looking at it and trying to exploit it
using advertising or something like that.
They really just want to find a really good way to monetize
building the application itself,
so they can continue doing so.
And ideally, they don't want to pay any of the infrastructure costs that it takes to run an application.
Most developers are forced into advertising because, in order to run the application, they end up accumulating these huge infrastructure bills.
And the way to pay those huge infrastructure bills, and for all of the engineering talent and effort that goes into maintaining the infrastructure,
which is a sizable number of people, all of that then suddenly becomes: okay, well, the advertising model,
screw it, let's do that.
When in reality you could
move to a model closer
to the mobile app stores,
where there's a ton of apps that are just developed
by developers, released into the
world, and don't have any kind of
long-term relationship
between the developer and the user,
and no data-ownership
problem there. So that's
one of the things that I think we can achieve in the short term.
It's making it possible for application developers to build
consumer-oriented applications,
like social applications and video and all that kind of stuff,
that are fully Web3 native,
and that can start pushing the storage frontier
to a world where
users have full control of their data
and can direct where it gets stored and so on.
And that's possible now.
It's also possible to store large data commons.
This is another neat use case:
there's a lot of public data out there that today
somebody has to foot the bill for,
and somebody has to agree to
be the steward for, and they have
to sign up to either store it themselves on their own infrastructure
or hire a cloud to do it.
And in reality, it's a community-oriented
dataset. There are a lot of people that care about it.
There are a lot of people that want to store it, or a lot
of people that want this thing to exist and are willing to pay
for it. But today, the current
infrastructure of computing forces
there to be an organization that has to
steward the data. Ideally, you could be in a
world where you can, as an individual or an organization, create or publish data. And once you've
published it, that's it. That's the end of your relationship with the data, or with the community.
And if people want to keep it around, they can pay for it. And they can pay into a kind of pool of
resources and come together around that data set. So think of building a sort of public
record of really important data that should be kept around by a lot of people. And this could be
different kinds of data sets, or it could be all one big public record. You can now build a
data commons that's really a commons, where people contribute resources to keep it around
by individually paying the people that are actually storing the data, like the miners and so on,
without having to go to an intermediary steward that's on the hook for maintaining the data set.
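The mechanism that lets a dataset live on the network without a single owner is content addressing. As a minimal sketch, using a plain SHA-256 digest in place of a real IPFS CID (a real CID additionally wraps the digest in multihash/multibase encoding): the address is derived from the bytes themselves, so any party holding the same bytes serves the same address, and no domain or steward is needed.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address derived from the content itself, not from who hosts it."""
    return hashlib.sha256(data).hexdigest()

doc = b"a public dataset that nobody owns"
addr = content_address(doc)

# Any independent copy of the same bytes yields the same address,
# so retrieval doesn't depend on a URL, a domain, or a legal entity.
assert content_address(bytes(doc)) == addr
print(addr)
```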
So it's a really important distinction, because it means that you can have
content-addressed data on the network that nobody owns and that anybody can use. And that's
something that doesn't really exist today, because most data has a URL, and a URL means a domain,
and a domain means an authority, and an authority means an entity, a legally recognizable entity,
that sort of owns the data. And creating an avenue for exabytes of data to be fully in
the public domain and fully not owned by anybody, that's
something that I think wasn't really possible before and is now possible today. How people will use this, what kinds of data commons will exist, and on what time scale, we'll see. It always takes a while to move and rehome data. But that's one of the interesting and useful things. And in the more medium and long term, we really hope that we can have a pretty significant impact in shifting the kind of rights that people expect out of their computing infrastructure. So that means
it should be possible to build applications where the data that you add to the network
is really under your control, seen by you, and decrypted only by the people that you choose,
and you don't have to worry about, again, developers or other parties spying on it;
and where you can sort of expect that most applications you use follow this paradigm.
Today that paradigm exists here and there, but it's a very small minority.
And a big part of the reason is, again, that advertising model
and the current model of how data gets stored.
And so in the medium and long term,
it's going to take many years for us to get to this,
because there are all kinds of important DevX improvements to be made
and markets to win and all that kind of stuff.
But if we can have a significant impact
in upgrading our computing infrastructure, such that
most users in the world can sort of expect
that when they use an application
and they type into their phone,
the only people that are looking at that message are themselves
and the people they intend to send that message to,
or share that photo with, or whatever,
that would be a very important contribution to the world.
But again, it turns out to be incredibly difficult to achieve this.
And it's a lot about infrastructure, a lot about applications, and it's going to take a long time.
But, you know, the first step, making it possible, is sort of there now.
Now, winning in all the important markets, that's a longer time horizon kind of question.
And there are all kinds of important use cases, like we mentioned before, like having programs that can hire storage for themselves. That's kind of a longer-term,
like a very greenfield avenue, where we really haven't seen what people are going to build with
this kind of stuff. But you can start thinking of not just smart contracts, as in
Ethereum today and things like that, but fully fledged applications that are entirely managed by
DAOs, or aren't even managed by DAOs with people, but are just entirely
programs, or small AIs, that are running these things. So imagine if Decentraland was not run
or maintained by humans, but was just a protocol and a system
that had its own algorithms for deciding when to store data and what to store where and all that kind of stuff.
And you could design maybe the rules of the protocol, but then it becomes an important infrastructure layer.
And then you don't have to trust humans, again, to maintain the culture of the system in a certain way.
You can really turn it into a protocol, kind of like IP, and require that the rules function in a certain way.
You know, some of that becomes possible if you can enable programs to hire vast amounts of storage on their own.
That's sort of what I think we can achieve in the long term.
I mean, you can do it right now at smaller scales, but to really hit those sorts of scales,
it's going to require a lot of bridges to other applications and so on.
So I think there are going to be a bunch of uses that we can't really predict at the moment,
but that you can maybe see glimpses of.
We've theorized many different kinds of applications that you might build that would be self-replicating in some way, or would hire storage on their own, and whatnot.
And it seems like a pretty exciting field.
But this is the kind of stuff that, once you build it,
you then discover all kinds of things that people are now doing with it
that you may not have expected.
Good and bad.
It doesn't end here.
There's more to this conversation,
and you can hear it on Epicenter Premium.
As a premium subscriber, you'll get access to a private RSS feed
where you can hear the interview debrief
and get enhanced features like full episode transcripts
and chapters which allow you to easily skip to specific sections of the interview.
You'll also get exclusive access to roundtable conversations with Epicenter hosts and bonus content we put out from time to time.
Go to premium.epicenter.tv to become a subscriber and support the podcast.
