Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Dominic Williams: DFINITY – Smart Contracts and the Internet Computer
Episode Date: August 24, 2021

The DFINITY Foundation is a not-for-profit scientific research organization with a mission to build, promote, and maintain the Internet Computer. The IC is a layer 1 smart contract enabled blockchain which achieves remarkable scaling properties through native sharding. We were joined by Founder and Chief Scientist of DFINITY, Dominic Williams, who explained his deep support for smart contracts, gave us an insight into the Internet Computer ecosystem, and shared his visions for its future.

Topics covered in this episode:
- Dominic's background and an update on DFINITY
- The advantages of smart contracts
- The Internet Computer ecosystem
- How DFINITY compares to Ethereum
- The concept of sharding - subnets
- The tokenomics of the Internet Computer
- Vision for the Internet Computer's future

Episode links:
- DFINITY
- The Internet Computer Roadmap
- DFINITY on Twitter
- Dominic on Twitter

Sponsors:
- Gnosis Safe: Gnosis Safe is a smart wallet for securely managing digital assets and allows you to define customized access permissions. - https://epicenter.rocks/gnosissafe

This episode is hosted by Friederike Ernst & Martin Köppelmann. Show notes and listening options: epicenter.tv/406
Transcript
This is Epicenter, Episode 406 with guest Dominic Williams.
Welcome to Epicenter, the podcast where we interview crypto founders, builders and thought leaders.
I'm Friederike Ernst and I'm here with Martin Köppelmann as a special guest co-host.
Today we're speaking with Dominic Williams, chief scientist and founder of DFINITY.
DFINITY builds the internet computer, a layer one smart contract enabled blockchain that achieves remarkable scaling properties.
The way that the Internet computer does that is through native sharding.
They call it somewhat differently, but that's basically what it is.
The compromise they end up making is that the security guarantees are not as strong as on Bitcoin or Ethereum requiring some trust in a majority of validators.
But before we talk with Dominic about DFINITY, let me tell you about our sponsor this week.
Gnosis Safe is a smart wallet for securely managing digital assets.
What makes Gnosis Safe different is that it allows you to define customized access permissions.
Digital assets on Web 3 are usually controlled by a single private key, posing a challenge,
as private keys may get lost or compromised.
On top of that, users are forced to trust individuals holding single private keys to govern highly valuable digital assets and protocols.
Gnosis Safe enables users to control digital assets with much more granular permissions,
involving multiple private keys,
a subset of which is required for executing transactions.
These keys can then be stored on different hardware or software wallets
or even shared across multiple people.
Additionally, custom permission modules can be added
to enable even more use cases,
such as setting transfer allowances for individual keys
or automatically executing transactions decided on
by a snapshot community vote.
Gnosis Safe's extra layer of security and personalization
makes it the most trusted Web3 asset management solution for individuals,
teams and DAOs,
who already use it to store more than $57 billion
worth of fungible digital assets.
So that's Ether and ERC20 tokens.
Additionally, it can also store and manage NFTs.
On top of that, Gnosis Safe also provides a lot of opportunities
for developers to plug into the platform.
Developers can extend the Gnosis Safe interface with their own dapps and even build additional permission modules.
The ecosystem of Safe apps and custom modules extends the usability of Gnosis Safe as a portal to DeFi, financial tooling, organizational management and beyond.
Visit gnosis-safe.io to learn more and get started with setting up your own Safe.
Dominic, you are the founder of DFINITY and you have been on this podcast before.
I looked it up and it was almost five years ago.
So it's super good to actually have you back.
Seeing that it's been quite a while,
I think it warrants a fresh introduction.
So Dominic Williams, who are you?
Well, firstly, it's very good to be back.
I think it's just over four and a half years since I was last on your show.
And, you know, I'm working on exactly the same thing today that I was then.
Back then it was just called DFINITY.
Today, the network is called the internet computer,
and the foundation is still called DFINITY.
And of course, DFINITY stands for decentralized infinity.
You just, you know, remove some letters and stick them together,
and you get the word DFINITY.
And the objective, which was really first formulated in 2015,
was to really implement this idea of a world's computer,
a blockchain that would be fast and have infinite capacity,
or at least its capacity could be scaled without limit as needed.
And, you know, that's what we produced.
It was a tough job.
It took many years.
It's probably in truth, several years more than I expected.
But, you know, I'm pleased to say we're three months into the fully operational
public internet computer blockchain network and it's going very well.
So, I mean, it's been quite a while.
So what was so challenging about it?
Well, I mean, there's another way of looking at that, right,
which is why do no other blockchains scale as yet,
even though everyone recognizes that it's desirable to scale the capacity for smart contracts?
But to date, nobody can scale smart contracts;
blockchains are slow and inefficient.
And there's some other things that the internet computer can do that other networks can't match
yet, such as serving interactive web content.
So smart contracts on the internet computer can actually service HTTP calls, which gives you
a measure of the enormous progress that's been made.
And to deliver something like this, you have to rethink blockchain
architecture and blockchain science from the ground up.
And you also actually have to develop a lot of novel cryptography.
So you can't just go into the existing cryptography toolbox and say,
I'll take this and that and plug them all together and produce the internet computer.
You actually have to develop novel cryptography.
And if you want to do that, you have to build a team of eminent cryptographers who can
create the math.
then you have to find engineers who are capable of implementing the complex schemes they produce and so on.
So even building a research and development organization capable of implementing this kind of thing takes years.
But you also, of course, have to have done the research and development too.
And so that's why it's taken so long.
So maybe let's talk about what DFINITY set out to do.
So let's talk about what you mean when you say internet computer,
and what's wrong with the internet as it is and what's wrong with Web 3?
Well, it's not that there's anything wrong with it.
It's just that, you know, we believe there's a lot more potential that blockchain can unlock.
So I believe in this thing called blockchain singularity.
And, you know, I guess I sort of started out on this path back in 2014, you know,
when I heard the expression World Computer from Ethereum,
folks and wow, you know, wouldn't it be fantastic if there really was a world computer and
everyone could build everything on it? And, you know, it inspired me a great deal. And then
on top of that, you know, I began to think very hard about the nature of smart contracts.
And I came to the conclusion that smart contracts are in fact a very new, very novel and
massively superior form of software.
And I realized that, given the advantages of smart contracts, if you could remove the
limitations of their implementations at the time and today outside of the internet computer,
then eventually everything would be rebuilt and reimagined using smart contracts and run
entirely from a blockchain.
So, you know, I decided to make it my mission to implement this world
computer, and not something, you know, I mean, like Ethereum is today, which can do sort of a handful
of transactions a second, but a real world computer that can handle, you know, if necessary,
billions of transactions a second can run efficiently, and quickly scale its capacity,
upon which we could re-implement absolutely everything, and that's what I mean by blockchain
singularity. So if you think about smart contracts for a moment, outside of the context of
legacy blockchains, which are very limited.
First of all, of course,
smart contract software runs on an open public network,
which in itself's an advantage, actually,
much better to run on a public network
than on Amazon Web Services or Google
or Microsoft Azure or wherever it is.
You become a captive customer.
So that's the first advantage,
but there are even more powerful advantages.
Smart contracts are tamper-proof. You can't hack a smart contract. Well, you can attack a smart contract, but it will always run as
written. A smart contract gives you the guarantee that the logic that you've created will always run
against the correct data. And of course, you can't encrypt a smart contract with ransomware.
So this is going to become increasingly important because traditional IT is in the process of a rolling
meltdown. You can't make it secure, you know. And we see these
problems. For example, in recent months, the Colonial Pipeline hack in America cut off the East
Coast of America's oil supply, the gas refineries ran dry, and there were these huge
thousand-car tailbacks with mothers with young children sitting in line trying to get some gas for
their car. That was the result of, you know, the pipeline company's server machines
being infected with ransomware and encrypted, and eventually they were decrypted, at least after some weeks,
in exchange for a Bitcoin ransom. There was the SolarWinds hack; in that hack, pretty much worldwide,
every imaginable form of confidential content was stolen and put in the hands of hackers on an
absolutely vast scale. So there's this rolling meltdown in security and it just gets worse and worse.
And inevitably, the only way around that is to,
move to blockchain where you can build with smart contracts, which are tamper-proof.
Because traditional IT, whenever you build something, it always starts off completely insecure.
You know, you start off with a web server and a database and a this and of that.
And then, you know, you try and make it more secure by surrounding it by firewalls and having
a security team. But that's the wrong way around.
If you start off with insecure systems and you try and make them secure, sooner or later, something
will go wrong and the hackers will get in and encrypt your
systems with ransomware or steal your content.
So, tamper-proof. And of course, smart contracts are unstoppable.
I mean, the internet was designed to withstand a nuclear strike.
The internet computer blockchain is also designed to withstand a nuclear strike.
So that's a fantastic property.
The other one people miss, which I think is probably the, perhaps the biggest, is
smart contracts are composable.
So every smart contract can plug into every other smart contract, which creates immense
network effects.
And a smart contract is both static software
and dynamic software. So a smart contract, if you like, is static software like the Word .exe file
that you see on your C drive. But it's also the running Word program in which you're editing a document.
It's both of these things simultaneously. And what that means is you can assemble running systems
in the same way that you used to assemble static software. So in the same way that you built
software, static software from software libraries, you can now assemble, compose running systems in
the same way, and every smart contract can connect to every other smart contract. And a smart
contract can be part of multiple systems at once. So this is a simply immense advantage.
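The composability and actor-style call model Dominic describes can be sketched in a few lines of Python. This is purely a toy illustration (the "canisters" `Ledger` and `Exchange` and their functions are invented for this sketch, not real Internet Computer APIs): each canister exposes async functions, one canister composes with another simply by calling its functions, and several calls can be outstanding at once.

```python
import asyncio

# Two hypothetical "canisters": each is an actor exposing async functions.
# On the real network, calls would be serialized into messages and routed
# between subnets; here asyncio stands in for that machinery.

class Ledger:
    def __init__(self):
        self.balances = {"alice": 100, "bob": 50}

    async def balance_of(self, account: str) -> int:
        await asyncio.sleep(0)  # yield, as a real cross-canister call would
        return self.balances[account]

class Exchange:
    def __init__(self, ledger: Ledger):
        self.ledger = ledger  # composability: plug into another canister

    async def total(self, accounts):
        # Several inter-canister calls can be outstanding at once.
        results = await asyncio.gather(
            *(self.ledger.balance_of(a) for a in accounts)
        )
        return sum(results)

async def main():
    ledger = Ledger()
    exchange = Exchange(ledger)
    return await exchange.total(["alice", "bob"])

print(asyncio.run(main()))  # 150
```

The point of the sketch is the programming model: `Exchange` never knows or cares where `Ledger` runs; it just calls its functions and awaits the results.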
Let me jump in. So I think the listeners of our podcast are completely convinced of the usefulness
of smart contracts. And I think also Ethereum has demonstrated that there
are people willing to pay millions a day for the use of smart contracts.
And obviously, well, currently it is extremely expensive on Ethereum to use it, and only really kind of a
handful of people can actually use those smart contracts.
So if you can say, well, we can bring those advantages to everyone and kind of scale it enormously,
the advantage is obvious.
So I'm really looking forward to now jumping into
kind of the technical deep dive and figuring out how you achieve this.
As you said earlier, things that lots of other chains have tried to do, and so far no one has achieved.
So maybe to put things into perspective, at least where I'm coming from, and I think many
listeners are familiar with three other chains: Ethereum, Polkadot and Cosmos.
And kind of my mental model of it is you have Ethereum with the idea of shared security
and then homogeneous execution environments.
And, well, we are not fully there yet, but you could see, well, there would be the sharding
and every shard would run the EVM, but they are all kind of, yeah, homogeneous.
Then you have something like Polkadot where you say, okay, you have also shared security,
you have something like a, well, they call it differently, but conceptually it's similar to a beacon chain,
but then you have heterogeneous execution environments
and the execution environments, yeah, can have their own rules.
They don't need to be all something like an EVM.
They could have different rules.
And then finally you have Cosmos, which says,
okay, there are kind of sovereign blockchains
and there's a light communication protocol between those blockchains.
So in that perspective, where does the internet computer fit?
So do we have, first, do we have a shared security?
do we have what are the execution environments?
Can you compare it in this way of thinking?
Yeah, so, you know, the internet computer, of course,
shares some similarities.
It has a virtual machine within which smart contracts run.
But it's different in many ways.
And actually, the differences are necessary.
And, you know, it's interesting looking at, you know, the scaling efforts of many existing
blockchains, because you realize that the path they're pursuing won't lead them to the
destination they want to reach. So I'll give you some simple, easily understood
examples. One is that if you want to create a blockchain that can scale smart contracts,
the smart contracts need to be asynchronous,
which essentially means that, you know,
if I'm a smart contract and Martin's a smart contract,
when my code wants to call Martin's code,
it essentially packages the function call in a message,
which of course is a kind of transaction,
and fires that off.
And this gets sent across the network
because, you know, the internet computer
is a blockchain of blockchains, and we'll get back to how that works securely.
It's very different to other systems with similar visions.
And, you know, Martin's smart contract will process the function call and produce a result
and that will be sent back.
And, you know, when that result is received by my smart contract, if you like, it's woken
up and it processes it.
But at any one time, my code can have several of these calls to other
smart contracts outstanding.
So actually, this has some other benefits.
I mean, just worth mentioning before going into the details of parallelism.
But one of the big benefits of this is there's no reentrancy.
And reentrancy is, in my view, one of the greatest security vulnerabilities
that Ethereum smart contracts have to deal with.
So we all know what happened with the DAO in 2016.
That was exploited by a reentrancy bug.
They can be very subtle and difficult to deal with.
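The reentrancy class of bug Dominic mentions can be made concrete with a toy Python model (all names here are invented for illustration; this is a sketch of the general vulnerability pattern, not of any specific contract): in a synchronous call model, an external call made before the bookkeeping update lets the callee re-enter and observe stale state.

```python
# A toy model of the classic reentrancy bug in a *synchronous* call model.

class VulnerableBank:
    def __init__(self):
        self.balance = {"attacker": 100}

    def withdraw(self, who, notify):
        amount = self.balance[who]
        if amount > 0:
            notify(amount)         # external call happens BEFORE bookkeeping...
            self.balance[who] = 0  # ...so a re-entrant call sees stale state

bank = VulnerableBank()
stolen = []

def attacker_callback(amount):
    stolen.append(amount)
    if len(stolen) < 2:  # re-enter while the first call is mid-flight
        bank.withdraw("attacker", attacker_callback)

bank.withdraw("attacker", attacker_callback)
print(sum(stolen))  # 200 -- the attacker withdrew a 100 balance twice

# In an actor-style asynchronous model, each incoming message runs to
# completion before the next is dispatched, so the state update at the end
# of withdraw() would land before any re-entrant message is processed.
```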
My conception of it is, of course, there needs to be, at some layer, asynchronicity.
I mean, otherwise, it's 100% clear that it won't scale.
There's no question about it.
The question is, on what layer do you introduce this asynchronicity?
So, for example, Cosmos would say, well,
within one chain synchronous calls are possible,
but between chains, of course,
the communication is asynchronous,
and potentially the same with Ethereum eventually, that you have those shards,
and within a shard, you can have synchronous communication.
So you are saying you are pushing the asynchronicity
on a smart contract level, or you call it canister, right?
Of course.
Yes, well, we call them, yeah, canister smart contracts.
And for the listeners, the reason we call or nickname our smart contracts canisters
is that each smart contract implements the software actor model
and is in fact a bundle of web assembly bytecode and pages of memory
that are exclusive to that smart contract.
And because, you know, each smart contract is a bundle of web assembly bytecode
and memory pages.
we call it a canister.
But look, you know, you absolutely have to implement this at the level of the smart contracts.
And the key property that you need to implement a blockchain is just determinism.
And the reason people today have synchronous smart contract models is nothing to do with it being better;
it's to do with it being complex to create determinism when you introduce asynchrony.
That's the truth of it.
So, you know, if you look at an internet computer subnet blockchain, which I suppose are in some ways
very, very approximately equivalent to an Ethereum shard, but they're more sophisticated.
They are, you know, the replicas, the nodes, are processing numerous smart contract
computations in parallel, but they're doing it in a deterministic
way. So at the moment we have a relatively simple system that relies upon introducing
determinism in the order of messages shuffled back and forth between the different smart
contracts as they run their computations. But we'll move to full deterministic time slicing
in the future for a bunch of reasons to maximize the performance and efficiency of our nodes.
And then when one subnet blockchain hosts a smart contract A
and it wants to send a message to a smart contract B
on another subnet blockchain,
of course that message goes into a serialized queue between the subnets
and the network provides the guarantee that if you make a call to another smart contract
you always get the response.
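The cross-subnet flow just described, a call queued into a serialized stream, executed on the remote subnet, with a response guaranteed to come back, can be sketched as a toy Python model. The `Subnet` class, the message tuple format, and the contract names are all invented for illustration; only the shape of the flow (ordered queues, deterministic processing, guaranteed response) comes from the description above.

```python
from collections import deque

# A toy sketch of cross-subnet calls as serialized message queues.

class Subnet:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()   # ordered stream of incoming messages
        self.contracts = {}

    def step(self, route):
        """Process one queued message deterministically, in arrival order."""
        kind, target, payload, reply_to = self.inbox.popleft()
        if kind == "call":
            result = self.contracts[target](payload)
            # Every call deterministically produces a response message
            # that travels back through the caller's queue.
            route(reply_to, ("response", target, result, None))
        else:
            responses.append(payload)

subnets = {"A": Subnet("A"), "B": Subnet("B")}
responses = []

def route(subnet_name, message):
    subnets[subnet_name].inbox.append(message)

# A contract on subnet B that doubles its argument.
subnets["B"].contracts["doubler"] = lambda x: 2 * x

# A contract on subnet A calls it: the call is queued, executed on B,
# and the response comes back through A's queue.
route("B", ("call", "doubler", 21, "A"))
subnets["B"].step(route)
subnets["A"].step(route)
print(responses)  # [42]
```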
So I should just be clear to the listeners as well.
When I talk about messages, I'm talking about the network at a lower level here.
I mean, a smart contract just sees function calls and function call results.
I think I only heard it between the lines, but maybe to make it explicit,
it sounds like you are also, that's a very useful thing,
have a separation between contract execution and kind of a transaction ordering.
So on some level, you mentioned there's a determinism achieved on a subnet
that says kind of exactly what is the transaction ordering.
Yeah, I don't know how technical you want to get,
but each node replica,
or in old-style speak, the client software,
is implemented in four layers.
At the bottom, there's P to P.
Then there's a sort of stateless protocol layer.
We call it the consensus layer,
but it actually does lots of different protocols.
Then there's a message routing layer.
and then above the message routing layer,
you've got the execution environment.
That's where you'll find the
WASM virtual machine
and all of the other stuff
that creates the execution environment.
So the message routing layer,
of course, can route messages
to local smart contracts,
i.e. smart contracts executing on the same subnet blockchain,
or it can route them to other subnets.
But yeah, of course, it's all deterministic.
It has to be; it's a blockchain,
and all done in ways that
enable that kind of cryptographic
verification that is essential to
blockchain systems to be performed.
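The role of the message routing layer in that four-layer picture can be sketched as follows. This is a conceptual toy in Python, not the actual replica implementation; the class and field names are invented. The idea is simply that the routing layer delivers a message upward to the local execution environment if the target lives on this subnet, and otherwise hands it off toward another subnet.

```python
# A sketch of the message-routing idea: local delivery vs. cross-subnet hand-off.

class MessageRouting:
    def __init__(self, subnet_id, local_contracts, forward):
        self.subnet_id = subnet_id
        self.local_contracts = local_contracts  # execution environment above us
        self.forward = forward                  # hand-off toward another subnet

    def route(self, target, message):
        if target in self.local_contracts:
            # Deliver upward to the local execution environment.
            return self.local_contracts[target](message)
        # Otherwise pass into the cross-subnet stream.
        return self.forward(target, message)

delivered = {}
remote = MessageRouting(
    "subnet-2",
    {"c2": lambda m: delivered.setdefault("c2", m)},
    forward=None,  # no further hops in this toy
)
local = MessageRouting(
    "subnet-1",
    {"c1": lambda m: delivered.setdefault("c1", m)},
    forward=lambda t, m: remote.route(t, m),
)

local.route("c1", "hello local")   # handled on subnet-1
local.route("c2", "hello remote")  # forwarded to subnet-2
print(delivered)  # {'c1': 'hello local', 'c2': 'hello remote'}
```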
Let's maybe carry on comparing it to Ethereum
which I assume the listeners
are most familiar with.
So you already said that
it's a WASM-based system. So basically
each subnet
has its own EVM, so to speak,
or ewasm?
Is that correct? And is there a concept of gas?
Yeah, of course. So there's a lot of differences. So first of all, we didn't try and use EWASM.
It's everything within the internet computer is new. And it's been designed for a specific purpose.
And, you know, as I mentioned, if you want to create a blockchain that scales, your smart contracts have to be asynchronous.
And, you know, therefore it wouldn't have been possible to, you know, reuse any existing virtual machine.
So, yeah, we do, of course, have a gas model.
It's not called gas; it's called cycles on the internet computer.
And we also use something called a reverse gas model.
So that means that the smart contracts pay for their own computation.
You know, you charge, you know, it's like filling up a car, right?
You know, the car's burning the gas from its fuel tank and when it runs out, it has to be filled up again.
There's a number of reasons we do that, but I mean, most obviously it allows blockchain,
to approximate to a sort of cloud computing model
and provide much better user experiences.
So, for instance, you could look at OpenChat.
OpenChat is a chat system that runs entirely
from the internet computer blockchain,
which I think will give the list of some measure of the differences here.
So, you know, OpenChat is implemented using smart contracts.
These smart contracts are efficient enough and run fast enough
that they can actually move chat messages around.
And these smart contracts also can serve HTTP requests.
So they serve the interactive user experience that loads, for example, into your browser window that enables you as a user to send and receive messages.
Now, what's actually happening there is you're authenticating yourself using this thing called Internet identity.
And whenever you interact with the backend smart contracts, they're paying for their own gas,
or here, cycles.
And, you know, can you imagine if you were forced to use Metamask?
I mean, every time you wanted to send a transaction to send, you know, a chat message is a
transaction, right?
So every time you want to send a chat message, you'd have to sort of configure the signature
and say how much gas you want to send with it.
It just wouldn't work.
It just doesn't work.
Yeah.
Currently, clearly, Ethereum transactions are something that should probably be worth
at least a thousand dollars.
Otherwise, it doesn't make sense to use it, because you will pay $10 to $50 in transaction fees.
And of course, if you want to do something as small as a chat message, of course, the model needs to be very different.
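The difference between the two charging models discussed above can be sketched in a few lines. The numbers, class names, and fee are invented for illustration; "cycles" is the Internet Computer's term, but this is only a sketch of the accounting direction (who pays), not of real pricing.

```python
FEE = 3  # cost of executing one call, in our toy unit

class UserPaysModel:
    """Ethereum-style: the caller attaches gas to every transaction."""
    def call(self, user_wallet: int) -> int:
        assert user_wallet >= FEE, "user must sign and fund every call"
        return user_wallet - FEE

class ReverseGasModel:
    """Reverse gas: the canister pre-pays, so end users call for free."""
    def __init__(self, cycles: int):
        self.cycles = cycles  # the canister's own fuel tank

    def call(self) -> None:
        # When the tank runs dry, the canister must be topped up,
        # like refuelling a car.
        assert self.cycles >= FEE, "canister needs topping up"
        self.cycles -= FEE

wallet = UserPaysModel().call(user_wallet=10)  # the user is left with 7
chat = ReverseGasModel(cycles=10)
chat.call()  # the user pays nothing for either call
chat.call()
print(wallet, chat.cycles)  # 7 4
```

This is why a chat message on a reverse-gas system needs no wallet prompt: the cost is drawn from the smart contract's own balance, invisible to the user.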
So let me summarize, or at least recap what we have so far.
So we have subnets that are somewhat maybe comparable to shards, not exactly, but somewhat.
Then they run on WASM,
they have canisters that are kind of similar to smart contracts,
but already on that level we have the asynchronicity.
So what is the glue?
So I mean, I guess there needs to be something that is holding the subnets together.
And how does communication work between subnets?
Well, so one of the most important innovations, if not the most important,
in the internet computer is this thing called the chain key system. And that actually involves
novel cryptography. And it's evolved from that original work that I used to talk about in
2015. You know, I was using... I remember, the threshold relay. Yeah. Yeah, to generate random numbers.
You know, we just kept on burrowing down that, you know, along that furrow and it just became more
and more advanced. And so let's just step back a moment and think about some of the other challenges
that you see with existing legacy blockchain architectures like Ethereum,
or proposed architectures like Ethereum 2.0,
or hub-and-spoke architectures like PolkaDot.
So, you know, one of the issues,
so, you know, the Polkadot concept
is that you have some central hub blockchain
that charges toll fees.
And then everyone else creates their own blockchain,
a parachain,
plugs it into the hub blockchain,
and it's made easy because they've all built on this thing called Substrate.
and then they can send messages to each other, you know, via the hub.
And obviously, you know, Gavin Wood sits there collecting the toll fees, basically, and, you know.
So obviously, from my perspective, I'm very interested in this idea of a world computer.
And, you know, if you, you don't really want a hub and spoke architecture if you want to create a world computer because you're going to create a bottleneck.
Naturally, the hub is a bottleneck, right?
So that's not a good way of going.
And the same problem exists with Cosmos, too.
you don't want to have to, you know, forward all traffic through a hub.
Why does the same problem exist with Cosmos?
Because, I mean, you can have with IBC enabled chains,
you can just have chain-to-chain communication, right?
You don't need beacon chain.
Yeah, I think it was at some point proposed that it would go through the hub,
but maybe that's changing.
I'm not up to date.
Maybe they've tried, maybe they've seen the error of their ways
and they're trying to get rid of the central hub.
Yeah, I haven't been following those projects.
But look, there's another problem as well,
which is actually much worse than that.
And it's that if everybody's creating their own blockchain,
every blockchain will have a different trust model.
So let's imagine, you know, Friederike, you're the Polkadot hub.
I'm a parachain and Martin's a parachain.
So then there's a smart contract on me
that wants to send a message to the smart contract on Martin.
It's like, okay, so it creates the message, sort of delivers it to you, Friederike, the Polkadot hub,
and you, you know, I pay the toll fee, and then you take that message and you send it to Martin.
And then hopefully Martin processes the message and sends it back to you, pays the fee,
and then you forward it to me, right?
Now, there are a number of problems with this.
The first is that it's no longer a blockchain.
We've got three different trust zones here.
There's my trust zone,
which is based upon, you know,
whoever my, you know,
validators are and what my staking system is and so on.
There's Friederike, you're the hub,
and that's the second trust zone.
And then Martin is the third trust zone.
And it becomes very difficult to reason about things.
You know, so if the smart contract on me,
you know, wants to call the smart contract on Martin,
well, somehow the designer of that's got to
be aware that perhaps the hub might fail or, well, at least they have to understand
what the mode of operation is for the hub. Can a message be delayed? And then, you know,
what about Martin? I don't know anything about his trust. Does Martin guarantee that
the message that I sent to his smart contract will generate a response? So this gets very,
very difficult. Not only that, of course, that it becomes very clunky for smart contract developers.
You know, if there's a smart contract on me and it wants to call a smart contract on Martin,
it should be very simple. It should just be, you know, I can create a define that says, you know,
define this contract with the address of Martin's contract, and I should just be able to go,
thisContract.callFunction(), right, and process the result.
I personally see this argument for Cosmos.
I think Ethereum and Polkadot are trying to
kind of have a shared security model.
But maybe let's switch to the internet computer.
So how does it work here?
It's not just, I mean, the security model,
well, you know, the scalability is the first
problem when you introduce a hub.
That doesn't work out too well.
The security becomes a mess
because you've got all these different trust zones.
And then, of course, worst of all,
developers are faced with
trying to deal with all these complexities themselves and sending messages are other functional.
Although, again here, I think, in my understanding, we have the two
approaches: with Ethereum that's trying to say, well, we always use the EVM and kind of have a shared,
well, homogeneous execution environment. It seems like you also are going for a homogeneous
execution environment, while in Polkadot, they say, and of course there are also arguments,
I would say, for that, well, different applications
need different or might benefit from different, yeah, something like VMs.
Yes, but look, I mean, going back to, you know, the advantages of smart contracts and what got
me into this in the first place, a key advantage of smart contracts is they exist within a single
seamless universe and, you know, one smart contract can call another smart contract.
And there's no concept of partitions in different trust zones.
And that's why, you know, despite the extra order.
limitations of the Ethereum
blockchain today,
it's been immensely successful,
because people for the main
create DeFi contracts
and systems, and then anybody else
can extend them and plug into them.
And the network effects are just immense,
and that's why, despite limitations,
it has been so successful.
It would be an absolute tragedy
if we went forward with, you know,
Polkadot and Cosmos style models
where we get rid of
that great advantage of smart contracts
that they all exist within this single unified, seamless blockchain environment.
So it's very important to me that on the internet computer, there's no concept of different
chains and hubs that you have to send messages through.
And you can see, by the way, that this concept is embedded in these legacy blockchains
because they've got synchronous smart contract models.
Now, if you have, for example, Ethereum 2.0, all these shards, and these shards are running
synchronous smart contract calls, well, you know, in the end, there's
always going to have to be some kind of boilerplate that you use to send messages between
shards. You break one of the most important properties of a blockchain, that the smart contracts
exist within the seamless unified environment and there's no concept of partitions. Everything
is composable, everything can call everything else. That's one of the most beautiful things
that I've ever seen. I've been coding for 40 years, and to remove that and abandon that property
would be a tragedy. So let's get into how
you are removing this, because, okay, so you have different subnets with canisters.
So how does it work on internet computer?
How does a smart contract or a canister call another smart contract or canister on another subnet?
Yeah.
So, I mean, obviously the internet computer is a very complex thing.
But, you know, at a high level, the network can very efficiently
derive the location of a smart contract from its identity.
Kind of its global position.
Yeah, yeah, exactly, which shard.
Not shard, but subnet.
So basically the position of each canister is known to each other canister in whatever subnet?
No, no, of course not.
No, the canisters are unaware of the subnet.
Smart contracts are unaware.
That would be a terrible mistake.
That's what's happening with these legacy systems. No, you don't want shards and you don't want parachains and you don't want hubs.
Look, it's very simple.
Like, you know, if you're a smart contract and I'm a smart contract, I just call your
functions.
And that's it.
Right?
There's no, you know, at the level of smart contract code,
there's no concept of a hub or a parachain or a shard or anything like that.
It's just a seamless universe for code.
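The earlier point that the network derives a smart contract's location from its identity, so contract code never mentions subnets, can be sketched like this. The routing table below, with numeric canister IDs partitioned into ranges, is an invented stand-in for whatever mapping the real network maintains; the point is only that lookup happens below the level of contract code.

```python
import bisect

# A toy routing table: canister IDs up to each bound (exclusive)
# live on the named subnet.
ROUTING_TABLE = [(1000, "subnet-1"), (2000, "subnet-2"), (3000, "subnet-3")]
BOUNDS = [bound for bound, _ in ROUTING_TABLE]

def subnet_of(canister_id: int) -> str:
    """Derive the hosting subnet from the canister's identity alone."""
    return ROUTING_TABLE[bisect.bisect_right(BOUNDS, canister_id)][1]

# Caller code just names the target canister; routing is transparent.
print(subnet_of(42), subnet_of(1500), subnet_of(2999))
# subnet-1 subnet-2 subnet-3
```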
Does it mean that if I deploy a canister, I don't need to make an active decision on what
subnet it goes on?
It will just be determined by the system, or is it a conscious decision?
Yeah.
So, okay.
So that's a good question. And no, that's correct, you don't have to decide where it goes.
Complexities, of course, always happen around the edges and where you have very specific requirements.
So if you were a company and you wanted to implement not a DeFi system, not, you know, tokenized social media, but an enterprise system, your enterprise system might be comprised of lots of smart contracts that
interact with each other, and it would run faster, of course, if all those smart contracts were on the
same sub-network. So we are introducing means that will allow people to hint their smart contracts
with a service ID, and the network will tend, if it can (because it can't guarantee it),
to collocate those smart contracts. But, you know, that's just an optimization thing,
like if you're building this great big enterprise system
out of hundreds of smart contracts,
it's going to run better if they're all on the same sub-network
or at least a lot of them are on the same sub-network.
So we're going to give people ways of achieving that,
but it won't be, you know,
it doesn't exist at the level of the code.
Like when you're writing a smart contract,
you could move them apart and it would still continue working.
Does that make sense?
That makes sense, but maybe let me butt in here.
So basically, what's the difference then between a transaction
between two canisters that are on the same subnet
and two canisters that are not on the same subnet?
How do they differ?
There's no difference.
So why do you have the subnets?
And how do you make it such that a canister can call another canister on a different
subnet equally efficiently or nearly equally efficiently?
Well, so there's a lot to this.
So, I mean, I should probably just rewind a little bit to what a subnet is.
A subnet is a blockchain.
and, you know, recall the purpose of the internet computer is to achieve a blockchain singularity.
So we want everything rebuilt and reimagined on blockchain.
So it has to run very efficiently.
So we actually do something called deterministic decentralization, which means that what we call node providers, people running these special node machines,
identify themselves, and the governance system of the internet computer, which is called the network nervous system,
essentially combines nodes to create new subnet blockchains,
which add capacity. It combines nodes observing this decentralization hierarchy, which is,
first of all, node provider, naturally.
I mean, let's say you created a subnet blockchain with 16 nodes, say, if all 16 nodes came
from the same node provider and that node provider turned evil or went bankrupt, obviously the
subnet would break.
So that's no good.
So the first, you know, the first, the top of the decentralization hierarchy is the node
provider.
You want the nodes to come from different node providers.
Second is data center. You know, it's all very well combining nodes from 16
independent node providers, but if all those nodes are in the same data center and the data center
blows up, well, that's not much good either, right? So there's node provider, data center,
and then geography. Now, actually, we care a lot about this because there's something, for example,
called an electromagnetic pulse. Sounds far-fetched, but it's not. There was one, I think, in 1859;
it's called the Carrington event.
You can look it up on Wikipedia.
You'll get the correct date, but it's around then.
And it was created by a solar flare.
And, you know, in the hotspots where this thing, you know, hit the earth,
it would wipe out data centers.
In fact, it would cause an awful lot of damage to information technology generally.
So you don't want all your data centers in the same geography.
And by the way, I think it was, again, you can look this up,
I think in 2012, a solar flare of a similar magnitude to the one that caused the
Carrington event passed through Earth's orbit, and we missed it by three days. And there are other
ways, you know, electromagnetic pulses could be created. For example, you can detonate a nuclear
bomb in the atmosphere and all kinds of things. So, and with climate change. Or you can introduce
the infrastructure bill. Yeah, that's right. Exactly. Yes, that's right. All kinds of things can go
wrong. So we want to make sure things are geographically dispersed. Or actually, this is the fourth one,
Martin: it's jurisdiction. So you could say, well, you know, okay:
independent node providers, independent data centers,
and the data centers are all dispersed to the four corners of Europe.
Well, guess what?
These states are all members of the EU, and there's a possibility they could ban blockchain.
So actually, you don't want to do that either.
You know, you want to make sure you've got some nodes from Amsterdam and Zurich and, you know, Munich and Budapest,
say, and you also want to include some nodes from places like Singapore and America.
So you've got this hierarchy, you know, node provider, data center, geography, jurisdiction.
That's how nodes are combined.
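The hierarchy Dominic describes (node provider, then data center, geography, jurisdiction) can be sketched as a greedy selection that maximizes diversity at each level. This is an editor's illustrative model, not DFINITY's actual NNS algorithm; the `Node` fields and the weighting scheme are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    provider: str
    data_center: str
    geography: str
    jurisdiction: str

# Hierarchy levels, most important first, mirroring the discussion.
LEVELS = ("provider", "data_center", "geography", "jurisdiction")

def select_subnet(candidates, size):
    """Greedily pick `size` nodes, preferring candidates that contribute a
    new provider first, then a new data center, geography, jurisdiction."""
    chosen, pool = [], list(candidates)
    seen = {level: set() for level in LEVELS}
    while pool and len(chosen) < size:
        def novelty(node):
            # Weight higher levels of the hierarchy more heavily.
            return sum(
                (len(LEVELS) - i) * (getattr(node, level) not in seen[level])
                for i, level in enumerate(LEVELS)
            )
        best = max(pool, key=novelty)
        pool.remove(best)
        chosen.append(best)
        for level in LEVELS:
            seen[level].add(getattr(best, level))
    return chosen
```

With a 16-node candidate pool drawn from only four providers, for instance, a four-node subnet built this way never takes two nodes from one provider.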
And by using deterministic decentralization,
which does, of course,
you know, make the sacrifice
that the people running these machines are identified.
You know, that's the sacrifice, that's the trade-off.
The advantage, though, is that you can create
much higher levels of security and resilience
with much smaller numbers of nodes.
And so first of all, these nodes,
independent nodes are combined
to create these subnet blockchains.
Now, as the internet computer
grows, I mean, you know, in a few years, you could see 100,000 subnets or something.
How on earth could the subnets all talk to each other directly?
There's just no way, of course, that each subnet could be aware of the data on the other
subnets. Impossible.
So we use something called chain key cryptography.
This is one of the biggest innovations in the internet computer.
So, for example, today, you know, when you build a dapp, typically, you know, if you're
building an Ethereum dapp, of course, the website, the interactive component, runs on the cloud.
So, you know, an Ethereum dapp, to be clear, is not fully decentralized, because you run the website
on the cloud. Then it will talk typically to...
Don't have to.
99% do.
Then it talks to Infura, which is Ethereum nodes running on Amazon Web Services, run by ConsenSys.
But if you wanted to at least, you could run your own Ethereum node.
And that thing basically is a slave.
People mistake Ethereum nodes and Bitcoin nodes for decentralization.
They're nothing to do with decentralization.
They're slaves that consume the blockchain produced by the block makers,
which typically are mining pools in the case of Bitcoin and Ethereum.
So they consume the blockchain, and they keep a copy of it.
And typically what you do is you interact with that local copy.
Now, what does that mean?
It means that if you have a local Bitcoin or Ethereum node,
because you're running the local node, you can trust it.
and you can interact with it as a source of truth
regarding the state of the blockchain.
So the challenge there is that if I am creating a DAP
and I don't want to just, I'm already my website's on the cloud
and if I don't want to now interact with Infura,
which is just more, you know, it's all on Amazon Web Services
and it's ConsenSys. If I want to run my own node
to be more decentralized, at least I've got my own local source of truth,
that's going to download the Ethereum state.
Martin, what is it right now?
What is it today?
I run a node; it's probably 500 gigabytes.
Okay, so you've got to download 500 gigabytes.
I mean, it's probably worse than that in reality,
because you've got to download all the old blocks.
You're going to replay them.
It's going to be awfully computationally expensive,
and you've got to check all the hashing and everything else.
Okay.
So, yeah, it's an awful lot of data and an awful lot of computation.
Probably, in fact, it would take your node a day or two
to catch up with the Ethereum chain.
So by contrast.
Yeah, a few more.
A few more.
A few more.
Okay, there you go.
So by contrast,
to interact with an internet computer subnet blockchain, and in order to know that the
subnet blockchain is correct, and in order to know that your interactions with that blockchain
are correct, all you need is a 48-byte chain key. That's it. So we've gone from the need
to download 500 gigabytes to a local node before you can start interacting with Ethereum, say,
or an Ethereum shard, to a situation where you only have to have this 48-byte chain key.
And that's chain-key cryptography.
It's absolutely revolutionary and changes the whole meaning of blockchain.
And that's how our shards can interact.
Sorry, not shards, subnets, can interact directly with each other.
They don't need to have copies of each other's blocks.
And they also don't need there to be trusted validators and bridges, which of course is completely insecure.
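The contrast can be sketched with a toy model. Here an HMAC stands in for the threshold signature; this is purely an editor's illustration of why a client holding one small key need not replay the chain, not real chain-key cryptography (which uses BLS threshold signing among the subnet's nodes).

```python
import hashlib, hmac, json, os

# Toy stand-in for the subnet's threshold key pair. In the real system the
# public "chain key" is a 48-byte BLS public key, and producing a signature
# requires a threshold of the subnet's nodes. Here a single HMAC secret plays
# the subnet's role; because HMAC is symmetric, this toy verifier reuses the
# secret, whereas a real BLS verifier needs only the public 48-byte key.
SUBNET_SECRET = os.urandom(32)
CHAIN_KEY = hashlib.sha384(SUBNET_SECRET).digest()  # 48 bytes, like a BLS public key

def certify_state(state: dict) -> tuple[bytes, bytes]:
    """Subnet side: certify a state root so clients need not replay blocks."""
    root = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).digest()
    return root, hmac.new(SUBNET_SECRET, root, hashlib.sha256).digest()

def client_verify(root: bytes, signature: bytes) -> bool:
    """Client side: one certification check replaces syncing 500 GB of chain."""
    expected = hmac.new(SUBNET_SECRET, root, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

The point of the sketch is the asymmetry of effort: the subnet does the work once, and a client (or another subnet) verifies one signature against a fixed 48-byte key instead of holding any block history.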
And as we've just seen with, what is it, what's the DeFi
thing that just went wrong today?
The Poly Network.
Poly Network, yeah.
It's Poly.
Yeah, and they move stuff between Matic and
Ethereum and...
Binance Smart Chain.
Polkadot or something. Yeah.
So that's on Binance. So you don't want to do that.
I mean, we don't want to introduce trusted
validators and bridges. We just want to have
cryptographic security and that's why we have
this chain system. But it makes
it possible for subnets to interact
with each other just directly without
having to see each other's blocks.
and it absolutely
waterproofs and corks the whole thing.
But you're comparing apples and oranges
to a certain extent, right?
So basically it's kind of like,
if you're rivaling Infura,
in a way it's just like having more Infuras,
so more trusted third parties
that you trust to have a...
Well, I think that's precisely the point,
that you're provided
cryptographic proofs, so you don't have to trust.
That you don't have to trust.
So basically the comparison
with running your own node is somewhat faulty, right?
But I mean, let me get back to my core question.
So basically, if I'm a canister on one chain, how do I use this?
What's it called key?
Chain key?
You don't need to.
How do I use it to find Martin on a different subnet?
But hold on, just to be absolutely clear.
The question you're asking is presupposing that, you know, the internet computer works
in the way these legacy blockchains do, with shards and
hubs and things like that. Just look, I mean, I'll be absolutely frank. I think those architectures are awful
and very misguided. The whole advantage of smart contracts, or one of the key advantages of
smart contracts, is that they exist within a single, seamless, unified universe. You know,
I'm a smart contract, you're a smart contract, I want to interact with you, I just call your function.
There shouldn't be any concept of different subnets or chains and hubs and shards. It's ridiculous.
On the internet computer, contracts are completely unaware of the actual workings of the network.
It's not what you're describing.
You see, this is a very common thing in blockchain.
People look at the limitations of the cryptography involved and the architectures involved,
and they extrapolate from the limitations features.
People actually sometimes begin to think that these limitations, you know, these shortcomings
are features.
They're not.
There's absolutely nothing, no advantage,
when you write a smart contract
and you want to interact
with another smart contract,
in your needing to know
on what shard
that other smart contract is.
This is a very bad design,
obviously. Like, I code
my contract,
build it, interact with your contract,
and know nothing about
the underlying network architecture.
Dominic, I have to say
it still sounds a little bit like magic,
and I'm trying to bring it into concepts
I kind of understand,
and I think it could be
maybe similar.
So what I do understand is that, for example, of course, what you describe
is absolutely true.
It takes, well, actually, I'm on my fifth day of currently syncing an Ethereum
node, and it's still syncing.
And well, yeah, that's how it is currently.
And I understand there are promising new ideas.
And maybe you have already achieved that to completely get rid of that.
So one concept I know
is zero-knowledge proofs: you kind of make a proof that the state
transition was correct, and you can maybe even do recursive zero-knowledge proofs, so that
in the end you just have to check the final proof, and that gives you, through recursion,
the idea that all previous state transitions are correct. So that is one concept.
I could imagine how that works. Is this related to that? Or you say it's a key?
I mean, first of all, a signature is a kind of zero-knowledge proof in a way, right?
You're sort of proving that, you know, you've got it, you hold a private key that is, you know,
that corresponds to the public key by producing a signature.
And the signature is that proof.
Like, without showing my private key, I'm showing to you that I have one that corresponds to the public key.
So, yeah, I mean, optimistic roll-ups and all that kind of stuff.
I think, again, it's just a red herring.
It's not the way to go.
It's just introducing yet more complexity.
I don't believe in these latter two solutions.
I don't believe in optimistic roll-ups or any of these things.
I just think that smart contracts should just run quickly and efficiently and with an unbounded capacity, right?
Then maybe try to give us an idea of how this works.
So you said, I don't need to have my full node.
I can immediately verify from very little data that all computation or kind of all state change was done.
So how does it work?
At a high level, of course, it's all derived from threshold cryptography, as usual. You know, the project
hasn't changed. It's just become more advanced. And, you know, a subnet blockchain, recall,
is composed, you know, it runs on a set of nodes that are independent and have been assigned
by the network nervous system, which is the kind of
governance system that runs within the internet computer's protocols.
So, you know, a subnet comprises these nodes, and the nodes have identities.
And when they're put together, they run a setup procedure in concert with the network
nervous system.
And their shared public key, their chain key, which is essentially a special BLS key,
is added to this thing called the registry by the network nervous system.
and that means that, you know, depending on how it's configured,
say, a supermajority of the nodes in that chain can collaborate to sign something.
So now, before moving on: of course, BLS is a standard cryptography scheme, very well known.
Of course, Ben Lynn works at DFINITY.
He's one of the inventors.
And I know Dan Boneh well; he's the B.
But, you know, alone, it's not nearly sufficient to create the chain key system.
So, you know, blockchains have dynamic membership.
So you need things.
Let me, let me jump in here because there are two very important distinctions.
There is, on the one hand, verifying signatures and saying, kind of, well, this was signed by at least two-thirds or maybe even 80% of the key shareholders.
And that I can totally understand; that can be, I mean, layered and so on.
That works.
But that's different from giving you a guarantee.
I mean, what you do if you run a full node on Ethereum,
you are not just verifying the signatures or verifying the proof of work.
If you would only do that, well, then it would actually be quite fast.
What you are actually doing is running all the computations yourself;
you are verifying that the computations are correct.
So, to ask directly: are you guaranteeing that the computations are correct,
or are you guaranteeing that a specific threshold of signatures was reached?
And that could mean that if 80% are compromised, they could sign a wrong state.
Because those are two different points.
Absolutely.
So, I mean, the first thing to people to remember is there's a thing called Byzantine fault tolerance.
And sometimes in blockchain we get, you know, a little bit muddled, because there's a lot of jargon and woolly
thinking. Look, I mean, you have to base systems like this on mathematics. And if you base things
on mathematics, and the designs on mathematics, you can verify, you know, that your
designs work correctly with mathematical proofs. So Byzantine fault tolerance, of course,
refers to the model where you assume that some proportion of participants are faulty and
faulty means they can behave arbitrarily and that's why they're called Byzantine. And they can also,
that includes colluding to break the system.
So, you know, internet computer subnets are Byzantine fault-tolerant, that is, based upon
the mathematical assumptions.
There is a chain of notarization, and so long as at no stage the subnet has been taken
over by faulty nodes, the notarization signature is sufficient to
show that the blockchain is correct and your interactions with that blockchain are correct.
So, you know, of course, if a sufficient number of participants in the blockchain become faulty,
they could produce, they could corrupt the blockchain.
But that's true of any other blockchain, by the way.
And if you understand, same with validators and everything else.
I think it's not.
I would say, if 99% of the Ethereum miners were malicious, they could still not trick
my full node into accepting their block.
No, no, my full node would reject it.
So, Ethereum is controlled by three parties.
Three parties.
There are three mining pools.
There are three Ethereum mining pools that together.
I mean, there are a few more, but let's take three.
No, there are three that together have over 51% between them.
Over 51% of the hash rate?
Yes.
And the important thing is that they, even if they collude, they cannot.
Well, let's just get to that.
My full node?
No.
No, that's not correct.
This is one of the biggest myths of blockchain that, you know, is...
Okay.
So this is another great example of what happens in blockchain.
You know, some...
People want to say, well, our networks are super decentralized,
to look at all these nodes.
Look, we've got a thousand Bitcoin nodes running on Amazon Web Services and in
Infura or something.
This is not decentralization, okay?
If you control, if you're a miner and you control 51% of the hash rate,
you can arbitrarily rewind the chain and rewrite it.
And every single node will accept it.
That's the way it works.
That's just the way proof of work works.
So if you have 51%, yes, you can, to some extent, rewrite history.
What you cannot do, and I think that is still a very important difference, is even if I have now 51%,
I cannot introduce a transaction where I, let's say, spend coins that aren't mine.
Right. And that is a big difference.
Yeah, that is different. Yeah, okay. So, sure, let's address that. So you are right that a difference here is that if, instead of just signing the state, everybody replays the transactions that created the state, then while it is possible to double-spend and arbitrarily recreate the state, at least in this completely recreated state it won't be possible to, for example, steal
my Bitcoin or ether.
That's very true.
Now, that's true.
We shouldn't be too pleased about that, though, because with something like a world
computer, if the state is rewound and then rewritten, that's a catastrophe.
So in the case of the internet, in the case of, sorry, Ethereum, you know, if, you know,
we reach one state, and then those three parties that control Ethereum rewound it and rewrote
it, while it's true that,
you know, nobody could steal my ether by signing a faulty state in which the balance had changed,
my ether wouldn't be worth anything, because if you can rewind and rewrite the blockchain,
it's such a catastrophe.
Nobody would care.
So just to be clear about that: I think it is true that that is a technical advantage in one sense.
But let's just fall back to mathematics for a second, because, clearly, well, let's just stand back, see the distant mountain, and answer the
question: are we ever going to produce an infinitely scalable blockchain if, in order to validate
it, we have to rerun every transaction that ever happened? The answer is no. It's not possible,
and therefore we have to abandon the idea straight away and look for more advanced mathematical solutions
that solve the problem.
I'd say two things to this. First of all, if I would
just run my full node and would just, well, verify the signatures in a way, and take the latest
state and see, well, it has this much proof of work, then it would actually also be much, much
faster than it is today. So currently, really, this five days comes from re-executing. And you can argue
whether that is necessary at all. You can also say, well, I obviously kind of can trust the latest
state that has so much proof of work behind it; I can just take this. That is one thing. And the other
thing is, I still haven't ruled out the possibility that we might have these recursive SNARKs,
or those correctness proofs; I still think that's exciting.
It is a nice idea, but there are issues there too.
And look, for any kind of practical world computer to exist,
you actually need the world computer to hold the data that's involved in online services,
DeFi, and so on.
So I've got a great deal of skepticism about the practicality.
I mean, sorry, they can be made to work,
but they create a very unusable system.
It's the same thing as with roll-ups and, you know, plasma.
and everything else.
But anyway, just sticking on this question of security for a moment, and I'm going to ask
you a question.
So today, Ethereum is controlled by these three mining pools, and you're quite right.
Eventually, if miners realized those three mining pools had turned evil,
they could join other mining pools.
Eventually, it would be fixed.
But nonetheless, it would take quite some time, and those three mining pools that control
Ethereum could arbitrarily rewind history and rewrite it.
Now, and break everything.
So it is nonetheless the case that the state would have to be calculated in a legal way.
That is, they could only select transactions that had really been submitted to the blockchain to create a new state.
But nonetheless, the whole, you know, everything could be rewound.
You could rewind by a day and go forward, and then no one would forgive it.
It's over.
That's why we need to get rid of proof of work as soon as possible and introduce proof of stake,
because then the story is different again.
So that's the situation today anyway.
regardless of what new systems are proposed,
and I've got some big worries about those too.
I've been looking at the designs occasionally.
Now let's take the internet computer.
So actually, the ICP ledger, for example,
runs on the same subnet as the network nervous system,
and I believe it's got about 34 replica machines.
And they're all from independent node providers
and, you know, running in independent data centers around the world.
So, you know, it's not perfect yet because the network's only three months old.
But, you know, essentially you've got this decentralization hierarchy of node provider,
data center, geography and jurisdiction.
In order for that to... well, this is complex.
So Byzantine fault tolerance basically fails once you've got a third of the nodes being faulty.
The internet computer actually uses a multi-layered architecture,
so the higher levels will fail before the state level fails,
without going into detail. So essentially, in practice...
The state level, you mean subnets?
The blockchain itself is actually multi-layered.
So first you have transaction ordering,
and only when the ordering of transactions is finalized
are they actually applied to the state by the replicas and a few things like that.
So that's a complex topic.
But, you know, Internet computer blockchains have three layers.
This is the randomness layer, which is threshold relay, of course,
which I started talking about in 2015.
Then you have blockchain formation, which
uses a thing called probabilistic slot consensus,
where the chain is, of course, eventually consistent,
but it's actually highly consistent compared to traditional blockchains.
And then you have a finalization layer,
which depends on something called
negative attestation; it's optimistic.
Anyways, without getting into details, in practice,
you're looking at 2f + 1
nodes becoming faulty
in order for the state to be arbitrarily
modified in practice. And this is a complex topic, which I won't go into now, because we'll end up
in a rabbit hole. But so, let's, you know, go back. I think, as I last heard, it was
running 34 nodes on that subnet. So, yeah, you know, with f = 11, you're basically going to need
23 of those 34 to become malicious and collude in order to break the network nervous system and the
ICP, you know, governance token ledger. So
think about that. You know, you've got these 34 independent nodes run by, you know, different node
providers, from different independent data centers, in different geographies and different
jurisdictions. What's more likely: that, whatever it was, 23 of these node providers are actually going to
turn evil and collude to corrupt the state of the ICP ledger,
Or that three Ethereum mining pools get hacked and rewind the chain and do some double
spends.
I mean, honestly, when you really get down to it, you know, these arguments are just
pretty straightforward.
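The thresholds quoted here follow from standard Byzantine fault tolerance arithmetic: a subnet of n nodes tolerates f = (n - 1) // 3 faulty nodes, and corrupting state in practice takes 2f + 1 colluders. A quick check for the 34-node subnet mentioned above:

```python
def bft_thresholds(n: int) -> tuple[int, int]:
    """For an n-node BFT subnet: f = (n - 1) // 3 faulty nodes are tolerated,
    and, per the discussion above, 2f + 1 colluding nodes are needed in
    practice to arbitrarily modify state."""
    f = (n - 1) // 3
    return f, 2 * f + 1

faulty_tolerated, corruption_quorum = bft_thresholds(34)
print(faulty_tolerated, corruption_quorum)  # 11 23
```

This matches the figures in the conversation: 11 faulty nodes tolerated, 23 of 34 needed to collude.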
I mean, look, yeah, okay, you can have 100,000 Ethereum
nodes copying the blockchain that comes out of the block makers, i.e.,
the mining pools.
It doesn't really change the security of the system.
The flaw in the system is that just three mining pools have over 51% of the hash rate
and they can arbitrarily rewrite the chain.
Actually, in practice, it's much more secure doing it the way we've done it,
once you look at the mathematics.
I think no one's arguing for centralized mining pools,
but maybe we've gone down a rabbit hole pretty deep.
there's one more thing I would really like to understand about the internet computer,
and that's the tokenomics.
Okay.
So basically, as you already said, the internet computer is a proof-of-stake model,
and there's a token associated with it.
And can you talk a little bit about what the token does
and what I need the token for as a user and what the nodes need the token for and so on?
Well, yeah, just make a quick correction.
The internet computer is not a proof of stake system.
So how are validators or nodes penalized if they misbehave?
Well, let me just rewind to that.
That occurs.
There is slashing.
But it's not a proof-of-stake system.
Basically, it's a sort of hybrid between proof of work and proof of stake.
And I'll come back to that.
But just to be clear, you know, I'm not a supporter of proof-of-stake systems as they exist today,
and they always reduce to layer-two applications on big-tech cloud services.
So if people are upset about 75% plus of Ethereum nodes running on Amazon Web Services today,
you know, one thing that we should all be very concerned about,
because I'm a big supporter of Ethereum,
is that when we move to Ethereum 2.0, we actually end up with, you know,
95% plus of the validator nodes running on Amazon,
which is obviously a single point of failure.
It's much worse than the mining pools.
Today we've got three mining pools that can collude to control the Ethereum chain.
Tomorrow, it could be Jeff Bezos.
And that's actually the case today.
If you look at how things like Polkadot and Cardano and Avalanche are hosted,
in practice, all the nodes are running off cloud services like DigitalOcean and Amazon and all the rest of it.
So I don't like that.
And there are also major issues with proof of stake with respect to
a scaling blockchain. So this is fairly easy to understand. If you want a protocol, and remember, a
protocol is a set of instructions that are run autonomously, if you want a protocol to find a way of
distributing computational work across a network, well, you better be sure that you've got some
sense of the computational capacity of the nodes that are hosting your network.
It's fairly trivial to see that if you don't know
what the computational capacity of the nodes is,
or you can't rely on it,
it's going to be very difficult to distribute computational work.
So, you know, one of the problems with proof-of-stake
networks as articulated, you know, as designed today,
is that they're all running on these, you know,
oftentimes virtual instances
on shared computers in Amazon Web Services data centers.
You never really know what their capacity is.
So the internet computer
obviously eschews the cloud.
We want nothing to do with it.
We think a sovereign blockchain should run on its own hardware
in independent data centers,
and that's indeed how the internet computer works.
But the node machines themselves are built to standard specifications.
And there's nothing proprietary here. I've heard
people say, oh, well, this is some kind of scheme for selling the nodes.
No, it's not the case at all. Of course not.
You know, there's been a generation one node spec,
and there's a generation two node spec,
and there'll be lots of others.
the purpose is that people create node machines that are compatible with each other
so that when you create a subnet blockchain and that subnet blockchain is under load,
some node machines don't fall behind because the internet computer relies on something called
statistical deviance to detect faulty behavior.
And of course, you know, the protocol doesn't care why your node is deviating,
that it's producing fewer blocks than the other nodes, for example.
And this can result in it being slashed.
So it's very important that all these nodes are running on very similar hardware
so that you don't fall behind when the network's under load.
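The statistical deviance check he mentions can be sketched as an outlier test on block production: a node whose output falls well below its subnet's norm gets flagged (and, in the real protocol, risks losing rewards or being slashed). Both the metric and the threshold below are editor's assumptions, not the protocol's actual rule.

```python
import statistics

def flag_deviants(blocks_made: dict[str, int], fraction: float = 0.5) -> set[str]:
    """Flag nodes producing fewer than `fraction` of the subnet's median
    block count over some observation window. Illustrative only: the
    actual protocol's metrics and thresholds are not specified here."""
    median = statistics.median(blocks_made.values())
    return {node for node, count in blocks_made.items()
            if count < fraction * median}
```

Using the median rather than the mean keeps one badly lagging node from dragging the baseline down with it.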
And, you know, this is another aspect, you know, the fact that these node machines
use standard specifications and hardware configurations specifically optimized
for the task of hosting a high-performance blockchain
allows us to drive
much higher levels of efficiency and performance.
But going back to the original question,
what is the model?
You can look at it a bit like this.
Each one of those node machines
is treated like an equal unit of stake.
So in Ethereum 2.0,
a unit of stake will be an ether, for example.
And if you stake one ether, you get
some fixed return. In the internet computer network, if you like, the staking currency is the actual
hardware device, the node machine. And that node machine, every node machine, receives equal
rewards. There's no hashing competition, you know, or electricity burning competition.
The more hashing you do, the more block rewards you get, nothing like that. The reward
provided to each node machine is the same.
And, without getting into all the details,
there can be some variations based on geography and things like that.
But essentially it's the same.
And you receive the rewards or the node machine receives the rewards if it does not
statistically deviate.
So that is, you know, according to the way that statistical deviation is detected
and analyzed: a node machine gets paid
a fixed monthly reward in real terms for correct functioning.
And if you want to increase your revenues,
then you need to add more node machines to the network.
So, yeah, in that sense, you know, it's a kind of funny thing. It's a bit like proof of stake where the unit of stake is a node machine. And just in the same way that one ether staked on Ethereum 2.0 gets a fixed staking reward, on the internet computer network, one node machine gets a fixed reward. And on the other hand, you know, this thing where the nodes are required not to
statistically deviate or they can get slashed, well, you could argue that's some kind of
proof of correct processing. It's some kind of weird hybrid between proof of stake and proof of work. And then the network nervous system, of course, is more akin to proof of stake, but it doesn't have anything to do with consensus. That's the governance system that's built into the protocol. And you participate in that by getting ICP and staking it
in the network nervous system to create voting neurons. And then, you know, your neuron gets rewarded
when it votes. And, you know, as you probably know, it's a form of liquid democracy. And neurons
can be configured to vote automatically by following other neurons on different topics and
things like that. The magic of the system, though, is that the protocol is sophisticated enough
that it can upgrade the blockchain and the nodes without interrupting it. And this is actually
extraordinarily complex because, you know, I mentioned how the subnets all run a variety of
threshold cryptography schemes derived from BLS. And I also mentioned that, you know,
things like, you know, blockchains that have dynamic membership,
where nodes come and go, you know, nodes fail, new nodes get added. So there's all these things
it does, like it has a non-interactive DKG, which is a huge achievement. It does key resharing.
And all of this stuff works in synchrony with things called catch-up packages within the protocol that allow nodes to join and leave. And through that system, it's also actually able to upgrade the nodes within the protocol without interrupting
anything. So there's no need for a hard fork or anything like that. That's only a sort of emergency fallback mechanism, where, you know, the node providers would have to manually stick USB memory sticks into the back of the node machines. But normally, you know, the network runs under the control of the network nervous system, which also upgrades it. And that's why
it's able to evolve so quickly. I mean, and also why, you know, even though the network is probably more than 100 times more complex than Ethereum, there hasn't really been any downtime since it first went through Genesis launch on the 10th of May.
And part of the reason for that is that, you know, the network nervous system has been able to push out
security fixes and other kinds of bug fixes in real time.
I mean, I think it's, I mean, it's already processed hundreds of proposals that, you know,
do things like create new subnets or push upgrades, that kind of thing, tweak economic
parameters. And if you include
economic information coming in,
it's processed tens of thousands of proposals
already. So anyway, the network nervous
system, the brain of the network, if you
like, that's more akin to proof
of stake, where you can just take ICP,
stake them inside the network nervous system to create a voting
neuron. And if you don't want to be actively involved
in governance yourself, you just configure your
neurons to follow
other neurons, effectively, and vote according to the activity of other neurons, in a sort of form of liquid democracy.
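For readers, the neuron-following mechanism described above can be sketched in a few lines. This is an illustrative assumption of how such following could resolve, not the actual NNS algorithm: a neuron either votes directly on a topic or inherits the majority vote of its followees on that topic (cycles among followees are not handled in this sketch).

```python
# Hedged sketch of "liquid democracy" neuron following: a neuron votes
# directly, or adopts the majority decision of the neurons it follows
# on that proposal topic. Names and the majority rule are assumptions.

def resolve_vote(neuron, topic, direct_votes, followees):
    """Return 'yes'/'no' for `neuron` on `topic`, or None if undecided."""
    if (neuron, topic) in direct_votes:  # voted directly on this topic
        return direct_votes[(neuron, topic)]
    follows = followees.get((neuron, topic), [])
    votes = [resolve_vote(f, topic, direct_votes, followees) for f in follows]
    if votes.count("yes") > len(follows) / 2:
        return "yes"
    if votes.count("no") > len(follows) / 2:
        return "no"
    return None  # no direct vote and no followee majority


direct = {("alice", "economics"): "yes", ("bob", "economics"): "yes"}
follows = {("carol", "economics"): ["alice", "bob"]}

# Carol never voted herself, but both her followees voted yes.
assert resolve_vote("carol", "economics", direct, follows) == "yes"
```

The per-topic keying is what lets a neuron follow different experts on, say, economics versus node upgrades.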
Maybe one more question about the node operators. Currently, if I check the website, there's a link, run a node, and it redirects me to a Typeform to basically introduce myself and, in a way, ask for permission to run a node. What's the vision on that? Should it become permissionless to join the network as a node, or will that be part of governance?
It is permissionless now. I mean, all of these things are completely new.
I mean, the internet computer is a completely new kind of blockchain.
And it introduces a lot of new concepts.
And it's substantially more sophisticated, but there's also a lot more to learn about.
So, you know, in order to add nodes to the internet computer, you first need to get a node provider ID.
And that, you do that by submitting a proposal.
It's permissionless in the sense that anyone can submit the proposal.
Okay.
But it needs to be approved by governance. And there's a lot of fiddly stuff, you can end up, you know, working with a command line. And so what the internet computer association is doing is collecting people's information and creating those proposals for them. Now, in the end, I'm sure people will just go
directly. In the same way, by the way, we created internally this front-end dapp that lets you interact with the network nervous system. If anyone's interested, it's at nns.ic0.app. And that's actually being served straight off the internet computer blockchain. And it allows you to interact with the network nervous system and your ICP ledger and stuff. But there's a group called Toniq Labs. I haven't even seen it yet, but apparently I'm hearing amazing things about it. And they've created an even better front-end app that lets you interact with the network nervous system.
So in the same way, you know, the internet computer association is helping node providers get involved as a free service. But in the end, I'm sure there'll be lots of other ways of getting
involved. Just to say on that front, it is a very complex piece, the internet computer. So it's
been adding new subnets quite slowly. And it's already, I mean, it's already processing 20 blocks a second.
And it's at, I don't know what it is now, 130, 140 million blocks. But, you know, we plan on and hope to
see that, you know, scaling out to thousands of blocks a second. But, you know, I think the people involved in
pushing these proposals and getting it upgraded and updated are going carefully because, of course, they don't want to break anything. This has only been running for three months. So at the moment, there aren't that many nodes.
There's hundreds of nodes actually already in data centers, configured, ready to be added to subnets.
And I think, I mean, I could have got this wrong.
But last time I heard the number a few weeks ago,
there were like 4,000 node providers waiting to install nodes and things like that.
So it's tough because, I mean, you know,
the Internet Computer Association and the people, you know, working under its auspices are trying to help people get involved as far as they can,
but this is a very, very complex blockchain system,
and it's only been running for three months.
And, you know, we did have one thing go wrong, like two weeks, or no, a week after launch, where the network nervous system panicked, right? It saw something that it thought was a data inconsistency. It wasn't. And then it just panicked and refused to do anything. And of course, the problem is if the network nervous system stops working and panics,
then you can't use the network nervous system, you can't submit proposals to the network nervous
system to push out fixes to the blockchain. So you actually have to revert to what everybody
else does, a bloody hard fork. So we're in the situation, like a week in, you know, from launch.
And, you know, the damn network nervous system had stopped working and, you know, it was panicking.
And so obviously, I think there's been an update push so that in the future, it will stop doing everything except accepting upgrades to itself to prevent this happening again.
But anyway, so then we had to get all the node providers to basically coordinate and put USB sticks into the back and, like, reboot the machines.
Nightmare.
Nightmare.
It's only actually when you have to do these kind of hard forks that you realize how good the network nervous system is.
Nobody ever wants to be in the position of hard forking the internet computer again by actually
sort of like, you know, actually manually overriding the software.
Absolutely, absolute nightmare.
Then we found things like, you know, some of the node machines had slightly different
specs and they didn't want to boot up from the USB sticks and all that kind of stuff.
It was a nightmare.
So the network nervous system is absolutely brilliant thing because, you know, it allows you
to evolve the network structure in real time, you know, creating new subnets and things like that,
inducting new nodes and node providers. And it allows you to, you know, push out,
updates to the nodes and things like that. And, of course, it lets the community exert its will. I mean, one of the things that goes back to the 2016, 2017 proposal, when we used to call it the blockchain nervous system: what do you do when you've got bad things like child porn and human trafficking and terrorism? You know, the internet computer community can exert its will and potentially shut down those kinds of systems through the network nervous system.
Let me make one more comment on the question, whether it's permissionless or not.
I would say, well, I'm obviously kind of in the Ethereum camp and the proof of stake fan,
but I would say that is one of the arguments in a way for proof of work in my view, because that, in my view, is truly permissionless.
So kind of, well, you just need the hardware. You need to deliver the work. And that's all you need to do. While with something like proof of stake, you basically need, to some extent, agreement from the existing community. Or, to put it in the extreme: if the token distribution were just between three guys, well, you would need stake from those three guys.
And with Ethereum, I think it was a good thing that there was a presale, and then there were a few years of proof of work,
which actually led to a wide token distribution.
So now I think it's kind of fair to say Ethereum is now a permissionless system
because somehow the ether is so distributed that anyone can get 32 ether.
But in your system, you still kind of need the permission of the existing stakeholders
to become a node or obviously existing token holders.
No, not quite.
I mean, there's a separation.
So I mean, you know, that's why the network nervous system,
the controller is separated from the physical layer.
So, you know, if you're a, to borrow the term, miner,
it's pretty straightforward.
You know, obviously you have to buy the node machines to the required spec from somewhere, but then you, you know, install them in a data center, plug them into the internet, and off you go.
Obviously, if your node machine is defective in some way, if, you know, the bandwidth isn't good enough, or you didn't use the right spec,
I mean, you can get slashed, but that's obviously very easy to avoid, just make sure they
have got enough bandwidth and they meet the spec.
So that's, you know, there's a deliberate separation there.
So, you know, the people participating in the network nervous system obviously want it to be, if you like, permissionless, because the greater the variety of node providers, the greater the security of the network in the end.
So there's no reason why anybody would want to stop a node provider getting involved,
unless they are obviously malicious and known to be a malicious party.
You know, the protocol, if you like, takes care of that.
If somebody is running faulty nodes, then they can get slashed.
And that's their fault.
So that's not something that, you know, the people staking in the network nervous system and controlling these voting neurons have to worry about.
I mean, look, the network nervous system is designed using, you know, crypto-economics and game theory, such that neuron holders will either vote or configure their neurons to follow other
neurons so it votes in a way that is likely to maximize the value of the ICP locked in their
neuron at the earliest point in the future that it can be unlocked, at least if they're a little bit
sociopathic. So that's why if you create a neuron with an eight-year lock-up, you know, you get, I don't know what it is, I think several times greater voting power and rewards than if you just lock it up for a year. Because, of course, you're voting with a long-term perspective: you're thinking, well, how do I maximize the value of the ICP locked inside this neuron over that period? And certainly, broad participation is something you're going to be seeking,
because the greater the number of node providers,
the harder it will be to attack the network.
For example, by launching a legal action
or through a node provider going bankrupt.
So these interests are aligned towards keeping the network nervous system an open, permissionless system.
And certainly, you know,
if you're a long-term ICP holder
and you want the network to succeed,
you also want as many node providers involved
and over time as demand, you know,
for computation grows,
as many nodes as possible.
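The lock-up incentive described above can be sketched as a simple voting-power formula. The linear schedule and the "double the weight at the maximum eight-year lock-up" figure are illustrative assumptions; the transcript itself only says the bonus is "several times" greater, and the actual parameters may differ.

```python
# Hedged sketch: voting power grows with the neuron's lock-up (dissolve
# delay). MAX_BONUS and the linear schedule are assumptions for
# illustration, not the real governance parameters.

MAX_DISSOLVE_YEARS = 8
MAX_BONUS = 1.0  # assumed: up to +100% voting power at the maximum lock-up


def voting_power(stake_icp: float, dissolve_delay_years: float) -> float:
    """Voting power = stake scaled by a lock-up bonus, capped at 8 years."""
    years = min(dissolve_delay_years, MAX_DISSOLVE_YEARS)
    return stake_icp * (1.0 + MAX_BONUS * years / MAX_DISSOLVE_YEARS)


# Under these assumptions, an 8-year neuron carries twice the weight of
# the same stake with no lock-up.
assert voting_power(100, 8) == 2 * voting_power(100, 0)
```

The design intent, as described above, is that a long dissolve delay makes the holder's rewards depend on the network's long-term value, aligning votes with long-term outcomes.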
Something, by the way, I just mentioned with respect to proof of stake, and this is an important
observation.
There are some security advantages of proof of stake.
One of the challenges with proof of work is you cannot slash people.
So sybil resistance is the mechanism through which you make it difficult to participate in a network
that stops an attacker just creating, you know, for example, zillions of nodes and taking it over,
right?
So there are three E's of sybil resistance, right?
There's entry cost, existence cost, and exit penalty.
So in the case of Bitcoin, the entry cost is buying your ASICs and configuring them in some suitable
environment.
That's your entry cost.
Your existence cost is obviously managing the machines, but primarily the existence cost is
electricity.
Proof of work is really an electricity burning competition, which is why it's so environmentally
unfriendly.
But the third one is exit penalty. There isn't really an exit penalty. If, you know, the Chinese government took over a vast swathe of Bitcoin or Ethereum mining machines and took control of the network, there'd be no way of slashing them. There'd be no way for the good miners, you know, the correct miners, to band together.
Other than changing the algorithm, which would slash everyone.
Yes. And with proof of stake, you know, obviously the entry cost is actually obtaining the cryptocurrency necessary to stake. The existence cost is really the cost of capital. It depends if you've got delegated staking and things like that. But perhaps it's also running some kind of Amazon Web Services instance, cough, cough. But thirdly, you have got an exit penalty, in a sense. So if, you know, a sufficient
number of stakers, participants in the network, became, you know, faulty or malicious or whatever, the correct participants could band together and fork the chain and, you know, delete the stake of the bad guys. Now, it would be very disruptive, of course. But for the bad guys, it's like an unbounded cost now. It's an unbounded cost to...
Just wanted to come back maybe to part of the discussion earlier.
At least that's my view of it, that even like 5% or whatever, a small percentage of honest participants... So even if just 5% were honest, they could simply slash the 95% away. So it's not enough to have 50% of the stake to kind of create your own history. As long as you have people running full nodes, they will reject this wrong history and just slash you out.
Well, of course, this is proof of stake. So this is one of the reasons proof of stake is, in my view, potentially superior in many respects to proof of work.
I guess that was, to some extent, also a reason to actually have full nodes, and people that don't even participate as validators, as block producers, but simply as someone who validates the transactions. Because that means even if more than 50% of the proof-of-stake participants would kind of try to create an invalid block, they would simply be slashed out.
Of course, it would be disruptive,
but it would go on and they would lose their stake.
Or you'd have an incorrect network and a correct network,
and essentially the incorrect network would have some break in its chain of history.
And yeah, but, you know, the reason that's possible, of course,
is because, you know, you have validator sets and they're adding signatures, and you can compare validator sets and detect who's double-signed and things like that.
So that's an advantage you get.
And, you know, proof of work has other, you know, disadvantages too. You know, you get...
But you mentioned those three things. Maybe, just to round it up now, how is it for the internet computer, kind of entry, running, and exit cost?
Ah, well, okay.
So, the three E's of sybil resistance for the internet computer.
Well, first of all, of course,
you need to acquire the node machine.
That's your unit of stake.
And recall that each node machine receives an equal reward for correct operation.
So that's your entry cost.
Your existence cost, of course, is managing that node machine and making sure that it continues to perform correctly so it doesn't get slashed.
You know, that's going to involve things like, you know, paying for hosting and internet
bandwidth and addressing any hardware faults that arise and so on.
The exit penalty is the same thing:
your node has been
statistically deviating
and it doesn't just have to be dishonest behavior
it could just be that this node is installed
in a crappy data center
or it's not got the right hardware spec
so it keeps falling behind when the
network's under load and not producing enough blocks
and so it's going to get slashed, same thing.
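For readers following along, the "three E's" from this stretch of the conversation can be laid out side by side. This is just a structured summary of what was said above, not an authoritative taxonomy.

```python
# The three E's of sybil resistance, as discussed, for each system.
# Summarized directly from the conversation above.

SYBIL_RESISTANCE = {
    "proof_of_work": {
        "entry_cost": "buy ASICs and configure them",
        "existence_cost": "electricity (plus managing the machines)",
        "exit_penalty": "none: a captured miner cannot be slashed",
    },
    "proof_of_stake": {
        "entry_cost": "acquire the cryptocurrency to stake",
        "existence_cost": "cost of capital (and running a validator)",
        "exit_penalty": "stake can be slashed or forked away",
    },
    "internet_computer": {
        "entry_cost": "buy a node machine to the required spec",
        "existence_cost": "hosting, bandwidth, hardware maintenance",
        "exit_penalty": "slashing, incl. accrued rewards paid in arrears",
    },
}

# In this framing, only proof of work lacks a real exit penalty.
assert "none" in SYBIL_RESISTANCE["proof_of_work"]["exit_penalty"]
```

The node-machine model thus inherits an exit penalty more like proof of stake than proof of work, which is the point the speakers are making.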
Is there slashing? Because, I mean, do you need to stake tokens to...
So what you're saying is, couldn't somebody take this node and then re-enter the network under another identity?
So, yeah, you know.
How big is the exit penalty?
Well, look, I mean, first of all, slashing can include things like accrued earnings that are paid in arrears. So you can actually, yeah.
How quickly can you cash out?
Yeah, exactly, it could very easily include sort of a traditional proof-of-stake-esque financial penalty. But, you know, in practice, you would have to create a new node provider ID,
unless you just sold the node to another node provider,
which is probably not what you want to do.
There's a whole process.
You've got to get that proposal into the nervous system, get it adopted. There's some identity aspect to that because, as you know, it's deterministic decentralization. That's how we get the replication down.
So, yeah, there's a lot of overhead.
You know, you don't want to invest a lot of time.
Acquiring machines.
The main cost is the entry cost.
Well, I mean, don't forget.
I mean, these nodes consume a lot of bandwidth and power and so on.
I mean, nothing like a proof of stake, or sorry, proof of work network, of course.
But, yeah, you know, you've got all three E's.
Dominic, I had hoped to understand DFINITY and the internet computer a lot better than I currently do, and I'm not quite there. So I think we will probably have to catch up sooner than in five years. I've learned a lot, but I'm still very much not at the level that I wanted to be at after this episode.
Anyways, let me ask you one final question.
I mean, so basically the internet computer, it's this new thing.
It's kind of currently, it's not running under load.
It doesn't have a ton of applications running on it yet.
So basically, if you look ahead towards, you know, the network becoming a year or two years old,
what do you see as the biggest challenge, the make it or break it point for the very
ambitiously named internet computer?
Yeah, well, look, we don't have to get that far ahead. Look, don't believe
the FUD and misinformation that you see on social media. I mean, there's a lot of people that are
very threatened by the internet computer. They're very concerned about the huge sort of advances
that have been made technologically. And it's already got hundreds of projects building on it.
Its active user growth is exploding. If you ignore this blockchain, you do so at your peril.
And it's no wonder. There is no choice. If you want to build a dapp that doesn't rely on the cloud, if you want to build a dapp that actually runs at web speed, if you want to give users a frictionless way of authenticating to your dapp, for example by pressing the fingerprint sensor on your laptop or using Face ID, if you want your dapp to serve interactive content securely, directly into the browser, all of these things, there is no choice.
You can't use Polkadot, you can't use Cardano, you can't use Ethereum.
Ethereum 2.0 won't be able to do it, even if it's eventually implemented as envisaged.
You have no choice.
So to be absolutely clear, if you really are a blockchain maximalist
and you really want to see a blockchain singularity, as I do.
You have to build on the internet computer.
Any other blockchain that wanted to do what we have done would need to follow a similar path.
You need to build a huge team of researchers and engineers.
This stuff requires professional cryptographers,
and a lot of the areas that we work on require,
you know, world-class cryptographers,
of which there are only a handful in the world.
I mean, you're probably aware that, you know, our CTO is a guy called Jan Camenisch, who's one of the world's greatest cryptographers, very famous. We've got Jens Groth working here. Victor Shoup, who's a god of both cryptography and distributed computing.
So I get it.
Difficult to build, but Dominic, biggest challenge.
What's the biggest challenge?
The biggest challenge, honestly, was actually just building the thing, getting this far.
It was very difficult.
To assemble this kind of team.
Okay, but the biggest challenge ahead of you?
You know, I think, look, blockchain is a very rough and tumble space,
and we've already seen it. We launched the thing. The price went out of control. It had a fully
diluted market cap of $300 billion at one point. Then it came down and it went down to some
silly price. Now it's going back up. You've got people, you know, trying to scam us with lawsuits
and squeeze us. You've got all kinds of blockchain projects with shills that are threatened, and there's just a tidal wave of misinformation and nonsense and FUD out there about the internet computer, none of which is true. But, you know,
we're going to stay focused on our mission, which is blockchain singularity.
We're going to keep on improving the technology as much as we can,
and we're joined in this by lots of other parties now.
For example, some of the most exciting things that are coming in the next few months,
the service nervous system functionality.
So you'll be able to take your DAP, something like OpenChat,
and assign it to a service nervous system,
which is basically just a form of network nervous system, right,
which has its own ledger of governance tokens.
The way that will work is that, let's say you created OpenChat, you'd press the button, and a new service nervous system would be created. Control of OpenChat would be assigned to the service nervous system, so you could only upgrade, you know, OpenChat through the service nervous system, and so on in the future.
You as a developer might get 25% of the governance tokens.
The other 75% of the governance tokens will be auctioned off by the now autonomous open internet service.
And so essentially, you know, the proceeds of the auction would be held within the service nervous
system and you've got a fully autonomous system.
So that's like ICO 2.0, 3.0, whatever you want to call it.
That's coming soon.
And that's one of the reasons you've got so many people, I mean, hundreds of developers building dapps on the internet computer now, because they know this is coming.
It's one of the most important aspects of what we're doing.
Then after that, and this is, I guess, between four and six months away, it's very complicated, we are adapting chain key technology so that internet computer smart contracts can directly interact with Bitcoin and Ethereum, without bridges, without bridges.
And we know why bridges are rubbish.
We've just seen, we discussed earlier on, what happened with this DeFi network where people lost $600 million.
So chain key cryptography makes it possible for a smart contract on the
internet computer to create a Bitcoin transaction.
To give you an idea of how this might work, let's say Martin has his own Bitcoin wallet,
nothing to do with the internet computer.
Here's a Bitcoin wallet.
There is a smart contract on the internet computer that implements a Bitcoin wallet. It maintains a Bitcoin balance on the Bitcoin blockchain.
And I open my Bitcoin wallet, which runs on the internet computer, and I send a Bitcoin to Martin,
and I authenticate it using Internet identity by pressing the fingerprint sensor on my MacBook Pro.
So essentially, we're adding smart contracts to Bitcoin.
And this is being made possible by extending, you know, building on, existing chain key work with non-interactive DKGs and key resharing, but doing it for threshold ECDSA signing.
So essentially, the internet computer nodes will talk directly, no bridges, no validators,
because we don't believe in that kind of stuff anymore than we believe in cloud and so on.
The internet computer nodes will talk directly to Bitcoin and Ethereum nodes and submit
transactions to them, and they'll actually observe the blocks being produced by those networks.
So first of all, we're going to add smart contracts to Bitcoin.
Smart contracts on the internet computer will build and maintain their own Bitcoin balances and, you know, send, receive, hold Bitcoin.
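As background for readers: the threshold signing described above rests on secret sharing, where no single node ever holds the signing key, yet enough nodes together can produce a signature. Below is a minimal Shamir t-of-n sketch over a prime field, purely to illustrate that primitive; real chain key ECDSA and BLS are far more involved (non-interactive DKG, key resharing, and so on).

```python
# Minimal Shamir t-of-n secret sharing: the primitive behind threshold
# signing. Illustrative only; not the chain key protocol itself.

import random

P = 2**127 - 1  # a Mersenne prime; field for the polynomial arithmetic


def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime.
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total


shares = split(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

In a threshold signature scheme the shares are never reassembled like this; each node signs with its share and the partial signatures are combined, which is what lets the subnet sign without any node learning the key.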
Then we'll do the same for Ether, but we'll also, of course, add the possibility for
two-way bidirectional calling between internet computer and Ethereum smart contracts.
And we think this will be immense, absolutely immense.
I mean, what is there?
Almost coming up for a trillion dollars of liquidity on Bitcoin.
And all of a sudden, people will be able to build smart contracts
with all of the advantages the internet computer provides,
they can serve interactive user interfaces securely directly into the browser.
Users can authenticate using, you know, internet identity,
which is in turn based on a kind of chain key technology,
which means, you know, Face ID, fingerprints, YubiKey, whatever you want to use.
And, you know, two second finality, all of that is going to come to Bitcoin and Ethereum.
And in the end, we see, you know, Bitcoin and Ethereum as kind of DeFi settlement layers.
And we think computation is going to take place on the internet computer blockchain.
That's where DAPs will really run.
And, you know, Ethereum will become a sort of rails for DeFi settlement.
Dominic, thank you so much.
I have to say, I learned a lot.
I have been following the project for many, many years.
I remember your talks in 2015.
I clearly see that your work and the work of your team has also been quite influential for Ethereum, I believe.
Well, the beacon chain and things like that, yeah.
For sure, for sure.
I remember you running around in Shanghai
and preaching BLS signatures.
So, yeah.
You know, with DFINITY originally, what happened was, in 2014, I was working on this thing called Pebble.
I was basically trying to adapt traditional,
you know, Byzantine fault tolerant consensus algorithms for the blockchain space. And then I abandoned that project for a bunch of reasons.
But in 2015, I took this alternative path, you know, using BLS threshold signatures, and I realized I could create random numbers in a decentralized network, and that this would enable me to create fast blockchains, and, you know, all the rest of it.
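The random-number idea mentioned here can be sketched for readers. BLS signatures are unique and deterministic: for a fixed group key there is exactly one valid signature per message, so hashing the threshold signature of each round number yields a value nobody can bias or predict alone. In the sketch below an HMAC over a shared key stands in for the BLS group signature, purely as an illustrative assumption; a real beacon uses an actual threshold BLS scheme.

```python
# Hedged sketch of a randomness beacon derived from unique signatures.
# The HMAC is a stand-in for a threshold BLS group signature.

import hashlib
import hmac

GROUP_KEY = b"stand-in for the subnet's BLS group key"  # assumption


def beacon(round_number: int) -> int:
    """Per-round randomness: hash of the round's (deterministic) signature."""
    sig = hmac.new(GROUP_KEY, str(round_number).encode(), hashlib.sha256).digest()
    return int.from_bytes(hashlib.sha256(sig).digest(), "big")


# Every participant derives the same value for a round, but values across
# rounds are unpredictable without the key.
assert beacon(7) == beacon(7)
assert beacon(7) != beacon(8)
```

The threshold part is what makes this work in a decentralized setting: no minority of nodes knows the signature, and hence the random value, before enough of them cooperate to produce it.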
But originally, I mean, DFINITY was conceived as Ethereum 2.0, Ethereum 3.0.
That was the plan. It was never meant to be an independent project.
But it became apparent that, you know, Ethereum just wasn't set up to do the kind of R&D that would be required to create the internet computer.
And that's why it became a separate project.
And of course, there were some different visions.
Our vision was very cryptography first, and we needed to develop a lot of novel
crypto.
So, you know, in some ways, it's kind of ironic. I think, you know, the internet computer really is, or at least implements my interpretation of, the world computer vision that I heard in 2014. And it's taken years and years. And this is why I think we're at a very interesting juncture: because the Ethereum of today is really the Ethereum of 2014 with some improvements around the edges.
And the question is, where does it go from here?
But, you know, I do believe the internet computer is really the world computer.
That's what I, you know, set out to implement in 2015.
And we've probably, I mean, you know, just exerted absolutely massive intellectual and engineering
firepower to get this thing built.
In my head, it does make sense to say, well, Ethereum is the settlement layer, and of course that's not where you do computations. And that is much closer, or like, yeah, that's much closer to the internet computer; it would make much more sense to do computations there. I do think there are slightly different tradeoffs or assumptions around security, and I think we have discussed a few. So to me it became much clearer, and kind of also clear how the two systems might complement each other.
Well, that's right. I'm just going to finish off with, you know, the differences between chain key and traditional blockchains, where, you know, every single transaction is rerun to create the current state. You know, albeit there's still the danger that the blockchain is rewound and rewritten, which is very bad, at least it's not possible for somebody to actually steal your balance of ether, right? So, I mean, you know, if people are concerned about that, you know, that's a very strong argument why, you know, Ethereum can be a DeFi settlement layer, in my view. Albeit, of course, you know, the internet computer is also designed to be secure, mathematically secure.
Thank you so much.
Pleasure. Thanks for having me.
Good luck with the project.
Thank you, Dominic.
I think we'll have to have you back sooner than in five years. But thank you so much for coming on again.
Thank you. I love talking about blockchain. It's my favorite topic. Thanks a lot.
Thank you for joining us on this week's episode. We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
