Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Kevin Wang: Nervos – Scaling Smart Contract Blockchains With Proof of Work and Generalized UTXO

Episode Date: February 11, 2020

While recent blockchain launches seem to leverage various Proof of Stake consensus mechanisms, some believe Satoshi's consensus mechanism is optimal for distributed protocols. As decentralized ledgers jockey to become the chain of choice for enterprises looking to leverage blockchain technology, projects are looking to offer a solution that maximizes security, decentralization, and transaction throughput. Kevin Wang, a co-founder of Nervos, joins us to discuss why Proof of Work was implemented as the consensus mechanism for the network. To enable greater flexibility for application developers, Nervos created a Common Knowledge Base (CKB) to focus on the security of assets, enabling a complementary layer of virtual machines (VMs) to scale and facilitate computation. Kevin also discusses the active initiatives underway with the Nervos Grants Program to foster ecosystem development and encourage developers to evolve the permissionless network.

Topics covered in this episode:
- Kevin's background at IBM, his open source development, and journey to crypto
- What the blockchain scene is like in Hangzhou, China
- What's unique about Nervos, and the importance of each layer within the network
- Introducing Nervos' consensus mechanism, NC-Max
- Why Nervos decided to implement Proof of Work
- Explaining the Common Knowledge Base (CKB), and its significance in the Nervos network
- How developer experience is in the Nervos ecosystem
- The economic model of CKB, Nervos' native token
- Progress of the network, and a call for developers to consider the recently announced Nervos Grants Program

Episode links:
- The Nervos Network homepage
- Nervos Network on Twitter
- Why We Love Nakamoto Consensus - Nervos Blog
- Nervos CKB in a Nutshell - Nervos Blog
- Nervos developer resources
- Nervos Telegram channel
- Nervos grant information

Sponsors:
- Pepo: Meet the people shaping the crypto movement - https://pepo.com/epicenter

This episode is hosted by Sebastien Couture & Sunny Aggarwal. Show notes and listening options: epicenter.tv/326

Transcript
Starting point is 00:00:00 This is Epicenter, episode 326, with guest Kevin Wang. Today our guest is Kevin Wang, and Kevin's the co-founder of the Nervos Network. So Nervos is a project that came out of China in 2018 and launched its mainnet a few months ago. What's interesting about Nervos is that it has such a unique architecture if you compare it to any other smart contract platform that is coming out today, right? So whether it's Polkadot or Cosmos or Eth 2.0, you know, all these platforms, they implement at least one of two things: some form of proof of stake and some form of sharding to achieve scalability. Now, Nervos has a very different approach. When you first look at it, you notice that it has a two-layered architecture. There's a base layer
Starting point is 00:01:03 for transactions, and computations happen on a layer two off-chain. So let's focus on the base layer for a moment, because this is where things are really interesting. This base layer is called the Common Knowledge Base, or CKB, and it acts like a generalized verification network. So let's take a Bitcoin transaction for a second. When you create a Bitcoin transaction, your client verifies that the inputs correspond to unspent outputs in the UTXO set before broadcasting it to the network. So your client does computations locally, and then those computations are verified by the network and added to blocks. Well, Nervos has a similar approach, but they've generalized UTXO in a way that it can also hold smart contract state. So actual computations are done off-chain on a layer two,
Starting point is 00:01:55 and the CKB does the verification. So this is in stark contrast to Ethereum, where computations are done on-chain, and users only have the assurance that computations are verified once they've received a valid block. So another similarity to Bitcoin is that Nervos leverages proof of work. So their team stands firmly behind Nakamoto Consensus, and they've created an optimized version of proof of work called NC-Max that greatly improves throughput. And Kevin describes these improvements during the interview. So according to Kevin, one of the advantages of the CKB approach is that apps retain their composability. So I think that there are still quite a few unanswered questions about Eth 2.0 and how apps living on different shards will maintain
Starting point is 00:02:40 composability. And so the issue here is that apps that depend on each other might end up living on the same shards, which kind of defeats the purpose of sharding in the first place. Well, in Nervos, there's no sharding. There's just a global unified state and preservation of all assets on the CKB layer. I should point out that the Nervos Grants Program is a sponsor of Epicenter, but not of this particular episode. Sunny and I had been planning to do an interview with Nervos after they launched, and it just happens to coincide with the grants program sponsorship of the podcast, which began last week. A bit of housekeeping: France Blockchain Week is happening here in Paris, my home base, on the
Starting point is 00:03:20 week of March 2nd. This will be the third year of EthCC, which is on March 3rd, 4th, and 5th. And over the weekend, we've got the EthParis hackathon on the 6th, 7th, and 8th. So EthCC is taking on entirely new dimensions this year. They have a new event venue, which is much bigger and much nicer than in previous years. I don't even know how many speakers there are. There are just so many. There are several speaker tracks, but go to ethcc.io to get all the details. EthCC is organized by Ethereum France.
Starting point is 00:03:58 They're a non-profit organization, so the tickets are actually quite affordable. So it's a great opportunity to come to a great conference in a great city and not spend a whole lot of money, especially if you're in Europe. I mean, it's just, you know, a hop and a skip from just about any European town. So you should definitely come. And if you're going to be here on Wednesday the 4th, we're going to have an Epicenter drinks meetup. So, you know, if you're coming to Paris, I'm going to buy you a drink. That's just how it's going to be. So Friederike, Sunny, and I will be having this drinks
Starting point is 00:04:33 Meetup on the 4th, and it will be our absolute pleasure to hang out with you. So do register at epicenter.rocks slash Paris Meetup. Before we go to the interview, I'd like to tell you about our sponsor for today's episode, Pepo, where the crypto community comes together with short video updates and tokens of appreciation. So whether you're a crypto developer, a podcaster, an analyst, a blogger, or just an enthusiast, there's never been an easier way to showcase your work, earn appreciation, and connect with the community. So ETHDenver starts in a couple of days, and Pepo is going to be there. In fact,
Starting point is 00:05:09 they are the official social app of ETHDenver. So you should download Pepo to follow all the ETHDenver action. And if you're going to ETHDenver, you can use Pepo to complete little missions. They're going to be putting up little challenges on Pepo, and you can win Pepo coins if you complete these missions. And what's great about Pepo coins is you can use them to share your appreciation for others, but you can also use them to buy actual things. You can use them in the Pepo store to buy gift cards for Starbucks, Amazon, Uber, Airbnb, and many other merchants. And if you can't be there in person, like me, if you're stuck on the other side of the ocean or something, well, you can use Pepo to watch
Starting point is 00:05:48 updates from your favorite projects, and you can ask questions, and you can share your thoughts. So to download Pepo, go to pepo.com slash epicenter. That lets them know that we sent you. And you can follow me there. I'm at Seb 2.0. That's S-E-B-2-P-O-N-T. We'd like to thank Pepo for their support of the podcast. And with that, here's our conversation with Kevin Wang. We're here with Kevin Wang. Kevin is the co-founder of Nervos. Nervos is a multi-asset, store-of-value blockchain, which comes out of China. And Kevin is going to walk us through how Nervos works. And specifically, one of the things that's kind of interesting about Nervos is that unlike many of the other blockchains that are now coming into the ecosystem,
Starting point is 00:06:37 Nervos uses proof of work, which will, I'm sure, ruffle the feathers of many of our listeners, but nonetheless, it's a really interesting model. Thanks for joining us today, Kevin. Good to be here. Thank you, Sebastien and Sunny, for inviting me. So let's start with a bit of your background, which is far removed from what you're doing now. Previously, you were a consultant, you worked for IBM, and then you started one of the biggest tech podcasts in China. What is it like running a podcast in China? I've heard a lot of things. I've heard that running a podcast in China is very different from running a podcast here. One of the reasons why is because people in China that run podcasts actually make money. But yeah, tell us a bit about your background.
Starting point is 00:07:15 So we didn't really run a for-profit one. So specifically, the podcast was a technology-focused one called Teahour. And actually, this is how I met some of the co-founders of Nervos. So it was quite interesting. It was at the time the largest, okay, maybe we shouldn't claim the largest, but definitely one of the largest technology podcasts. Sort of focused on programmers and sort of hackers, entrepreneurs, that type of audience.
Starting point is 00:07:48 And then it was quite fun. We ran somewhere over 100 episodes, I believe, and we had a lot of folks on. But yeah, so for me, myself specifically, I was trained as a software engineer. And like you said, I worked for IBM. Started my career there, at the Silicon Valley Lab, and did some big data engineering solutions before they were called big data. Anyway, and then I sort of jumped to the startup world and, you know, got into open source and, you know, the web. But this is like between 2000 and 2010, where, you know, you see a lot of social apps and whatnot, a lot of entrepreneurs moving to that space.
Starting point is 00:08:31 So I caught part of that wave, and I worked with a good friend of mine, and we started a developer education business, or startup. It would focus on training people to become software engineers, professional software engineers. Yeah. Then, you know, through the journey, I discovered Bitcoin like many other folks, and reading the Bitcoin white paper was a really profound experience for me. And so I knew that I always wanted to do something in the space, and started Nervos with some of the other co-founders back in the beginning of 2018. And we've been working on
Starting point is 00:09:15 these things since. So when I came to visit you guys actually in your office last year, in Hangzhou, which is sort of like the startup capital of China, it was a really cool experience. So can you tell us a little bit about what the scene is like in Hangzhou and what kind of other startups are there? Are there a lot of other blockchain companies based out of there?
Starting point is 00:09:38 So Hangzhou is known as the fintech center of China. And, you know, the biggest player obviously is Alipay, which is part of the Alibaba group, and they also have many, many products. They run a huge operation there. And then you have sort of more traditional fintech companies. You have these P2P fintech companies. And in the blockchain space, there are also many companies.
Starting point is 00:10:11 And so Zhejiang University is there, which is one of the largest engineering-focused universities in China, and so a lot of good engineers come out of the university. And then there's, you know, us; we are there. And I would say our engineers are mostly in Hangzhou. And you also have imToken, which is a wallet company, and you have SparkPool, which is the largest Ethereum mining pool, there. And then you also have some permissioned blockchain companies there as well.
Starting point is 00:10:44 And also a lot of researchers, a lot of researchers and blockchain engineers, because we have events regularly in Hangzhou, and so we kind of know that crowd pretty well. It's also a very entrepreneurial city. So what would you say is the level of interaction between the fintech community there, and I guess more specifically the blockchain community, and sort of the West, Europe and the U.S.?
Starting point is 00:11:09 Are you seeing a lot of interaction, or is it mostly just sort of encapsulated there? Yeah, it's interesting, because the wind shifts a lot, right? And it's very much regulation-driven. And in the early days, when there wasn't clear regulation, or it wasn't clear where the line would be drawn, there were a lot of people trying to, you know,
Starting point is 00:11:35 sort of cross the boundary and do a little bit of blockchain, but also like traditional fintech, or fintech companies looking into blockchain, and things like that. But the regulation in China, I mean, that in itself is a pretty big topic. So now I think it's more, you know, clear where the boundary is. So the traditional companies tend to be more, you know, in the permissioned blockchain space. And, you know, the public blockchain side, which is, you know, crypto assets and smart contract platforms, tends to be more, you know, grassroots. And then we see some, you know,
Starting point is 00:12:11 trend, it's only since last year, that President Xi Jinping sort of had this top-down mandate for the country to develop blockchain technology. So there is some trend coming along that these can converge. But that process, I think, is starting; we're not really there yet. Okay. Interesting. I think it would be really helpful for us at some point to do a whole episode on the Chinese ecosystem and all of the, you know, the regulation, but also all the initiatives that are coming out of there. From this side of the world, it's often hard to dissect. Yeah, it's very different.
Starting point is 00:12:54 I've actually been following a lot of Jan's work, your other co-founder, for a while, because his company, Cryptape, actually built one of the first alternative implementations of Tendermint, and so they had a version of Tendermint in Rust about two years ago. So I've been following them since then. So what's the relationship between the Cryptape company and Nervos?
Starting point is 00:13:15 Is it sort of like one of the main development companies, or has Cryptape just sort of turned fully into Nervos at this point? Good question. And it's the first one that you mentioned: Cryptape is the main development company for Nervos. And so in a way, it's very similar to Tendermint and Cosmos, right? So Nervos is a public blockchain project governed by its own foundation, and Cryptape is tasked with engineering work like implementing the protocol and developing the ecosystem, toolchain, products, and things like that.
Starting point is 00:13:54 And so Cryptape, you know, originally started off doing some more private blockchain development. And then what drove the vision to decide, you know what? We're going to instead start focusing on building a public blockchain. So what was that vision there for Nervos? Yeah. So it really came from, like you mentioned, Jan Xie, right? So he was a core Ethereum researcher, a core developer who used to work with Vitalik pretty closely in the early days, I think around 2016. And at the time, like you said, Cryptape built a permissioned blockchain called CITA. So it's a variation of, like you said,
Starting point is 00:14:32 BFT Tendermint. At the time, he was still working with the Ethereum team pretty closely. And, you know, through the almost two years he worked there, he really got a front-row view of, you know, how Ethereum has grown and also how Ethereum has had many of its, you know, growing pains, right? And I think that experience informed him of a vision that, you know, maybe we could go in a different direction. You start with Bitcoin, right, and then you get to Ethereum, which adds smart contract capabilities and, you know, general computation capabilities. And then Nervos CKB, or the Common Knowledge Base, the layer one blockchain, is almost sort of, if you go from Bitcoin to get to a, you know,
Starting point is 00:15:22 Turing-complete smart contract platform, you either go in the direction of Ethereum or you go in the direction of CKB, right? So turning left, you go to CKB; turning right, you go to Ethereum. I think both ways are viable, but he felt there are advantages to the technical architecture that we chose. But obviously, we can get into more of that in this podcast. I mean, it's kind of interesting that you guys came, you know, heavily from the Ethereum community, when, just by looking at some of the architectural decisions, if I had to guess and I didn't know, I would have thought you guys were like Bitcoiners, starting to build a smart contracting platform with proof of work and UTXOs.
Starting point is 00:16:01 So you're right, it's a very interesting design paradigm that's pretty different from Ethereum and what a lot of the existing smart contracting systems are moving towards. Yeah, and just about anybody else. So let's talk about the Nervos vision then. Can you describe at a high level what you're trying to build here and why you're building it this way? So Nervos Network is a layered architecture. So in the early days, we specifically chose the direction of sort of scaling through layer two, right? So in other words, we keep layer one as simple, and
Starting point is 00:16:39 playing as limited a role in the whole ecosystem, as possible, and then use layer two as the scaling layer. So that decision informs a lot of the actual technical trade-offs that we made, including things like what Sunny mentioned, that we use PoW. And the rationale here is: sure, if we were to choose PoS or some novel consensus algorithm, we could potentially achieve higher scalability. Instead, we use NC-Max, a variation of Nakamoto Consensus. But on the other hand, if you have chosen to scale with layer two, then you don't necessarily have to do that. We sort of maximize, take a no-compromise approach on, decentralization and protecting the full node, the cost of running a full node, and, you know, just like Bitcoin, BTC.
Starting point is 00:17:34 And we don't do sharding, and we want to make sure to keep all the global state in one piece. Obviously, it's Turing-complete to support layer two. So all this comes together, right, as the Nervos Network. And also, you know, I know we'll get into this a little bit later too, on the crypto-economics side: we feel like we fixed some of the issues that Bitcoin faces, and we also have a crypto-economic model that's specifically designed
Starting point is 00:18:06 to be the layer one for layer two as well. So both, I would say, architecturally as well as crypto-economically, the Nervos Network's layer one is designed for layer two solutions and technologies, and together that's the Nervos Network. You said that you don't do sharding. So basically the layer one preserves all the different state and history of the Nervos Network. So explain then why Nervos is better for L2
Starting point is 00:18:31 than any of the other scaling approaches that we see currently in the ecosystem. You know, it's a pretty big topic, so I'll just kind of spread it out a little bit and not get into depth on each one. And I think first you have to look at what layer two is best for, right? So layer two technologies are best for scaling: transactions are really cheap, you have potentially really fast finality and really fast confirmation, and it can scale really well. So when you design a layer one for that purpose, to complement layer two, you want it to give the things that layer two needs and layer two solutions don't have. Which is, you want to have some sort of global settlement layer, right,
Starting point is 00:19:16 that is objective, and which kind of points to PoW. You want to have a layer one system that's secure. And we can talk about the crypto-economic model of Nervos' layer one a little bit. You want to pick one that doesn't have sharding, so that you have the global state together and you don't have to get into the sort of synchronization of different states and all those issues. You want to pick a layer one system that's decentralized, right, maximally decentralized. Again, it's difficult to push for that if you want scaling on layer one as well, right? But if you don't need scaling on layer one, we want to push in the other direction. We want no-compromise decentralization, right?
Starting point is 00:19:56 which means you've got to be able to protect, sort of like BTC, right? You want to protect the cost of running full nodes as much as possible. And also, you know, complete Turing scriptability. Bitcoin cannot be that layer because it lacks the op-codes to be Turing-complete. So one of the main benefits, you know, as opposed to Ethereum, then, here would be the lack of sharding. Why is that so important for L2? Like, you know, let's say, as long as all
Starting point is 00:20:29 the participants of a particular L2 system are on a single shard, then it should be fine, right? Like, as long as all the players in this plasma game all have an account on that shard, why does it matter that there are other shards in the Ethereum world? That's true. And what we have seen, for example, in the DeFi space specifically, as a use case, right? You have a lot of applications that kind of depend on each other, or what we call composability. When you have applications dependent on each other,
Starting point is 00:21:07 you want to make sure that they can synchronize state very easily, right? What you will probably see in a sharded blockchain is that interdependent applications tend to reside on the same shard, because that would make the most sense. However, this kind of defeats the purpose of sharding. If, let's say, you know, DeFi or some other killer application were to evolve, and then you have to put all of them on the same shard, then you go back to the issue of scalability, because how do you scale the shard? So, you know, what we see is, first of all, on layer one we want to have a unified global state. And there's a lot of value from that,
Starting point is 00:21:50 and like I said, right, so you can compose very easily. You know, the way we think about this is layer one is for preservation of assets, right? So that's the most valuable; like, when we talk about public blockchains, the value of public blockchains derives not from the fact that they can do computations or they can pass messages around, right? But that they can support applications focused on value, or assets, or finance, right? Then wherever you have assets, the sort of DeFi applications tend to go with the assets. So you will see that, you know, DeFi very naturally would prefer a blockchain that has this unified global state,
Starting point is 00:22:35 then all the composability can happen. And it is much easier for those purposes. So you mean that there will need to develop an ecosystem of composability even for L2? So let's say I need to be able to close my payment channel, and that triggers some event happening on a plasma chain. It's good to have those on shared state; that way they can kind of affect each other easily. Yeah, you could. But again, that's not the advantage of layer two, right? So unless your layer one is absolutely crowded and it's too expensive to perform these transactions. Sure, you know, I'm not saying they cannot
Starting point is 00:23:15 be performed on layer two. But these types of applications are what I call, you know, settlement-type transactions. These are high-value transactions. And these typically need global consensus. So, I mean, the best place to perform them is on layer one, where you have, you know, a mechanism that can clearly, you know, reach global consensus, because those are the most valuable transactions, in a way. So let's talk about the consensus mechanism. So I mentioned at the beginning of the show that Nervos uses proof of work. And it has a variant of Nakamoto Consensus, which is called NC-Max, or Nakamoto Consensus
Starting point is 00:23:58 Max. Can you explain what NC-Max is and how it improves on the Bitcoin proof of work that we're used to? Yeah. So NC here stands for Nakamoto Consensus, which really is Bitcoin's consensus mechanism, right? This chain-based consensus mechanism that, you know, prefers availability. And this is, again, another property we believe a layer one should have. So NC-Max improves on Nakamoto Consensus in several ways, right?
Starting point is 00:24:31 So first, we believe that Bitcoin's 10 minutes per block underutilizes bandwidth, and in fact is very wasteful. So we want to have a mechanism that can dynamically adjust difficulty so that, instead of keeping to like 10 minutes per block, we target a specific uncle rate. I think currently it's targeting between 3 to 5%. So this way, you know, within that range of uncle rate, we can keep the block time as fast as possible.
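The mechanism Kevin describes, targeting an uncle rate rather than a fixed block interval, can be sketched in a few lines. This is only an illustrative toy model: the target, the clamp, and the simple proportional feedback rule are assumptions for the example, not Nervos's actual NC-Max parameters.

```python
# Toy sketch of an uncle-rate-targeting difficulty adjustment.
# Constants and the linear feedback rule are illustrative assumptions,
# not the real NC-Max formula.

TARGET_UNCLE_RATE = 0.04   # aim for roughly 3-5% uncles, per the episode
MAX_STEP = 0.25            # clamp each adjustment to +/-25%

def next_difficulty(current_difficulty: float,
                    main_blocks: int,
                    uncle_blocks: int) -> float:
    """Raise difficulty (slowing blocks) when the observed uncle rate
    is above target; lower it when the network is clearly keeping up,
    which shortens the block time."""
    total = main_blocks + uncle_blocks
    observed_rate = uncle_blocks / total
    # Simple proportional feedback toward the target uncle rate.
    adjustment = 1.0 + (observed_rate - TARGET_UNCLE_RATE)
    # Clamp so a single noisy epoch cannot swing difficulty wildly.
    adjustment = max(1.0 - MAX_STEP, min(1.0 + MAX_STEP, adjustment))
    return current_difficulty * adjustment
```

With this rule, an epoch with a 5% uncle rate nudges difficulty up, and an epoch with a 1% uncle rate nudges it down, so block time settles wherever the network's actual propagation speed keeps orphans inside the target band.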
Starting point is 00:25:05 So right now, I think our block time is somewhere sub-ten seconds, I think seven, eight seconds per block, and we still have a very reasonable uncle rate to achieve consensus. So I think that's one. The other property that NC-Max prioritizes is to maximize bandwidth. So in Nakamoto Consensus, every transaction will be propagated twice through the network. And for NC-Max, we want to make sure that we only propagate those once. So that, again, is to best conserve bandwidth. So ultimately, the goal, the design of NC-Max, is this, right? What we believe, again, is we want to maximize decentralization and, you know, preserve the cost of running full nodes. And then, you know, for any given, let's say,
Starting point is 00:25:57 somewhere up to a thousand full nodes, around that range, then for any sort of consensus algorithm, it really becomes a function of how you best utilize bandwidth. That's what NC-Max is designed for: to best utilize bandwidth. So in Bitcoin, we have a 10-minute block time, and the idea here is that within 10 minutes, we'll have near-perfect propagation of one-megabyte-sized blocks. But given the evolution of bandwidth globally and how that's improving, you know, we can probably assume that that's a very high margin of security. So rather than taking this approach, Nervos is taking the approach where, instead of trying to limit the uncle rate by having
Starting point is 00:26:50 this very, very long block time, you're going to optimize for the uncle rate by analyzing it in real time and adjusting the difficulty based on the actual conditions in the network. Is that fair? Yeah, exactly. Yeah, you got it. If technology improves and network bandwidth in the future, let's say, goes 10x (and it's possible; it's been going up for the last several decades, and with 5G coming up there's even more room to improve), then again, the transactions per second of our layer one protocol, NC-Max, will increase automatically. And another thing that I forgot to mention,
Starting point is 00:27:28 the third property of the improvement is that, for the Bitcoin protocol, the reason selfish mining is profitable is because it doesn't count the uncles, the uncle blocks, into consideration in the difficulty adjustment. And therefore, attackers can sort of increase the uncle rate and then have decreased difficulty for the next epoch, you know, after the difficulty adjustment, and then become profitable, right? So selfish mining can be profitable. And for us, we count the uncles also within consideration in the difficulty adjustment, and therefore selfish mining is not profitable on CKB. Okay, so here you eliminate selfish mining by adjusting that difficulty rate on, you know, on the fly, sort of thing.
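The point Kevin just made about counting uncles can be made concrete with a toy retarget calculation. This is a deliberately simplified model, not the actual CKB retarget formula: if difficulty retargets on main-chain blocks only, orphaning honest blocks makes the epoch look slower than it was and pulls the next difficulty down; counting uncles removes that lever.

```python
# Toy model: why counting uncles in the retarget blunts selfish mining.
# An epoch is expected to contain EPOCH_BLOCKS blocks' worth of work.
# A selfish miner orphans some honest blocks; those show up as uncles.

EPOCH_BLOCKS = 100

def retarget(difficulty: float, main_blocks: int, uncle_blocks: int,
             count_uncles: bool) -> float:
    """Scale difficulty by observed work versus expected work."""
    observed = main_blocks + (uncle_blocks if count_uncles else 0)
    return difficulty * observed / EPOCH_BLOCKS

# Suppose an attack orphans 20 honest blocks in a 100-block epoch:
# 80 blocks land on the main chain, 20 become uncles.
bitcoin_style = retarget(1000.0, 80, 20, count_uncles=False)  # difficulty drops
uncle_aware = retarget(1000.0, 80, 20, count_uncles=True)     # difficulty holds
```

In the first case the attacker is rewarded with an easier next epoch; in the second, the orphaned work is still counted, so the attack buys nothing.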
Starting point is 00:28:17 And then that makes selfish mining unprofitable for anyone who's trying to attempt that. How does the network know what the uncle rate is? How does that get incorporated into, say, a block? Or, you know, how does that information get captured and transmitted across the network for that difficulty rate to be adjusted? Okay. This is where I'm a little bit outside my expertise here, because I'm not the developer that actually wrote the code
Starting point is 00:28:45 to implement the consensus algorithm. You know, I'll kind of go out on a limb and say it's probably a property in the block headers. But again, for folks that really want to find out... I think it is. I think from what I read, it's a property in the block headers. But I guess what I'm asking here, and it's fine if you don't know this, is, how does the network sort of come to consensus on what orphan blocks are?
Starting point is 00:29:13 I don't know if you can shed some light on this. I mean, it would be similar to how Ethereum already does it, right? Where in Ethereum, the miner will include any uncle blocks up to a depth of seven in the header. And they're incentivized to include them because they also get a reward for including more uncle blocks. And so I imagine it's probably something very similar. And I remember looking through some of the documentation for Nervos. And it's not in the header, but it's in another spot in the block. And it doesn't count towards the block's size limit, so as to avoid disincentivizing the inclusion of them.
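The layout Sunny describes, uncle references kept in their own section of the block so they never compete with transactions for the size limit, can be sketched roughly as follows. The field names and the one-megabyte limit here are illustrative assumptions, not CKB's or Ethereum's actual block format.

```python
# Sketch of a block layout where uncle references sit outside the
# transaction area, so including them costs no transaction capacity.
# Field names and the size limit are illustrative, not CKB's format.
from dataclasses import dataclass, field
from typing import List

MAX_TX_BYTES = 1_000_000  # hypothetical per-block transaction budget

@dataclass
class Block:
    header: bytes
    transactions: List[bytes] = field(default_factory=list)
    uncle_hashes: List[bytes] = field(default_factory=list)  # own section

    def tx_bytes_used(self) -> int:
        # Only transactions count toward the limit; appending uncle
        # hashes never forces a miner to drop fee-paying transactions.
        return sum(len(tx) for tx in self.transactions)

    def can_add_tx(self, tx: bytes) -> bool:
        return self.tx_bytes_used() + len(tx) <= MAX_TX_BYTES
```

Because `uncle_hashes` is excluded from `tx_bytes_used`, a miner loses nothing by reporting uncles, which is exactly the incentive property needed for the network to learn the true uncle rate.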
Starting point is 00:29:55 So yeah, I mean, that's kind of how it's very similar to Ethereum in that way. What I found really interesting, you know, reading through the economic model, was that I've never actually seen it presented in such a way that selfish mining is actually an attack on difficulty readjustment. Because, I don't know, I feel like when I learned about selfish mining, that wasn't how it was presented. But then I read one of the papers that was actually linked in your documentation, and it was actually very interesting to see that, you know, if you actually fix how difficulty readjustment works and make it take into account
Starting point is 00:30:34 the orphan blocks, then yeah, selfish mining actually becomes much less of a concern. There's not really much of an attack that you can do there. Yeah, it's very interesting, because we actually saw this firsthand during our testnet. So we had something called a mining competition. Basically, we encouraged people to mine the testnet, and they could get some, you know, testnet tokens that could be converted to mainnet tokens later. And then we actually saw somebody launch selfish mining, like a huge orphan rate, right? But we saw that going on for some time, and then it just stopped.
Starting point is 00:31:15 So our hypothesis is obviously that the person who was doing the attack, since it was testnet, that's how they could realize it's not profitable, right? So compare the income of just mining honestly versus doing selfish mining. So that was really interesting for us to see. Yeah. So that's just kind of the end of the story to what you said. So maybe I'll take a step back, actually. What made you guys decide to use proof of work in the first place?
Starting point is 00:31:44 Because it seems like today, all the new networks that are launching are all using proof of stake. In this month alone, I mean, I run a validator, and I think there are, like, you know, four new proof-of-stake testnets launching in January. What made you guys decide to launch a proof-of-work network in 2019 or 2020? Yeah, so, like you said, we're just about the only blockchain
Starting point is 00:32:10 launched with proof of work amongst, you know, the other blockchains that started around the same time. You know, again, I think it has a lot to do with the overall vision of the Nervos Network, which is, you know, we believe the layer 1 protocol needs to be rock solid, and then somewhat conservative in a way, not to try to do too many things,
Starting point is 00:32:34 and not to try to do both decentralization and scaling, and not to rely on novel cryptography. And what we wanted to have on the layer one is, like I said, something very rock-solid, decentralized, and battle-proven. And then something that has been studied in research for a very long time. And at the time, we looked at all the research that's been done on the Bitcoin consensus algorithm. It's just about the only candidate that fits this requirement.
Starting point is 00:33:08 You know, ever since, we started to look into more properties of proof of work. And then, you know, we feel that we made a good choice. Because whether you think about having this global settlement layer and the security properties you want for that, or whether you want to have a sustainable blockchain and what you need for that. And then whether it's about decentralization, you know, people say, sure, for proof of work there's mining pools and they may not be entirely decentralized. But it's the same for proof of stake as well.
Starting point is 00:33:43 You have staking pools or staking-as-a-service programs. Even more so, recently Binance and some exchanges got into the staking business and charge zero fees. So you start to see that staking services are becoming very much a centralizing force. And it tends to concentrate on the ecosystem service providers that already have a lot of power, like, again, wallets and exchanges. The reason is because they have coins, they have tokens. So it's very easy to see that these players will consolidate power over time.
Starting point is 00:34:22 And for proof of stake, it's very difficult to break out of that monopoly, if you will. Because, you know, for as long as large stakers continue to stake, because this is the rich-get-richer sort of dynamic, right? So if they continue to stake, they will retain their monopoly in the ecosystem, sort of like, you know, we have seen with delegated proof of stake, like EOS, and some of the issues produced there. I think for regular proof of stake, it's not an immediate concern, but, you know, things could move in a similar direction with the staking providers and service community. With proof of work, the difference is that for these monopolies, like mining pools, to retain their power, there's a huge operating cost. So they have to invest real resources and keep innovating to stay on the edge of technology
Starting point is 00:35:20 to keep their monopoly. And, you know, the technology paradigm shifts every decade or so, so it's a lot more difficult for them to retain that monopoly power forever. So I'm not saying proof of work has no mining pool centralization issues, but in our opinion it's a lot easier for that to change over time. You know, this includes the mining machine producers, the mining pools, and even where the lowest electricity cost is, because, again, that can change with technical innovation over time.
Starting point is 00:36:03 What made you decide to use NC-Max, which is, you know, very similar to Bitcoin-esque proof of work, or Nakamoto consensus, as opposed to some of the newer approaches to Nakamoto consensus, such as Bitcoin-NG or more DAG-like protocols? I feel like a DAG-like system would actually fit really well; you know, one of the cons of DAG systems is that you often do need a UTXO system to make them very efficient, which happens to be what you have. Yeah, I mean, whether we use Nakamoto consensus or some variation of it, that's half the question, right? So for that, I think it would probably be good to have our consensus researcher, Ren, here,
Starting point is 00:36:50 because that's exactly what he focuses on for his entire research and his PhD program and all that. And he is actually known as the person that broke Bitcoin Unlimited. So he would give a very good answer to the question. He basically studied the other variations, looked at the chain quality and other properties, and decided Nakamoto consensus is actually the best of all the alternatives. So just to give a little bit of background: this NC-Max algorithm was developed while
Starting point is 00:37:29 he worked for Blockstream, under the mentorship of Greg and Peter, some of the prominent Bitcoin researchers. So there's definitely a lot of Bitcoin influence in this, and a lot of thought on how to maximize the protocol efficiency of Bitcoin's consensus algorithm. So that would be a great question for him. But that's what he says, right? So I know that's his research area.
Starting point is 00:38:02 There's a really great talk that he gave at the Scaling Bitcoin meetup in San Francisco last year, about a year ago, which was invaluable for my research for this interview. So I'll link to that in the show notes. I would recommend anyone who's interested in learning more about that check out that talk. There's also a series of blog posts on your website which kind of summarizes the contents of that talk, and we'll link to those as well in the show notes. Yeah, I think that would be the best. I wanted to ask you about mining. So you have your own hashing algorithm.
Starting point is 00:38:37 So I presume that Bitcoin ASICs won't work on Nervos. Can you talk about what the mining ecosystem looks like? Yeah, so we have our own hash function, and, you know, we thought about this for quite a bit. Really, reusing any existing hash function puts the project at risk, especially when you start, because there is this inventory of existing machines that can always be pointed at your blockchain to, you know, double spend or attack the blockchain, basically. So this is the reason we developed our own hash function, called Eaglesong. So in terms of the evolution of the mining ecosystem of Nervos CKB, it will be very similar to how Bitcoin,
Starting point is 00:39:30 you know, you start with CPUs. In fact, talking about the mining competition that we did, in the first phase of the mining competition, everybody just CPU-mined some coins. And then it will shift to GPU miners, and then from GPU to FPGA. So right now, as we speak, we have both GPU miners and FPGA miners on Nervos. And then eventually we'll move to a more ASIC-based mining ecosystem.
Starting point is 00:40:00 So we are supported by pretty much all the major mining pools. We're pretty happy with the hash rate distribution and also the enthusiasm from the mining community. Yeah, I'm actually in the process of setting up a mining rig myself for Nervos. So I have my Grin miner and I'm turning that off. And so I'm trying to install the Nervos software right now. Unfortunately, I didn't have a chance to finish it in time for the episode. So, yeah, let's move on to the VM, the CKB-VM, because that's actually one of the most interesting pieces that I really like.
Starting point is 00:40:40 Because, you know, this is the VM I've always dreamed of. You know, I always wanted, at some point, to get around to designing a smart contracting system that uses UTXOs. And then I found out, oh, wow, this is what you guys ended up creating. So, can you tell me a little bit about the cell model? What exactly is it? What does it mean to be separating state generation and state verification? Why is this the design you decided to go with? Yeah, happy to. That's the core of the ledger structure.
Starting point is 00:41:16 So the cell model is a UTXO-like ledger structure, or data structure, if you will. So you start with Bitcoin UTXOs: a Bitcoin UTXO can only express one piece of information, which is a balance, right, an amount of bitcoin. And so you generalize that, and then you are able to support, you know, any type of information; you can do token balances, for example. And then you add the capability of smart contracts, or fully Turing-complete scriptability,
Starting point is 00:41:52 that will execute in the virtual machine. And that becomes the cell model. So a cell is basically a generalized UTXO. And just like in Bitcoin, when you create a transaction, you have inputs and outputs. So in Nervos CKB, and when I say CKB, by the way, I realize I didn't explain it, I just used the acronym.
Starting point is 00:42:14 It stands for Common Knowledge Base, which is the layer one blockchain for the Nervos network. Okay, so just a little side note. So CKB, yeah, that's an important point. So CKB is what you call this Common Knowledge Base, which is in fact the layer one that supports everything else.
Starting point is 00:42:36 Yeah, so CKB is just the one layer one blockchain. And then for layer two, you can have many blockchains, or channels, and all that. So when you create a transaction on Nervos CKB, you also create inputs and outputs, just like Bitcoin transactions. And then the inputs are basically cells instead of UTXOs, right? So you have multiple cells that can be part of the inputs. And then the outputs are generated as a result. So when you create a transaction, when it's verified, executed, and accepted, the inputs will be spent, right? So these cells become what we call dead cells, or expired cells. And then the outputs are the new cells. So for Bitcoin, it's the set of unspent transaction outputs. That's
Starting point is 00:43:23 the current global state. For Nervos, it's the unspent cell outputs. So that would be the global state for Nervos. So smart contract code can also be part of a cell, right? So this is where we have two types of script. For Bitcoin, you have this lock script, and for Nervos there are two types of scripts. One is the lock script, just like in Bitcoin. So you can still say, you know, I have my private key, I'm going to unlock this. For Bitcoin, you unlock the UTXO to be able to spend it; for Nervos, you unlock the cell to make it part of the inputs of a transaction, right? So that's what the lock script is: it allows you to include a cell as one of the inputs of the transaction.
Starting point is 00:44:09 And then you have the second type of script, what we call the type script of a cell. So every cell has two scripts, right? Lock and type. So the type script, not to be confused with the TypeScript programming language, allows you to put cells as the outputs of the transaction. In other words,
Starting point is 00:44:31 you can you know the type scripts verifies that the state transition from input to output actually is valid according to a pre-specified rules again that's basically we're talking about smart contracts right that's how i would say you know the the cell model is a generalized utxo model so it takes basically input output you know style and transaction structure and then but just add this type script to make sure that you can run the verification rules in the virtual machine, effectively smart contract to impose rules on state transitions.
Starting point is 00:45:08 Would it be fair to kind of classify this as taking almost a bit of a more functional programming approach? Exactly. You got it. Where instead of this data that has these functions, like Ethereum smart contracts where I'm calling functions that are mutating the state of this contract,
Starting point is 00:45:28 instead what I'm doing is sort of defining contracts as these pure functions, which are these lock scripts and type scripts, and I'm basically burning the old state of a contract by passing it through this function, and it outputs a new state of a
Starting point is 00:45:45 contract, which is this new cell. Yeah, exactly. It's very predicate-based, right? So for the verification engine, or the virtual machine's execution, the result is just a boolean. It's true or false, right? So it's just a valid or invalid
Starting point is 00:46:01 transaction. If it's valid, then it's accepted by the blockchain. If it's invalid, it'll be rejected. And like you said, right, you actually spend or burn the old state, and then you have this new state coming out of the outputs. And are there rules about what kind of outputs have to be generated from the burning of this input? How would I string a series of lock script functions together in order to make one larger, you know, workflow that a user would want to do? Yeah, you got it, right? So what you're describing is the equivalent of, in Ethereum, smart contracts calling
Starting point is 00:46:46 other smart contracts. And what you're describing is the equivalent paradigm in Nervos. So instead of object composability, right, the account model in Ethereum is kind of this OO programming paradigm, where you have accounts that have internal state, and when you interact with them, you mutate the state, and then objects themselves can pass messages and use that to mutate other objects. And in Nervos CKB, what you just said is exactly how you compose transactions, or compose the smart contracts together: you pass them through a series of transactions, so that outputs can feed into inputs, and so on and so forth. You can structure a series of transactions that way.
Starting point is 00:47:32 And all the computation for this is done off-chain, correct? So the blockchain only stores the state, but the computation is done on, I guess, individual nodes? The verification happens on the blockchain. Again, let's think about Bitcoin; I think it's easier to use Bitcoin to explain some of the concepts, since we're very similar to the UTXO model. So with Bitcoin, you construct the transactions, and the code that constructs the transaction runs off-chain. This is in your wallet. You search for UTXOs, right? Okay, I've got this many UTXOs, which I'm going to include as part of the transaction. So that's not Bitcoin Core consensus code, right? That's just the wallet
Starting point is 00:48:18 that searches and constructs the transactions. But verification, which means things like inputs and outputs have to be balanced, that happens on-chain. So verification always happens on-chain. For us, it's the same.
Starting point is 00:48:32 So constructing transactions is off-chain, but the verification, you know, whether you can verify signatures and whether the type script, running in the virtual machine, returns a boolean true value, is on-chain. So you make sure that these rules are verified. So if I understand that correctly, you know, maybe another way to think of it as well
Starting point is 00:48:59 is not just as a functional system, but also as a declarative smart contracting system. And so maybe to give an example of this idea of why state verification should be done on-chain but generation can be done off-chain: imagine you had a smart contract whose point was to sort a list of numbers, right, from smallest to biggest. The sorting algorithm would take n log n time, but the verification that a list is sorted actually only takes linear time. And so you could basically require that whoever is generating the state do the sorting in their wallet, but when they put it on the chain, everyone is just verifying that the list is sorted, not actually running the sorting algorithm themselves. And that actually allows you to basically
Starting point is 00:49:54 make the work that everyone else has to do much smaller. Yeah. So, yeah, I think you got it exactly right. So you specify specifically what you care about verifying, you know, not the procedure, not the steps to get there. Instead of everybody taking the same steps and seeing, oh, are we arriving at the same state, you say: this is what I care about, and in the end, the result has to verify against this, right? And then, yeah, like you said, computation and verification can have an asymmetry in terms of complexity.
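Sunny's sorting example can be made concrete with a small sketch (illustrative only): generation does the O(n log n) work off-chain in the wallet, while on-chain verification is a single linear pass.

```python
from collections import Counter

def generate(numbers):
    """Off-chain 'generation': the expensive part, run by the transaction creator."""
    return sorted(numbers)  # O(n log n)

def verify(original, claimed):
    """On-chain 'verification': a cheap linear check that the claimed output
    is nondecreasing and is a permutation of the original input."""
    return Counter(original) == Counter(claimed) and all(
        claimed[i] <= claimed[i + 1] for i in range(len(claimed) - 1)
    )  # O(n)
```

Every full node runs only `verify`; only the party constructing the transaction runs `generate`, which is the asymmetry being described.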
Starting point is 00:50:29 And this, you know, the sorting algorithm you mentioned, is very reminiscent of a lot of the zero-knowledge proof stuff that we see today, right? So how do you reduce generalized computation, or a specific computation, into some sort of circuit and rule that can be more easily verified, at least, you know, with reduced computation to perform? So right. I was just going to mention this. This seems sort of like, you know, ZK
Starting point is 00:50:55 rollup in a way as well, where, you know, the computation of the generation of the proof is all done client-side, but the verification, which is simple, is the actual lock script? And so I guess, is this kind of what you guys are implying when you're saying that it's well designed for L2 systems, in the way that it can make it easy to do these rollup-style processes? I think it is. I mean, back then, when we first started with this direction, zero-knowledge
Starting point is 00:51:24 rollup was not even a term, right? So there were definitely early layer two solutions being built, but, you know, when we look at it, it kind of points in that direction: I don't care about every single state transition that happens on layer two, or all off-chain, as long as we can come to an agreement on layer one, you know, either crypto-economically or by some other means, where we can say, okay, this is what we all agree on. And then that's really it, what I want to verify.
Starting point is 00:51:56 That's the final state you want to verify. And so this sort of paradigm maps very well to that way of thinking. So what happens when you actually want to create a smart contract where you want multiple people to be able to interact with it in the course of a single block, right? So let's say you have an ICO or something that's happening. And, you know, there's no reason that multiple people can't participate in the ICO in a single block. But the problem is there's only one cell, and whoever hits it first ends up killing that
Starting point is 00:52:31 cell. And for the second transaction that tries to buy from that ICO, that cell is no longer there. So how do you construct paradigms like that in this UTXO cell model? Yeah, so I think what you're pointing to is kind of the parallel processing, right, that is allowed by the UTXO model. And in Nervos, when you construct transactions, you actually specify the dependencies of the transactions, and it's explicit, right? So then the runtime can do this dependency mapping and see, okay, these transactions can actually be executed in a parallel fashion, or verified in a parallel way. So as long as they're not
Starting point is 00:52:52 conflicting with each other, they don't grab the same cells, and things like that, then they can be processed in parallel.
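That dependency idea can be sketched like this (illustrative only, not the actual CKB scheduler): transactions that spend disjoint sets of cells can be verified in parallel, while transactions contending for the same cell have to be serialized.

```python
def conflicts(tx_a, tx_b):
    """Two transactions conflict if they try to spend a common input cell."""
    return bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))

def parallel_batches(txs):
    """Greedily bucket transactions so that no two transactions inside a batch
    share an input cell; each batch can then be verified in parallel."""
    batches = []
    for tx in txs:
        for batch in batches:
            if not any(conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])  # conflicts with every existing batch
    return batches
```

Because the UTXO-style inputs make each transaction's dependencies explicit, this conflict check is just a set intersection, which is exactly what an account model with one shared contract object cannot offer.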
Starting point is 00:53:18 But what if they are trying to hit the same cell? So imagine this cell was a like counter on a tweet, right? So I tweeted something on this Twitter thing built on CKB, and this cell is how many likes it has. Let's say two people try to like it and the current value is five, right? And the smart contract, the lock function, says that it will increment by one, right? So two people both try to send a like, and they both send the declared value of six. The first person will increase the like counter from five to six. The second person's like will fail, because it'll say, oh, it's already at six.
Starting point is 00:53:58 How would I have it so both people's likes can, you know, happen in the same block? I don't want it so that every time you do a like, it's like, oh, your like got sent in the same block as someone else's, so you have to go redo the like again. You know, for those instances, well, one of them would be accepted. So whichever comes first, you know, that transaction will be propagated the fastest and, you know, reach global consensus. And again, it's a chain-based consensus algorithm like NC-Max. So it's possible, strictly speaking, that these two likes will be in different
Starting point is 00:54:33 blocks on different chains temporarily, but eventually the network will get to consensus. And if we're talking about transactions in the same block, then yes, one of them will be rejected. If the previous one was included in there, then the next one will be rejected. Isn't this pretty bad UX then? This is, again, I mean, this is where you could,
Starting point is 00:54:56 we have a term actually internally called layer 1.5, right? So this is where you can have aggregators that aggregate all the transactions and then propagate and produce blocks. And that's kind of how you'd handle the issue you talk about. But I think, for what we're talking about here, we do have a concept. Again, this is different from the Ethereum account model
Starting point is 00:55:22 and smart contracts, where you have effectively everybody's balance within one single account, and then everybody tries to kind of mutate the same object, if you will. With Nervos, it's different in that, let's again say with ICOs, right, all the ICO participants are actually operating on their own cells. So if I have a balance of a token, let's say 100 tokens,
Starting point is 00:55:44 it's actually contained in my own cell, right? And for your token balance, you also own the cell that contains your balance. So I can unlock my cell and then, you know, spend my tokens and maybe send them to Sebastien, and you can do the same as well. And this is important; again, this is just like Bitcoin UTXOs. So think about it.
Starting point is 00:56:07 Everybody's own asset is, we call this a bearer asset, right? So everybody truly owns the assets that they have. So this way, when I try to mutate my cell, let's say I send Sebastien, you know, a few tokens, then that's going to be independent of you trying to mutate yours. You maybe want to send somebody else some tokens. We don't have to lock the contract so that I can update and then you update, for example. In Nervos, it's called first-class assets, which just means, you know, the ownership of assets or tokens is segregated by users, if you will.
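A toy sketch of that first-class-assets point (illustrative; the real CKB cell layout and token logic differ): each holder's balance lives in a cell that the holder owns, so transfers by different holders spend different cells and never contend on a shared contract object.

```python
# Live cell set: cell_id -> (owner, token_amount). Each holder's balance
# lives in cells that holder owns, not inside one shared contract object.
def transfer(cells, cell_id, sender, receiver, amount, new_id):
    """Spend the sender's own cell; create a payment cell and, if needed,
    a change cell. No other holder's cell is touched."""
    owner, balance = cells[cell_id]
    if owner != sender or balance < amount:
        raise ValueError("lock script would reject this spend")
    new_cells = {k: v for k, v in cells.items() if k != cell_id}  # old cell dies
    new_cells[new_id] = (receiver, amount)                        # payment cell
    if balance > amount:
        new_cells[new_id + ".change"] = (sender, balance - amount)
    return new_cells
```

Because Kevin's transfer consumes only Kevin's cell, a simultaneous transfer by Sunny touches a disjoint cell set, so both can land in the same block without any locking.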
Starting point is 00:56:44 So taking a step back, looking at this and comparing it to Bitcoin, just so that we get an idea of where it sits next to Bitcoin and how it compares: the things that separate it from Bitcoin are the cell model, which is a generalized version of UTXO where we also have state in addition to public key balances, and this consensus mechanism that improves throughput by detecting the uncle rate and adjusting difficulty based on the current uncle rate. Are those the only two things that separate this from Bitcoin? Yeah.
Starting point is 00:57:23 Another big difference is the CKB RISC-V virtual machine. So Bitcoin does not have a virtual machine, right? It has a fixed set of opcodes, and you can use that to construct, like, a multi-sig, a simple smart contract, if you will. But with Nervos, it's full, Turing-complete scriptability. So it means, you know, our lock and type scripts can run in the virtual machine. So this is, I think, what Sunny kind of mentioned a little bit
Starting point is 00:57:47 earlier, before we got to the cell model. So RISC-V is a standard, like a CPU standard, a CPU architecture. The virtual machine of Nervos CKB is essentially a RISC-V computer simulator, which means all the programming languages that can compile down via the LLVM or GCC toolchains can then be used to script on Nervos CKB.
Starting point is 00:58:11 You can use these languages to write your equivalent of smart contracts on Nervos. So that is, I would also say, a big improvement over Bitcoin. And there's also the economic model, which we haven't gotten to yet, but we can get to that later. I guess that's an important distinction as well. I mean, what I was asking was more about the protocol and consensus aspect of it. Yeah, let's talk about RISC-V a little bit. So RISC-V is a framework, like a very low-level framework, that gets implemented in things like CPUs,
Starting point is 00:58:41 and then that's where very low-level languages get built and can interact with, like, the core processing of the computer. I mean, that's kind of a simplified description of what RISC-V is. But why did you want to build such a low-level VM and not keep it at a sort of higher level of description? I think there are several reasons. One of the most important reasons is, if you look at hardware specifications, it's very rare that they change. Or if they change, it's a very rigorous process. And then they almost always observe backward compatibility.
Starting point is 00:59:18 Because, again, producing hardware is very, very expensive, they want to preserve the prior investments. This is perfect for the blockchain space, because blockchains almost have a hardware-like property, which means, like, a new opcode is very difficult, it needs a lot of justification to add. It's very difficult to break the current system, if you will. And then you don't want to upgrade too often. And when you do want to upgrade, you want to be able to make sure that existing smart contracts and whatnot can still be preserved.
Starting point is 00:59:47 From that point of view, I think it's really good. It's an open protocol. And it has a lot of ecosystem players. It's been rising in popularity in the last couple of years. It's essentially the sort of anti-Intel alliance. Right. So a lot of big ecosystem and industry players pour a lot of money into this. Even as it evolves and adds new instructions, for example, it's always well compartmentalized. It doesn't break the previous ones, which is very different from another set of standards like WebAssembly, because WebAssembly is sort of this standard created by an alliance of competitors, of browser vendors, and they have very specific concerns.
Starting point is 01:00:32 And then, you know, they have sort of a conflict of interest. And, for example, it's a more high-level virtual machine; we can get a little bit into why lower-level is easier. Also, it's not designed to work for a blockchain or this kind of space. For example, for the RISC-V virtual machine, because it's a CPU simulator, we can actually use the CPU cycles to precisely measure computation units, the equivalent of gas in Ethereum, right? So for how much a computation costs, the CPU cycle count will tell exactly how many cycles that computation will cost. Whereas in Wasm it's a lot more difficult, because it's a higher-level virtual machine and even has garbage collection. So when that kind of gets thrown into the whole
Starting point is 01:01:17 equation, it's just really difficult to do that. Being such a low-level programming environment, what's the developer experience like? As a developer entering the Nervos ecosystem, I want to build a dApp or a DAO or something; what do I need to learn in addition to knowing how to code, say, in Go, for instance? Basically, whatever languages can be compiled down to run on a RISC-V computer, you're able to run on the virtual machine.
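The cycle-metering point from a moment ago can be sketched like this (a toy stack machine, not real RISC-V or the actual CKB-VM): because the VM simulates a CPU, the cost of a script is simply the number of cycles it executes, bounded by a hard budget, analogous to gas.

```python
class OutOfCycles(Exception):
    """Raised when a script exhausts its cycle budget, like running out of gas."""
    pass

def run(program, max_cycles):
    """Execute (op, arg) pairs on a toy stack machine, charging one cycle
    per instruction; abort once the cycle budget is exhausted."""
    stack, cycles = [], 0
    for op, arg in program:
        cycles += 1
        if cycles > max_cycles:
            raise OutOfCycles(f"exceeded budget of {max_cycles} cycles")
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack[-1], cycles
```

The point of the low-level design is that this count is deterministic and exact; a higher-level VM with garbage collection cannot attribute cost to instructions this cleanly.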
Starting point is 01:01:45 Again, it's just a computer simulator, right? So, again, as I was saying, this industry standard is evolving very, very fast, and there are industry players putting a lot of money into this. For example, take the C ecosystem: we don't recommend people build smart contracts with C, but theoretically speaking, that's the easiest way to start, with C programming, and even higher-level languages, if they can compile down to C, can be supported as well.
Starting point is 01:02:07 One really good property of this, I would say, is that a lot of the crypto primitives are actually very well supported in the C ecosystem. A lot of them are written in C or can be easily compiled to C. So if you want to use a crypto primitive, let's say for your zero-knowledge proof solution, elsewhere you'd have to wait for Ethereum to hard fork and add that precompile or things like that. Here you can just roll up your sleeves, drop that into the RISC-V computer, add your own library, and then you can use it very, very easily.
Starting point is 01:02:49 We have some community members that are working on this, and it's been pretty good, you know, supporting some signature algorithms that are natively supported by mobile phone chips and even browsers. So, like, the private key solution will be much smoother on Nervos, or much faster to get there, than on other platforms. Are there, like, tutorials that people can follow for writing smart contracts on this? Because, like, you know, compiling Rust to WebAssembly, or to RISC-V, is one thing, but you also
Starting point is 01:03:21 need to make sure your contracts follow the lock and type script rules, and how those are formatted and whatnot. To answer your question, there are. I think at this stage, the best way is probably to pop into our Telegram, the Nervos Network dev channel, and get help there. So we do have documentation, and there are developers that have built tutorials and shown how to do things, but it's definitely not as mature as some of the other ecosystems. So I will recommend that developers who want to roll up their sleeves
Starting point is 01:03:55 just come talk to us. We're all hanging out there, so we're very friendly and helpful. Let's move on to the last thing that we said we'd talk about, which is the other big differentiator here: the economic model of CKB. Can you tell us about what CKBytes are
Starting point is 01:04:13 and why that economic model was chosen in order to design a multi-asset store-of-assets chain? Again, that's kind of our tagline for the layer one protocol. So to answer your first question, the native token of Nervos CKB is called the CKByte, right? So one CKByte, or one coin, if you will, represents a claim to one byte of global state.
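A sketch of what "one CKByte backs one byte of state" could mean in practice (illustrative; the real occupied-capacity accounting in CKB differs, and the overhead constant here is hypothetical): a cell cannot be created unless its capacity, denominated in CKBytes, covers the bytes it occupies.

```python
# Hypothetical fixed overhead for a cell's own bookkeeping fields
# (capacity field, lock script reference, etc.); the real CKB accounting differs.
CELL_OVERHEAD_BYTES = 61

def occupied_bytes(cell):
    """Bytes of global state this cell consumes: bookkeeping plus its data."""
    return CELL_OVERHEAD_BYTES + len(cell["data"])

def can_create(cell):
    """A cell is only valid if the CKBytes it carries cover the state it uses."""
    return occupied_bytes(cell) <= cell["capacity"]
```

This is what ties the token to state: to store more data on layer one, someone has to hold, and effectively lock up, correspondingly more of the native token.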
Starting point is 01:04:37 So in a way, if you own, let's say, 10,000 CKBytes, you own 10,000 bytes of the overall global state in the blockchain. As for the reason we do this, I'll start with the problem, and then we can talk about how we arrived at the solution. One differentiator, I feel, for what it takes to really be a good layer one: if transactions are moving to layer two, and that's where it's cheaper and faster and everything to do transactions, then what's the purpose of layer one, right?
Starting point is 01:05:11 So in our view, layer one should be for asset preservation, or providing security and sort of this censorship resistance property for the assets. So layer one should be where assets are. And then different from Bitcoin: on Bitcoin, there's only one asset, which is the BTC itself on the Bitcoin blockchain. On any smart contract platform, you have many, potentially infinite, user-defined assets.
Starting point is 01:05:38 The goal of the layer one protocol is to make sure it provides the sustainability so that the assets are going to be long-term secure. So this is what we call the concept of store of assets, or multi-asset store of value, right? If you compare with Bitcoin, the single-asset store of value, what are the properties when you design this sort of multi-asset store-of-value system? So think about this, right? Think about when you have multiple assets, let's say on Ethereum. In Ethereum, miners are paid a fixed amount of ETH per block, and in Bitcoin, miners are paid a fixed amount of BTC per block.
Starting point is 01:06:22 And this makes sense. And this makes their economics work, because if the asset value increases a thousand times, right, if Bitcoin's value appreciates a thousand times, the miners' income will also appreciate a thousand times, because they get a fixed amount of BTC per block. So this makes Bitcoin a good store of value, because it's like when you have a city, right? When your assets are appreciating, then the defense automatically appreciates as well. The protocol can provision defense, you know, as the asset value rises and falls. On Ethereum, that's not the case, because if the assets on Ethereum go up and down, right,
Starting point is 01:07:06 they have almost no correlation with the ETH value. So in other words, if you think about this hypothetically, the ideal native token for the Ethereum blockchain, if you take the store-of-assets mindset, would be some sort of index fund unit of all its assets, weighted by market cap. Then a single unit of that index fund could be used to pay miners. Because then if this whole ecosystem goes up and down,
Starting point is 01:07:35 if the whole asset value goes up, then this native value also goes up and provides more protection for the security of the protocol. But Ethereum uses an entirely different asset, ETH, to fund miner compensation. So as all the crypto assets on Ethereum
Starting point is 01:07:52 go up like 10 times or 100 times, you can't guarantee that the defense will go up with them. And in our mind, this is the issue for the crypto-economics, because attackers can always just attack your base consensus protocol to be able to double-spend these assets. If there is enough incentive,
Starting point is 01:08:13 if the Maker token really appreciates that many times, then they can attack the Ethereum consensus algorithm. The flip side of this, and it's really the other side of the same coin, is that on Ethereum, it's really the ETH holders that provide security, by voluntarily diluting the ETH supply to provide that security to the miners.
Starting point is 01:08:37 But these crypto asset holders are not making a similar contribution, right? These tokens don't inflate to pay the miners, even though their assets are also protected; they're not contributing to this common good. And whenever you have these tragedy-of-the-commons issues, when the incentives are not aligned and you're not making people contribute to the common good, then it can be abused and there will be issues. You've got people that free-ride on the security. And in our mind, it's not sustainable.
Starting point is 01:09:08 So how do you turn K bytes into that index fund? It's very difficult to truly build index fund, right, on the blockchain. So the example I gave is to point to a direction that, you know, where we need to think about this issue. And then the solution in our mind resides in the fact that you need to have a native token that can capture the demand of all the crypto assets running on the blockchain. So in other words, if I'm a maker holder, token holder, and I need to hold a maker token,
Starting point is 01:09:41 I need to contribute to the blockchain's overall security as well. So it needs to be this single asset that can capture the demand of all the assets on the blockchain. The other one is that this sort of contribution has to scale over time. In other words, if I hold my token for longer, then I need to contribute more to the security of the network. So it has to scale with both space and time. So our native token is called CKBytes, which represents a claim to the global state.
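The "one CKByte, one byte of global state" rule can be written down as a simple invariant. This is a toy model for illustration, not actual CKB code:

```python
# Toy model of the CKByte-as-state-claim rule: a holder may occupy at most
# as many bytes of global state as the CKBytes they hold.

def can_occupy(ckbytes_held: int, state: bytes) -> bool:
    """True if the holder's CKBytes cover the state they want to store."""
    return len(state) <= ckbytes_held

# A holder with 10,000 CKBytes can claim up to 10,000 bytes of global state:
print(can_occupy(10_000, b"\x00" * 9_000))    # True: within the claim
print(can_occupy(10_000, b"\x00" * 12_000))   # False: exceeds the claim
```

Because every asset's on-chain footprint consumes CKBytes under this rule, demand for state translates directly into demand for the native token, which is the point Kevin makes next.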
Starting point is 01:10:10 So the idea is any crypto asset demand will result in occupation of global state. If Maker, for example, grows its market cap and gains more users, then they will also occupy more of the global state, which puts demand on the native token. So the native token is basically this value capture of the entire ecosystem, if you will. So here we typically make the analogy of land, right? You can open different shops.
Starting point is 01:10:41 You can open, you know, McDonald's and laundry, laundromat and all other different shops. They are very different in their own way and they have their own ecosystems and, you know, economic properties. But they all occupy land. So whatever shop you open, it will put demand on land. So land is then this value capitals. of this ecosystem. So the single property or single asset that can capture the value of the multiple assets or economic being built on the blockchain. So that's, I think, our insight. It seems to capture the number of users that are holding a token, but not exactly the value of the
Starting point is 01:11:19 tokens, right? Because let's say there's one million users holding MKR. If the value of MKR 10xes suddenly, that doesn't actually increase the amount of storage that MKR is using. Only if it's the number of MKR holders 10Xs, does it increase the demand for storage. If you add another pre-insumption, which is the global state, it's scarce. It has to be, if it's an infinite resource, then you're right. But if it's a scarce resource, which means it's actually market priced, right? So which means that the more demand you put on land, it actually drive up the land price. So which means if you think about a corporation balance sheet, right, if the cost of preserving value or, you know, build a building in this city, it's this much.
Starting point is 01:12:10 If it's very high, then I will not build the building here because the relative cost is too high. If maker value increases, that will decrease the relative cost of every single holder's relative cost of putting on the blockchain. So if it cost me 10 cents to store $1,000 of value, for me, that's no-brainer. So if it costs $100 to store $1,000, then it's different calculation. So the way we see this is it will be sort of a process, because the global state discards resource, it will become a process of value density on the blockchain keep increasing. And as you keep increasing, those lower value sort of application or occupation will be slowly move out of the blockchain and move to layer two, and then probably use some sort of proof and
Starting point is 01:12:59 really simple ones, can still utilize the security guarantees, but maybe make some sacrifice somewhere else. And then the high value assets are still going to be kept on layer one, because again, this is the most secure, global consensus and everything. As value goes up, as value density goes up, so it encourages people to stay and then encourage people to occupy and not to leave, because the relative cost is lower. What did we also just start to see people move towards more off-chain state? So instead of actually holding a lot of data in a cell, they just store hashes in the cell, like state hashes in the cell, and then pass in the data when they make the transactions.
Starting point is 01:13:45 So it'd kind of be like stateless verification on Ethereum. And that way, any user would really only ever have constant demand for state, because they could take all their personal state and put it into one 32-byte hash. I think it's all trade-offs, right? So again, we try to make the analogy of, let's say, Manhattan's land, right? In the early days, because the land is so cheap,
Starting point is 01:14:09 and then you can just build fast food restaurants in there and then not feeling like this is wasteful because it almost costs you nothing to acquire the land and build the shop. You won't just drop a skyscraper in Manhattan on day one because it's not that it's impossible. I mean, you make sacrifices, which means you have to ride the elevator up and down every day, right? So for like what you described, it's putting state root and, you know, sort of proof on layer one does have its consequences as well because you probably have some latency tradeoff you have to make.
Starting point is 01:14:38 And then, you know, if you're running on the platform, maybe there's token economics assumptions or livelence assumptions of, you know, watchtower. You have to make those assumptions. So those are not without its cost or tradeoffs. So what we foresee is when the cost of global state is low, then people will come and directly build on layer one, sort of build your sort of Manhattan in the early days, and then as more valuable applications come up, and then the ecosystem grows, the land price will increase,
Starting point is 01:15:13 and then will automatically provision higher security, and then protect more protection. And again, what we believe is this is actually, actually has a positive feedback loop, which is just about very, very few positive feedbacks. I actually gave a talk on flywheel economics in, I think, last year's crypto economic conference on the specific topic. We can get to that. This is positive feedback loop will, it's very important because the most important requirement or demand of assets on blockchain is security, right? It's not its storage cost, right? This is, some people,
Starting point is 01:15:50 feel like, oh, you know, storage's getting cheaper and cheaper, but why is this getting more expensive? The answer is you're not just saving something there, right? You're saving something of huge, very, very important value. So what you're paying for is really the protection of the security.
Starting point is 01:16:07 As a store of asset blockchain becomes more valuable, you know, have more protection, it will attract more valuable assets to migrate to the blockchain. And as you migrate to the blockchain, then this is the positive feedbacks loop. It will increase the token
Starting point is 01:16:25 because you put more demand on the token and then increase the property and then you attract more valuable assets. That is the flywheel that we talk about. And I feel you almost have to have this sort of token economics to be able to be sustainable and to preserve more and more assets. The other chain that I think does a very similar economic model
Starting point is 01:16:47 is EOS, right? Because they also have to be sustainable. this sort of similar notion of holding US tokens, grant you access to more of computational resources and storage resources, so the RAM and CPU and NET that they have. Their model gets much more complex because of, you know, I think they added too much complexity there.
Starting point is 01:17:07 One thing that does make sense is for them, each token represents a percentage share of the capacity rather than an absolute amount. Given that K-bytes is a, you know, fixed supply system, that means that there's a fixed amount of storage that is ever possible on the system. And doesn't that like, you know, as we see advancements in storage and storage becomes cheaper over time, wouldn't we want to basically maybe have total storage size be a governance
Starting point is 01:17:35 parameter and your CKBice is a percentage of that? And as storage gets better over time, over the years, then we can increase the total network size. Very good question. So I think I'll dissect this in detangle. this in a few ways. The cost is not just about that hard drive space. Really the cost for the entire network is that we limit the global state, we cap the global state with the monetary policy, right? So every year, the maximum sort of global state is predictable. That's governed by the monetary policy,
Starting point is 01:18:08 how much CCBites you issue over time. We do this to make this scarce resource. The reason is that we want to do this no-compromise, design. addition approach. In other words, the goal of doing this is to preserve desalcadation, right, so that everybody can run a phone node, just like the BTC philosophy, everybody can run a phone node. It's very cheap. It's very fast to sync. Developers don't always have to depend on Inferra for developer applications. And then there's not this very, very expensive, you know, phone node that will take days to synchronize and all that, right? Everybody can verify independently transactions. And that's what we believe it's necessary for a true decentralized global.
Starting point is 01:18:50 network. If you do that, then you have to cap the global state. It's not that your hard drive is not large enough, right? It can increase many, many folds in the future, but the way we cap it, it's because we want to preserve these statistics and properties. And also, like, how fast the network can sync and things like that. So with this, decentralization itself, it's really public good. It's really a piece of public good. So which means everybody has to contribute to it. If you think about our issuance policy, which is part of the crypto economics, if you think about this issuance policy, we have a Bitcoin-like sort of fake supply, as you said. We call this first issuance or base issuance. So this is a Bitcoin-like fixed supply issuance. That is not enough because I talked
Starting point is 01:19:39 about this earlier that people that preserve assets on the blockchain have to pay with the time that they preserve their assets on the blockchain. So this is in the Ethereum called state rent, right? So the whole concept of state rent, how do you pay for these and things like that? Because our native currency is a byte of the global state, we come up with the mechanism that you can pay rent automatically with issues. So this is what we call secondary issues.
Starting point is 01:20:07 So the idea here is this. Imagine on Bitcoin, you only want to charge a certain type of Bitcoin. coin holders, how do you do that? Let's say the people who use their UTXO to store, like it's a color coin to store some other data than balance. How do you charge only them for fees for state rent? So the way that we do it is, okay, so instead of, let's say, Bitcoin's issuance is 50, 25,
Starting point is 01:20:35 like it goes down every four years, and then we tax something along with that schedule. Effectively it would be like 51, 26, 13.5, and so on so forth. this constant component of the issuance, we call secondary issuance, right, for each block. And then imagine you can give everybody this one block first. And then you take away the ones that are not using their global, their k-bys to store data, to store state. Then you sort of compensate them for the issuance.
Starting point is 01:21:08 And then for the people that do use k-bite to store state, there are this additional issuance, we'll go to the miners. So if 30% of the Sikabyte's owners use their coin to store state, and then 65% of the people, they just don't use their CKBite to store state, right? Then 30% of the second issuance will go to the miners. Eventually, if we give everybody the same coin, that would be, you know, same issuance that would be fair to everybody. but then you sort of divert this part to the miners. So the longer you use your CKBite to store data, you keep paying miners a percentage of issuance
Starting point is 01:21:53 that would have been given to you. And then we have a special smart contract called the NervaSdahl. And if you save your coin, your native token to the NervaSdial smart contract, you will automatically receive exactly the same sort of yield or percentage of income or percentage of the second issuance so that for you, as if the second issuing does not exist and you are holding a Bitcoin-like fixed supply native token.
Starting point is 01:22:21 So what kind of projects are being built on Nervos? What is the sort of initial types of applications that you're seeing here? We launched Mayna at like two months ago, and then we're already seeing the type of applications, I think, is best for Nervos, is asset-based or asset-focused. sort of applications, and then, you know, broadly you can put them into the defy sort of camp. What we want to be is this, you know, if you think about Nervous Network itself, it's also this multi-block chain topology similar to Cosmos and Pocodon and whatnot, right?
Starting point is 01:22:54 The difference is we want to have this one single blockchain that concentrates value, and then all the other blockchain sort of specialized for scaling. So in a way, like in the finance world, you can think about this is like your custody provider, but decentralized. obviously. And then all the other ones are like transaction-based systems. So that's the overall sort of mind model for Nervous Network. Where can people learn more about Nervos and potentially, you know, if they're interested in building a blockchain, how can they get started? Go to our website, Nervous.org. And I think the best way to get a quick understanding of the project, it's read our petitioning paper. And if just go to the website, you see the positioning paper
Starting point is 01:23:35 there. It goes to a lot of we talk to it today. Thanks, Kevin for your time today. Thank you, guys. Thank you, Sebastian, and Sunny. Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
Starting point is 01:23:55 And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast. Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released. If you want to interact with us, guests or other podcast listeners,
Starting point is 01:24:14 you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
