Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Charles Hoskinson: Cardano – A Third Generation Smart Contract Blockchain

Episode Date: May 8, 2018

We are joined by Charles Hoskinson, who played an early role in developing Ethereum and BitShares, and is currently the CEO of IOHK. IOHK is an engineering company that undertakes cryptocurrency research, contributes development efforts to the Ethereum Classic ecosystem, and is spearheading the release of Cardano – a third generation blockchain protocol. IOHK has made the news recently after the publication of fundamental research papers on Proofs of Proof of Work and the Ouroboros PoS algorithm, and its recruitment of highly rated academics.

Topics covered in this episode:
- IOHK's role in the Ethereum Classic ecosystem, and how Ethereum Classic differs from Cardano
- Cardano as a "third generation blockchain" and what this means
- The governance system of Cardano and the challenges behind developing a decentralized governance system
- The Ouroboros PoS algorithm – why it was developed and what's special about it
- Ouroboros Genesis: how full nodes can be bootstrapped without requiring checkpoints
- Cardano's bet on the K framework for smart contract execution

Episode links:
- IOHK analysis of the Dash governance system
- Ouroboros Genesis intro by Prof Aggelos Kiayias
- Ouroboros technical paper
- Ouroboros Praos technical paper
- Algorand
- Proofs of Proof of Work
- K Framework formal analysis technical paper
- K framework website

This episode is hosted by Meher Roy and Sunny Aggarwal. Show notes and listening options: epicenter.tv/234

Transcript
Starting point is 00:00:00 This episode of Epicenter is brought to you by Gnosis, an open platform for businesses to create their own prediction market applications on top of the Ethereum network. They recently launched Gnosis X, a challenge inviting developers to build apps on top of the Gnosis platform. To learn more, go to epicenter.tv slash GnosisX. Hi, welcome to Epicenter, the show which talks about the projects, startups, and technologies driving the decentralization and blockchain revolutions. My name is Meher Roy, and today we have a new show host joining Epicenter. His name is Sunny, and he works with the Cosmos project, and we hope that he'll make a great show host and that you like his questions.
Starting point is 00:01:16 Sunny, welcome to the show and give us a brief intro about yourself. Hi, everyone. Yeah, thank you for the introduction. My name is Sunny, and I am currently a researcher working on the Cosmos and Tendermint projects. I first started learning a lot about blockchain through the Epicenter podcast. It was my main go-to, and so I'm excited to be on it for the first time, and starting out as a host
Starting point is 00:01:41 instead of a guest, which is kind of cool. So for this show, we have Charles Hoskinson back on Epicenter. We're going to focus our conversation on Cardano. Charles, welcome back to the show. It's a pleasure to be on, guys. So we had you for the first time in episode 144. This was just a few weeks after the DAO hack happened and Ethereum split into Ethereum Classic and Ethereum.
Starting point is 00:02:12 And give us an idea how it's been since then. What's been going on in your life? Well, you know, I still like long walks on the beach and, you know, things like that. But, no, but levity aside, it's been pretty crazy. You know, IOHK has grown from a few dozen people to about 130 people. We operate in 10 countries now. We work on a lot of projects. We work on ZenCash.
Starting point is 00:02:34 We work on Ethereum Classic. And the one we're increasingly becoming known for is Cardano. So the Cardano project is like a Leviathan. It's got dozens of researchers and engineers and we're doing a little bit of everything. I suppose I should give you guys a brief update there, since the first time I was on here was about Ethereum Classic. When I came on Epicenter, we were just kind of talking in hypotheticals, like IOHK wants to build a wallet, and IOHK is going to hire some developers, and we're going to go do some cool stuff. Well, we did that. We actually hired seven full-time Scala developers, and what we were able to do with those developers is build a full Scala client.
Starting point is 00:03:16 So it's about 12,000 lines of code. It's a full node. In fact, not only have we released it, we're actually on the next version, 1.1, which has some performance improvements and bug fixes and things like that. It's gone through a full security audit from Kudelski Security, and at the moment I think it's the most concise and the only functional-language Ethereum client implemented, whether it be ETC or Ethereum. So that team's really had a heck of a lot of fun.
Starting point is 00:03:42 We learned a huge amount in the process of building a client, and now we've kind of gotten to the point where we have to make a decision to either kind of retire the client or to scale up the team and start making some substantive changes to the Ethereum Classic ecosystem. So there's kind of a loose governance structure that's been forming that Barry Silbert brought together
Starting point is 00:04:01 with a bit of funding, and we'll have some discussions there, and if we can get funding, we'll scale up Mantis and take it to the next level. If not, we'll continue maintaining it, leaving a few developers on it as an alternative that people can download. But mission accomplished.
Starting point is 00:04:16 We said we were going to hire some people. We said we were going to build a node. We went and did that, and it took a whole year, and it was a huge learning experience for us. That's really awesome. Yeah, I mean, I've actually used the Mantis client before, and it's a pretty good user experience, honestly. So what do you guys think is like, you know,
Starting point is 00:04:36 before we jump more into Cardano, what do you think, you guys are still relatively focused on Ethereum Classic as well? How do you see the relationship between Cardano and Ethereum Classic looking going forward? I know that you guys are building Ethereum Classic support into the Daedalus wallet, right? Right, right. Well, Daedalus is eventually going to be a platform. So the goal is for Daedalus to start looking a lot like Android.
Starting point is 00:04:59 You have one-click install, and developers are able to kind of package and bundle their own dapps or their own wallets, and there's an obvious way of doing that. And so it's really difficult to build an architecture this way that's secure and user-friendly and developer-friendly and so forth. So there's a lot of discussions there. But, yeah, Daedalus does support Mantis, as it does Cardano as well. So what's the relationship between the two projects? We kind of view them as different styles of cryptocurrencies.
Starting point is 00:05:27 Ethereum Classic, because of the economics, the culture, and the ecosystem, looks a lot like a better Bitcoin. It's basically, it's got supply mechanics like Bitcoin. It's got proof of work. It's going to stay on proof of work forever. So if you're cut from that cloth and you like that cloth, it's like a better type of commodity in a simple respect. Like, you know, silver does more than gold, you know, and they're both commodities.
Starting point is 00:05:49 and, you know, people look at them along those lines. Whereas Cardano is like the whole banana. It's got a governance system. It's proof of stake versus proof of work. It's going to have multi-party computation. It's got, you know, side chains built in. It's going to have lots of computing stacks.
Starting point is 00:06:05 So really what Cardano's about is doing more than just being a store of value that has some utility attached to it. It's about saying, let's say I go to Ethiopia and I want to rip out the entire financial stack of the country and replace it with a cryptocurrency framework. What would that look like? So it requires you to answer a lot of questions, like what's the relationship between permissioned ledgers and permissionless ledgers? How do you actually have voting built in so you can make changes to it? You're not necessarily going to be immutable in every single case, you're not necessarily going to be private in every single case, and these types of things. And there's some sort of system to reconcile all of that. So it's a much broader scope and therefore requires a kind of different technology and a different philosophy. So we believe in both and we maintain both in our portfolio, and for the foreseeable future, we'll continue to support these projects, especially Cardano, as we're being paid to do that one.
Starting point is 00:06:57 Just to point out, I've spent over a million dollars on ETC. I haven't been paid anything yet, so, you know, principles are getting pretty expensive. Yeah. Yeah, so, I mean, you know, last time you guys were on the podcast, you mentioned that one of the things you liked about IOHK was that you guys were able to focus on very, like, general research rather than having to focus on a specific project. And, you know, now it seems that, you know, you are working on other projects as well, but there is generally a large focus on Cardano.
Starting point is 00:07:28 And so what were the, what kind of decisions led you to make that change? And, like, how do you think that's going to affect the future of IOHK and stuff? So what happens is that you kind of have cryptocurrencies come in in stages or generations or phases, whatever you want to call them. And every time they come, they tend to introduce a collection of new concepts. So, you know, Bitcoin came and really Bitcoin wasn't trying to actually be money or a payment system. It was something much more simple. It was a collective delusion problem.
Starting point is 00:07:59 So basically, the goal was to convince people that these magic internet tokens somehow backed by math are actually worth real money. And we can buy and sell and trade them, right? So basically, it took a few years for that to set in. And really, the turning point year was 2013. And at that point, like, Bitcoin was here to stay.
Starting point is 00:08:24 people said, okay, this is not going to go away. This is a real thing. But then immediately people said, well, hang on a second here. This is just a push payment system that's really slow and really expensive and not very user-friendly. I want to do a lot more with it. Can I do a lot more? And then we had this grand conversation about how do we augment it. So we saw things like colored coins and Mastercoin. We saw altcoins like NXT, for example, and they all brought a lot of innovation. But at the end of the day, we needed programmability.
Starting point is 00:08:45 It's like when JavaScript came to the web browser. So, you know, Vitalik and I and others, we created Ethereum. And the goal of Ethereum was to say, okay, give control over these sub protocols that run on a blockchain to the developer and give them an easy way of doing that. And that's another proof of concept. And so a lot of people thought we were crazy. In fact, a lot of people still think Ethereum is crazy. That's fair. Although a lot of people still think that Bitcoin is crazy.
Starting point is 00:09:09 So, you know, that's fair too. And so basically Ethereum comes out as the next big generation. And they kind of introduced the notion of smart contracts, this notion of more complex computation in the transaction, and then all these emergent structures that you can kind of build from it. All right, so now that generation's done. It's set. People seem to agree it's a good idea. There's a lot of competitors like NEO and EOS and others that are coming out or are already out in the market. And we're now entering a third generation where we say, okay, the delusion's good, we like that. Computation's probably a pretty good idea. But how do we
Starting point is 00:09:43 do these things at a scale of millions of users to billions of users, if they're actually going to be useful for people? Second, there's probably going to be hundreds to thousands of cryptocurrencies long term. I don't think we're going to see the great Cambrian extinction, you know, where we lose all these cryptocurrencies. There's still going to be a lot around because, you know, everything humans do, we do a lot of. We have lots of languages. We have lots of cultures. We have lots of governments and we have lots of religions. It's really hard for human beings to agree or consolidate on anything. And so something as controversial
Starting point is 00:10:18 as money or your financial life is probably not going to consolidate around just one universal standard. So as a consequence, you know, you need interoperability. You need the ability to actually move value and data, and preserve certain things like security and privacy, as you traverse the internet of value and go between all these things. So there's a scalability component. There's an interoperability component. And it's something that things like Ethereum Classic or Bitcoin Cash have really brought to the forefront. There's a governance problem as well. Where, as we move beyond just a couple of nerds who, you know, meet up and enjoy talking to each other over Slack, to an actual system which has millions to billions of users and has control over your financial life and all the facets of that, maybe including
Starting point is 00:10:58 your identity and your property, you need some form of a democratic process to make decisions about where the system's going to go, and how you pay for things, and so forth. Now, whether that's meta to the system, meaning that there's some sort of meta-consensus, or it's embedded within an existing government, or it's built on the blockchain, there's a lot of debate about that. And so you see projects, for example, like Dash and Tezos or Cardano, where we view that on-chain governance, for at least some things, is probably a good idea. Other projects like Bitcoin, for example, a lot of people have been arguing to keep that
Starting point is 00:11:29 off-chain and to have some sort of open-source consensus materialize around it. I think Vitalik has also been a bit skeptical of on-chain governance as well. So these are kind of the three design criteria that we view
Starting point is 00:11:55 as required for the next generation of cryptocurrency. Keep what we know and love. So keep the collective delusion, keep it weird, keep the computation, that's pretty cool. But then also go ahead and move into a domain where you get faster, you get more users, you can talk to all the different cryptocurrencies, and you have some way of sorting out who pays and who decides. So that was really the 2015 idea for the Cardano project in a nutshell. These were kind of the business requirements at a super high level. Then what had to be decided is, well, how the hell do we do that? So we spent the first two years kind of thinking really carefully about a lot of deep tech.
Starting point is 00:12:19 We started a really deep research agenda. We started tons of different threads: threads on better consensus algorithms, threads on voting, threads on things like, you know, better crypto primitives, like making us resistant to quantum computers, you know, threads involving things like side chains, and so forth. And what we've been doing is gradually now closing out those threads, turning them into actual peer-reviewed white papers, then converting those into specifications, and then gradually implementing those and putting them into a production system.
Starting point is 00:12:51 So the first version of that system came out in September of 2017. It kind of runs like Ripple. It's a locked-delegation version of Ouroboros. It's a lot simpler than the papers we've recently published. And over time, we're just going to decentralize the system more and more, and then eventually add on components like our side-chain components, so we can link our smart contract layer and so forth, and that'll be coming in the next few months to years, depending on the features. So that's a good high-level summary of, I think, what we're doing.
Starting point is 00:13:19 I see. So to break that apart, so, like... Yeah, it's a lot, right? So, yeah, so, like, you know, you guys had a lot of ideas on, like, what needs to be done to make a third-generation blockchain, and then now Cardano is basically the amalgamation of, like, okay, let's take all of these ideas and, like, make a prototype and show that, like, these things actually mesh together and work together, we can make a production system using these. It's important to point out that there are other projects that are chasing this,
Starting point is 00:13:47 It's important to point out that there are other projects that are chasing this, but usually what they do is they chase a particular dimension. Like EOS and IOTA, for example, are really chasing the scalability side. They say, oh, we can do lots of TPS. And then for interoperability, you have projects like Aion and Ripple, for example, with Interledger, and they're really trying to talk about that internet of value. And then in terms of governance, you see projects like Dash and Tezos. So I think there's consensus in the space that at least one or more of these dimensions are really important. I think we're the only one that tends to view them as so interconnected
Starting point is 00:14:13 that you kind of have to do all three at the same time. Right. So scalability, governance, and interoperability. Those seem to be like the three key things. So I guess maybe we should dive into some of them. Let's start with governance, right? Can you give us a short summary of your governance mechanism for Cardano? Yeah, so that's still under design. So we have a team led by a professor at Lancaster University named Bingsheng Zhang. And we've done a few videos; they're on our YouTube page. But basically the idea is that you have to combat a couple of demons at the same time. So one thing that you have to combat is the "who gets to decide" demon.
Starting point is 00:14:51 And that's a really difficult question. So, you know, we tend to be beings that desire fairness. So we tend to be egalitarian and say, oh, well, everybody ought to be equal and so forth. But the reality is people aren't, in terms of skills, time, expertise, these types of things. So it's really difficult to build a voting system, because you can either overshoot, and then you end up having mob rule, so you get very poor decisions that are made, or you can end up creating a very ivory-tower, pristine group of voting people, where you end up having your betters making decisions on your behalf,
Starting point is 00:15:22 and that's quite bad too. So you have to kind of find a sweet spot, and this is not a new problem. It's something that political theorists have been talking about for hundreds of years, and they've come up with numerous different voting systems. You know, everything from what are called linear preference sorting systems, where for any decision you don't pick one, you pick a collection and you rank them (Condorcet and Borda are examples of that), to things like liquid democracy and liquid feedback.
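The ranked systems mentioned here can be made concrete with a tiny Borda count. The ballots and candidates below are invented purely for illustration; this is not code from Cardano or IOHK's actual scheme:

```python
# Minimal Borda count: each voter ranks all candidates, and a candidate
# earns (n - 1 - position) points per ballot; the highest total wins.
def borda_count(ballots):
    """ballots: list of rankings, each a list of candidate names, best first."""
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - position)
    return scores

ballots = [
    ["A", "B", "C"],  # this voter prefers A over B over C
    ["B", "C", "A"],
    ["B", "A", "C"],
]
print(borda_count(ballots))  # → {'A': 3, 'B': 5, 'C': 1}, so B wins
```

Unlike a pick-one ballot, every position on every ballot contributes information, which is the property these linear preference systems are after.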
Starting point is 00:15:46 So we said, you know, liquid democracy seems really cool. So this is kind of delegating your vote. So, you know, you say, all right, well, let's talk about two events to elucidate the example. So everybody in the beginning gets, let's say, the same vote. And the votes are situational. So, you know, Bob is, you know, proposing a nuclear power plant design, and you're allowed to vote on it for whatever reason. Well, most people aren't nuclear engineers, so they're probably not going to have a very informed opinion, and they're going to talk more about the bike shed in front of the nuclear
Starting point is 00:16:17 plant than the actual design of the nuclear plant. So what if you could delegate your vote to Bill, who's your neighbor, who happens to be an engineer you respect a lot, and you've done this for 25 years? Great. But then, let's say there's another vote, and that vote's on, I don't know, you know, your roof, or zoning laws, or something like that. And you're in a big dispute with Bill and you don't want to give him your vote. So in ordinary representative democracy, what you tend to do is just give one third party, like a congressman or a senator, power for a period of time, and then they go and decide on your behalf, for better or for worse. In a delegative democracy system, you can delegate in real time. So you can say, okay, for this particular vote, I give it to Bill,
Starting point is 00:16:57 for this particular vote, I give it to Bob and so forth. And there's a lot of theory behind why that would make more sense. Just as an example, if you had a delegated voting class in the U.S. election for the presidency, then, you know, Donald Trump would have a really, really hard time getting elected. Why? Because most people would delegate their votes to local leaders in their communities. And so Trump is not talking to the general American public anymore. He's actually talking to a voting class that's been specially selected to analyze him. So when he says, we're going to make America great, they'll say, well, what exactly are you going to do? And he has to go into policy specifics. And he can't do that, right? He's built this whole campaign on low information.
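The per-issue delegation described here (Bill for the power plant, someone else for zoning) can be sketched as a toy tally. This is purely illustrative: the voter names and the rule that a delegation cycle simply drops the vote are invented for the example, and the real scheme in IOHK's paper is cryptographic and far more involved:

```python
# Per-issue liquid democracy: each voter either votes directly or delegates
# to someone else for this one issue; delegation chains are followed until
# they reach a direct vote, and cycles simply drop the vote.
def tally(voters, delegations, direct_votes):
    """delegations: voter -> delegate for this issue; direct_votes: voter -> choice."""
    results = {}
    for voter in voters:
        seen, current = set(), voter
        while current in delegations and current not in direct_votes:
            if current in seen:      # delegation cycle: this vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        choice = direct_votes.get(current) if current else None
        if choice is not None:
            results[choice] = results.get(choice, 0) + 1
    return results

voters = ["alice", "bob", "carol", "dave"]
delegations = {"alice": "bob", "dave": "carol"}   # issue-specific, revocable
direct_votes = {"bob": "yes", "carol": "no"}
print(tally(voters, delegations, direct_votes))   # → {'yes': 2, 'no': 2}
```

On the next issue, alice could hand her vote to carol instead, which is exactly the "fire your representative in real time" property being described.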
Starting point is 00:17:35 So, you know, a lot of people seem to feel that liquid feedback, liquid democracy, is a good idea. So we said, okay, let's try that. So we're going to push a paper out for CCS, coming around May. We were hoping to get it out in February, but the universal composability stuff took a little longer to get done. But it has some details on exactly how we think that would work, and it involves a lot of complex math. But if you're curious how it basically works, we did a video. It's about 45 minutes long, and it goes into kind of the details of how the crypto
Starting point is 00:18:07 But that's one demon, the who gets to decide demon. And you make voting proportional to your stake in the system. That's the obvious starting point, but you can change that metric or reweight it based on other facts and circumstances. The second demon is the demon of rational ignorance. And this is the harder of the two to solve. So what is rational ignorance? Well, rational ignorance is basically where the value you get from knowing something is less
Starting point is 00:18:31 than the time it takes to know that thing. So, for example, let's talk about health care. The American health care system is horrendously complicated. It would take you literally years to decades to understand all the facets and ins and outs of it. So if somebody comes to you and says, hey, Sunny, I want you to make a decision on whether, you know, universal health care is a good idea, or Obamacare is a good idea, or something like that, the only way you can actually really have an informed decision is to invest those years of effort. And what is the output of it? Your vote counts exactly the same as my vote,
Starting point is 00:19:02 and exactly the same as the crazy hobo's vote in the streets of San Francisco. So then you get a little disenfranchised. You say, hang on a second here. I got exactly the same payoff for putting in a lot more work. So what's the rational behavior? To not stay informed. That's why we tend to focus on wedge issues in elections, because they're very easy to have an opinion on and understand, but in general they aren't very substantive.
Starting point is 00:19:29 You know, the things that really have a big impact over your life are things like foreign policy and health care policy and retirement policy and so forth, not gay rights or abortion rights. These are issues that affect small minorities, but those issues are much easier to understand than, you know, the complex intricacies of the Syrian conflict, for example. So most voters tend to stay ignorant. Now, why is this relevant to the cryptocurrency space?
Starting point is 00:19:52 Well, just look at basic debates we've been having, like the block size, or the difference between proof of work and proof of stake, or should we bail something out or not. These are very complicated issues. They have many different dimensions to them, and they take a huge amount of time to become informed about. So most people just either vote with their wallet, meaning whatever they think is going to give them the most money, or they just vote along cults of personality, where they just say, well, what did Vitalik think, or Charles think, or Dan think, or Andreas think? And they just kind of go along that particular line.
Starting point is 00:20:22 So the problem of rational ignorance is a heck of a lot harder to solve than the who-gets-to-decide problem. And there's been some ideas, like maybe you incentivize people to vote, like Dash has masternodes, for example, or, you know, maybe you can create some sort of bounty system or something. So we don't have quite a solution yet for that side of it, and we don't really need a solution at the moment, because the voting parts of Cardano won't be out until probably 2019. It's one of the last components we're going to pull into the system as we, you know, turn things on. But we will, you know, touch that topic in the May paper that's being released, and then we're going to try to experiment with some things. The other thing is that we're not in this game alone. Everybody who runs a cryptocurrency kind of has to deal with this in some capacity.
Starting point is 00:21:07 So what I'd love to do is start a dialogue with a lot of other project leaders and try to create some common notions, or at least some universal standards, that we can follow about these things. But those are the two demons. I'd say, you know, on the research spectrum, our Ouroboros consensus algorithm is very advanced. It's near done, whereas the voting is in the early phases, I'd say, you know, 20, 30 percent along that research spectrum. So we're not really too far along. But there are some videos we have; look at liquid democracy and liquid feedback, and Google rational ignorance, and it'll give you a really good sense of what some of the challenges are. Another thing is there's actually a course on
Starting point is 00:21:45 Coursera. If you're really interested in democracy, I think it's from the University of Michigan, and it's Securing Digital Democracy or something like that. And then there's another class on voting theory that they occasionally run. And these are really good classes. And they kind of give you a high-level view of how these things work, and also some of the things to think about that you probably have never thought about before. And it's a really fascinating topic.
Starting point is 00:22:08 So in the governance aspect, you talk about these two issues. One is the issue of rational ignorance, which is how do you incentivize the voters to actually invest the effort to vote well? And the second is the issue of deciding who gets to vote, right? Now, many people would say that with proof of stake, there's a third, very fundamental problem of governance and voting.
Starting point is 00:22:36 And that problem is that, in a traditional proof-of-stake system, you need to stake your coins and then participate in consensus, and then you'll make more coins, right? So in the early days of cryptocurrency, people used to worry about the rich-get-richer problem, right? Now, when we start to have governance decisions about the system, and the voters are the stakers, the rich-get-richer problem gets magnified. What I mean to say is, let's say the inflation rate in Cardano is, I don't know, 2%. If I'm really rich, if I'm really ADA-rich, I want the inflation rate to be much higher, because I will stake
Starting point is 00:23:32 my ADA, I will run a validator, and I want the inflation rate to be high. So I'll get more ADA and I'll consolidate my position of having a lot of ADA. Whereas somebody that has very little ADA might not run a validator themselves, and slowly they'll get diluted out. Now, the decision of what the inflation rate in a system should be is dependent on governance. And in governance, if I have a lot of ADA, I have a lot of voting power. So don't you see this as a problem, that when you have the people that have stake vote on governance decisions, you might somehow magnify the rich-get-richer problem? Yeah, so it seems like a problem on the surface. But actually, if you get into the analysis, it's a bit more complicated. So first off,
Starting point is 00:24:19 rich people are going to vote on whatever they feel is going to produce value for them. And the reality is short-term value is not as much as you would think. They look more long-term. Why? Because these markets are traditionally thin. So even if you could do something to temporarily increase the price 10 or 20%
Starting point is 00:24:38 at a long-term deficit, if you hold 10 or 15% of the supply, you can never divest that amount. And if you do long-term damage to the cryptocurrency, you're actually costing yourself money. You'll make a little money in the short term, but you hurt yourself on the back end. So the decision-making, the rational decision-making analysis of what I do if I'm a custodian of the system and I hold a lot of it, is a bit muddled. And it's not just what will maximize my utility day two or day three,
Starting point is 00:25:05 but maybe a longer-term view. Second, let's say you have problems, like a 51% attack through your plutocracy; you can always fire your stakeholders via a fork. And that's something you can't do with a proof-of-work system. See, this is endogenous consensus versus exogenous consensus, so internal versus external. So when your security comes from an external provider, they own all the hardware. And if you fire them by changing the proof-of-work algorithm, you have to kind of rebuild your entire security base from the ground up. But let's say Bob in a proof-of-stake system has 51% of the stake, and the 49% minority, which let's say is much more diverse, is actually doing everything, and Bob just happens to have a pile of coins and he does nothing, but he's malicious.
Starting point is 00:25:51 You can just fork the currency, create Cardano 2, and remove Bob completely from the set. Now, what have you done? You've removed the malicious actor. The remaining people are honest, and you actually have a much more secure system now, and you haven't lost any security. You've actually gained security from that event, whereas you can never do that type of thing in
Starting point is 00:26:20 proof of work. Now, getting on to the voting side of things, you know, the most obvious way to do voting for changing system-level parameters, like inflation or something like that, would be proportional to your stake in the system. And that's fine to do, but you can create other parameters in the system. For example, you could create the notion of a good citizen in the system, like a reputation system: like, I've relayed lots of data.
Starting point is 00:26:36 Or when your system becomes more complicated than just consensus-as-a-service, when you have other things like trusted data feeds or multi-party computation providers or these types of things, then in those particular cases you could get points, and those could then bias your voting power. So it could end up that the person who has the most stake isn't necessarily the most powerful person in the system.
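The stake-plus-reputation biasing being gestured at here could be pictured roughly as follows. The blend function, the rep_weight parameter, and all the numbers are invented for illustration; nothing in this sketch is specified for Cardano:

```python
# Voting power as a blend of stake share and "good citizen" reputation share.
# rep_weight = 0 is pure stake voting; raising it rewards useful participants.
def voting_power(stake, reputation, rep_weight=0.5):
    total_stake = sum(stake.values())
    total_rep = sum(reputation.values()) or 1   # avoid dividing by zero
    power = {}
    for who in stake:
        stake_share = stake[who] / total_stake
        rep_share = reputation.get(who, 0) / total_rep
        power[who] = (1 - rep_weight) * stake_share + rep_weight * rep_share
    return power

stake = {"whale": 70, "relay_operator": 10, "data_feed": 20}
reputation = {"relay_operator": 60, "data_feed": 40}   # whale provides no services
power = voting_power(stake, reputation)
# whale's power falls from a 0.70 stake share to 0.35, matching the relay operator
print(power)
```

Under pure stake weighting the whale decides everything; with the reputation bias, the most useful participants catch up, which is the "most useful person in the system" idea.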
Starting point is 00:26:57 Rather, it's the most useful person in the system. But that's a complicated metric to construct. So what you have to do is start somewhere, and the most obvious place to start is where the people have the greatest financial incentive to participate in the system for the long-term growth and well-being of the system. There's another thing I'd like to point out. It's called the lump fallacy. So it's a common thing that comes up in economics, where people tend to believe that all the wealth ever created has already
Starting point is 00:27:22 been created and it's just about proportions and how much did Bob get versus Bill get. And it's a big misconception. You'll see it a lot, especially in liberal politics where people say, look how much money these billionaires are getting, you know, proportionate to the rest of the people and, oh God, these evil people, the rich aren't paying their fair share. Wealth is created. wealth is created through actions, wealth is created through productivity. So when you are participating in consensus, it's not a rich gets richer thing.
Starting point is 00:27:50 You're performing a service for the network. And the network is paying you to perform that service. And this is complicated stuff. It's not just the state. Eventually, it could be a decentralized file system. It could be computation as a service, which is effectively what Ethereum is. It can be relaying huge sums of data.
Starting point is 00:28:08 Like if EOS does live up to its claims of performance, it could end up moving gigabytes of data every second. Okay, so you're being paid to provide these services, which is producing wealth. It's producing value for everybody, and therefore coins are being minted to pay you. The last point is that the same hundred people who control the wealth aren't going to be the same hundred people in year two and year three.
Starting point is 00:28:29 People sell tokens. You know, if people just held on to their bitcoins forever, then the original miners would have had, you know, much, much more. But the reality is people run businesses; they tend to sell things. So you're going to see a lot of rotation of value. Why? Because there's tons of volatility. So if your coin goes up 5x or 10x, you don't say, boy, I should hold on to this because I can get a perpetual 2% return.
Starting point is 00:28:54 You say, I don't think this 5x or 10x is sustainable. I'm going to go liquidate 30 or 40 or 50% of my holdings and lock in my 5x or 10x. I'm so lucky. What does it mean? You've just diversified holdings. So in all things, whether it's a startup or any cryptocurrency, what we've seen is a gradual deconsolidation of holdings. Bill Gates, for example, with Microsoft, had about 64% of the company when he started it.
Starting point is 00:29:18 He only has 5% now. He could have held on to that share. He could have never diluted himself, or diluted himself as little as possible, and kept going along with Microsoft, but he realized that diversification matters. And that's generally what's probably going to happen with most people who are involved in cryptocurrencies: as volatility goes up and down, they liquidate, and then you see a gradual deconsolidation, which is much greater than the particular inflation rate. But, you know, these are different things. You know, with proof of work, you kind of kick the can down the road and you hide
Starting point is 00:29:49 inconvenient truths. It's inconvenient to the Bitcoin space that 10 people, 10 pools, control more than 51% of the hash power. And I don't know these pools. I can't be these pools. They say, oh, well, anybody can buy a miner. Yeah, but if it's patented, and it has a very bespoke supply chain, and people are using subsidized power, that's not a fair game. I remember I bought a Butterfly Labs miner. I got it a year and a half after I put the pre-order in. And they were testing them, testing them. They were mining with my miner until it was no longer profitable.
Starting point is 00:30:22 And they shipped it to me. That's not a fair game. It's a rigged game. So, you know, what we do is we say, oh, well, you know, there's a simplistic notion of control. It's very butterflies and rainbows, and people like it. But instead, what you've done is give it to some silent, small group. And with proof of stake, you have to explicitly have this conversation of who should
Starting point is 00:30:42 be in control. How many people? Why should they be in control? What are their incentives? How do we ameliorate certain things? So if the rich are malicious, you have the nuclear option of forking them out. You have other options of introducing metrics to dilute their power, like a proof of usefulness, for example. And then you have social metrics that can be applied as well. And if they're malicious, the price will go down as well. So generally the best outcome is to just have them behave, following the protocol as it's written. And I think that's a much better way of going about things, because it's out in the open; it's explicit. You're not going to get it right the first time around, but you can fine-tune the system. There's going to be a lot of opinions on how things ought to be done, whether it be a delegated proof of stake or a bonded proof of stake or a pure proof of stake system.
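The two mitigations described here, forking a malicious holder out of the stake set and a "proof of usefulness" metric that dilutes raw stake, can be sketched in a few lines. This is an illustrative toy only; the blend formula, the alpha parameter, and all names are assumptions, not anything from a deployed protocol.

```python
# Toy sketch of the two mitigations above. Neither the blend formula nor
# the alpha parameter comes from any real protocol; they are illustrative.

def fork_out(stake, malicious):
    """'Nuclear option': a fork of the ledger that simply drops the malicious holder."""
    return {h: c for h, c in stake.items() if h != malicious}

def voting_power(stake, usefulness, alpha=0.5):
    """Blend each holder's stake share with a 'good citizen' usefulness share."""
    total_stake = sum(stake.values())
    total_useful = sum(usefulness.values()) or 1   # avoid division by zero
    return {
        h: alpha * stake[h] / total_stake
           + (1 - alpha) * usefulness.get(h, 0) / total_useful
        for h in stake
    }

stake = {"bob": 51, "alice": 25, "carol": 24}   # bob holds a 51% majority
usefulness = {"alice": 70, "carol": 30}         # bob relays nothing useful

# Usefulness dilutes bob's raw-stake dominance below 50%...
assert voting_power(stake, usefulness)["bob"] < 0.5
# ...and if bob turns malicious anyway, the fork removes him entirely.
assert "bob" not in fork_out(stake, "bob")
```

The point of the sketch is the interplay: the soft metric reduces a pure plutocrat's weight continuously, while the fork is the discrete last resort.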
Starting point is 00:31:28 And it's very Darwinian in a certain respect, rather than just pretending like there's this God mode that's somewhere off in the rainbow, and they're always going to act honestly and behave properly, and don't worry about it, they'll just be there for you. And they actually become barriers to change, as we've seen with the Bitcoin network. So that's my big rant. This episode of Epicenter is brought to you by Gnosis. Gnosis is an open platform for businesses to create their own prediction markets on the Ethereum network.
Starting point is 00:31:57 Prediction markets are powerful tools for aggregating information about the expected outcome of future events. So this can be used for things like information gathering, incentivizing behaviors, making governance decisions, or even creating insurance products. So in order to turn Gnosis into the most powerful forecasting tool in the world, they recently launched Gnosis X. It's a challenge that invites developers to build applications on top of the platform. And the best applications per category will be rewarded up to $100,000 in GNO tokens. So throughout the year, Gnosis will announce different categories for the challenge, and the current challenge has categories for science and R&D,
Starting point is 00:32:36 token diligence, and blockchain project integration. Gnosis also provides the SDK, which allows you to easily get started with everything you need to get coding, and they also provide dedicated support channels throughout the challenge for teams and solo builders. Are you up for the challenge? Get started now. To learn more and to sign up, go to epicenter.tv/gnosisx. We'd like to thank Gnosis X for their support of Epicenter. So one of the things that you mentioned that was actually very interesting is the idea of
Starting point is 00:33:04 this: one of the biggest benefits of proof of stake over proof of work, as well, is that when people do something malicious, we can just easily hard fork them out. Right. And my question here is about what happens if they try to abuse the governance system.
Starting point is 00:33:18 So I've actually watched the whiteboard video. By the way, I love your guys' whiteboard videos; they're awesome. Which one, the one with Bingsheng? Yes, the one on the governance and stuff. And so one of the things
Starting point is 00:33:33 that he mentioned was that the voting is going to be in, like, zero knowledge. And so this is actually an interesting question, because we've actually had this discussion at Cosmos as well: should voting be private or transparent? And we're actually going down the route of, like... so what do you think is the effect of having private voting, where you actually lose that transparency
Starting point is 00:33:58 and potentially that accountability. So let's say there's a vote and a lot of people decide to like increase a bunch of money for themselves in some convoluted way. It's not going to be that clear, but how do we make sure that we figure out who was the one who voted that
Starting point is 00:34:16 so we can hold them accountable, maybe either through protocol, by hard forking, or at least socially accountable. Okay, that's a great question. So first, it's important to understand that there's a spectrum of voting. So you start with things like SRO-style things, membership-based self-regulatory organizations.
Starting point is 00:34:31 And those are very fluid, and things can change in a single meeting. Then you have, like, municipalities, right? And these are like your local mayor and your county council, and you can go and complain to the county council. They take a vote, and maybe they meet every two weeks or three weeks, and they can change things fairly quickly. Then you have your state assemblies, and those move at their own pace, but they're still somewhat democratically accessible to people.
Starting point is 00:34:53 Then you actually have your federal government side of things, and that's much more bureaucratic. It's much more difficult to get things done. And then you have, even further, the Constitution, where you say, okay, to change the Constitution: it's only been done in America about two dozen times over our entire 200-year history. So that's a real challenging thing to change. So what you have to do is say certain parameters in my system live in different spots. So, you know, things like user experience, taste, texture, feel, you know, nice-to-have features like HD wallets. Those types of things probably live towards the SRO-style side. So they don't even necessarily require a vote. That's just you, the developer, making decisions about what's best for your users.
Starting point is 00:35:35 And there's some ambiguity and wiggle room in the protocol to allow you to have that kind of flexibility. Other things, like, for example, monetary policy, or how you achieve consensus, or the voting system mechanics themselves, those are much more towards the Constitution side. Okay, so that requires a lot more consent, a lot more time; it's a lot more drawn-out process to be able to change something like that. So you have to find that entire spectrum. So that's the first step. The second step is: should you have a blind ballot, which is anonymous voting, or should you have transparent voting? Now, I've seen arguments on both sides. For example, Switzerland: they have a cantonal system, so they're a true confederacy, and some of the cantons
Starting point is 00:36:14 still do public votes. So you can actually go out, and there's a voting day, and people shout out their vote and they go into groups. There's also other public voting systems, like America: we have caucuses, for example. And in a lot of these caucuses, you go into groups: who supports Ron Paul, and who supports Romney, and who supports McCain, and so forth. And we all had to deal with that, for those who were involved in those elections. And you get various degrees of outcomes. The problem with a transparent system when you're talking about money is that, well, let's say there's a delegation and one particular delegate ends up getting a lot
Starting point is 00:36:49 of influence. Let's say Andreas, you know, for a Bitcoin or something like that. And there's a voting system. And everybody likes Andreas. So he ends up getting like 30% of all the votes. Well, if that's publicly known, what you've basically done is painted a big red target on his back. So now he's got to look over his shoulder and say like, oh shit, you know, I got to worry about my security. Like what if somebody kidnaps his girlfriend or his kids or whatever? If he has children, I don't know. But, you know, they try to coerce him. And, you know, they know he has 30%, so they can use the wrench or use some, you know, blackmail or something like that. And so he'll go and vote, but he's no longer voting with his own free will or conscience.
Starting point is 00:37:28 He's voting being coerced. Whereas if you have a private ballot, the advantage there is that you can never conduct that attack, because it's difficult to know, depending on how anonymous you are, who these people tend to be, except for, you know, you personally making that decision of whom to delegate to. So on average, you eliminate a vector of attack for your system. Now, if you are clever about how you sort out system-level parameters on that spectrum, from the SRO all the way to the Constitution-level setup, you still gain the benefit of not changing things quickly that ought not to be changed quickly.
Starting point is 00:38:02 Like the inflation example that was brought up earlier, that's a constitutional level of change. And if that's going to happen, that's something that would probably require multiple votes over a long arc of time. And there's a lot of mitigation and a lot of opportunity for debate and discussion, and if it does get committed, it's something that gets committed on a months-to-years time frame. For example, with Ethereum Classic, we kind of did this. We had to make a decision about monetary policy, and we also had to make a decision about the difficulty bomb. This process took a
Starting point is 00:38:29 year and a half to go through from start to finish, and these things are just now being locked in. So, you know, these system-level changes, whether you have an on-blockchain or off-blockchain governance mechanism, they ought to take a heck of a lot of time and be a very open process with lots of debate. And what ends up happening is that better arguments tend to float to the top over time, and it changes public opinion, and the probability of a malicious vote succeeding through all those rounds ends up falling. And, you know, there's a lot of ideas on, you know, the best way of doing this. The Venetians had a voting system where they kind of randomly sorted people and had multiple rounds, and the idea is you'd never know which person to bribe, and so forth. So it's really amazing how much
Starting point is 00:39:08 ingenuity comes through. But at the end of the day, it's also an empirical thing: you have theory, and you think you know what's going to happen and who's going to be rational and how they're going to be rational, but you just have to launch it and actually see how it works in practice and where it's gone wrong and so forth. There's also a participation problem: these systems assume a rational majority, and these systems assume reasonable participation. And the reality is that most democratic processes tend to fall apart when participation falls below a certain threshold. That's why things like delegation are so powerful, because you probably can't get everybody to participate, but it's a lot easier to get people to at least delegate their vote to somebody. You know, the American elections, going back to that: if you look at 2000 versus 2004 versus 2008 versus 2012 versus 2016,
Starting point is 00:39:59 and you chart the total number of people who were eligible to vote versus the percentage who actually voted, it's declining cycle by cycle by cycle. But if people had a delegative ability, the conjecture would be that you would see a relatively high level of participation there. Because the people who are going to vote are going to vote anyway. But then for that delta, the people who are eligible but don't vote, most of them would still probably delegate to community leaders or people that they know and trust. And you'd get a much richer conversation probably out of that process. But that's a conjecture. And the only way to verify that conjecture is empirically. So you have to run a system and take a look at participation and
Starting point is 00:40:34 the quality of participation. So we do have some data. We studied the Dash treasury system, and we took a look at the masternode counts and which ballots were being proposed, and we wrote a big paper on it. So if you go to IOHK Research and look at our library, you can see the Dash treasury report we wrote. And there was some good, some bad, and some ugly there, in that we discovered that there was very little funding diversity when we studied it, meaning that the same groups of people tended to be the ones submitting the ballots and getting the ballots. Now, that can either be because the ecosystem is still very young, and there's a founder effect where people tend to trust the internal group of people to get it done and it hasn't diversified yet, or it could be an endemic failure of the voting system itself.
Starting point is 00:41:22 And really the only way to understand that is to kind of look at it month by month and see if you're getting a trend of increasing diversity or it's staying relatively stagnant. And if it's staying stagnant, it's probably a structural problem with the system. So we'll link to the Dash governance system analysis in the show notes, so our listeners can follow up on it. So Charles, yeah, it does seem like with Cardano you have done a lot of research on governance, and we look forward to what you end up implementing. Do we understand it correctly that governance is slated for sometime in 2019, not 2018? Yeah, the big focus of 2018 is smart contracts and decentralization.
Starting point is 00:42:07 You kind of have to do things in order. 2017 was: let's get a product to market. Yay. It's really hard to get scientists to do that. So, you know, how long is a piece of string? So we got Byron out in 2017, and 2018 is: move to specification-driven development, fully decentralize the system, and then turn on Goguen, which is our smart contract system, and get all that rolling out.
Starting point is 00:42:28 And that's a huge coordination problem. Then 2019 is about performance and governance. And so basically: take the system, start sharding the system, and then turn on the governance components. There's also a user education component to it. So you need, if you're going to have an effective governance system, to have leaders materialize. So you need meetup groups to form. You need people to understand the philosophy of the system and why we're building it.
Starting point is 00:42:50 You need people to understand the underlying technology and to develop an intelligent opinion. Otherwise, what will end up happening is you'll have a beautiful voting system, but nobody actually ends up using it. What they'll do is say: what did Charles say, or what did Duncan say, or what did Aggelos say? And so I'll just vote for that. So you need about a year or two of community management and growth and development to create diversity. And once you have diversity, then you actually get a pretty vibrant ecosystem. So it would be counterproductive, in my view, to have a voting system at the moment.
Starting point is 00:43:18 It would be just a voting system in name only. So 2019 is what we feel. And then if there are any delays, 2020 is when we do that. We have an additional year specifically for that. Okay. So right now, is it correct to say that the focus is on, like, proof of stake and having a system in which different stakeholders actually validate the blockchain?
Starting point is 00:43:43 Yeah, and proof of stake is such a hard problem. You know, we've got a Cosmos guy here, so you're keenly aware of it as well. And, you know, the problem with proof of stake is that you're trying to take something that ordinarily takes enormous amounts of money and energy and resources and coordination and reduce it down to something that doesn't require all that. You're synthesizing it, but then you want to get all the benefits. So it's like saying, I want to eat the cake, but I don't want to get fat. You know, that's basically what proof of stake is saying.
Starting point is 00:44:10 So the first thing we had to understand is what are we even trying to accomplish? And that's the first question that was asked. So what is security? What is a ledger? What is a blockchain? You know, it's a basic question. And surprisingly, that question wasn't answered until we wrote a paper called GKL15, where we defined basically what a ledger is, and we created some security properties for it.
Starting point is 00:44:30 Then we had to create a baseline and say, well, does proof of work provide that? And the answer is yes. Proof of work provides a secure ledger as defined. So you kind of have a basis of saying, like, this is what we're trying to accomplish with a blockchain, and this is what proof of work can do, and yes, proof of work is secure. Great job, Satoshi.
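The proof-of-work baseline referenced here reduces, operationally, to a very simple fork-choice rule, which comes up again later in the conversation: between two candidate histories, adopt the one with more accumulated work. A minimal sketch, where a per-block `difficulty` field is an illustrative stand-in for work:

```python
# Minimal sketch of proof-of-work fork choice: compare two candidate
# histories by total accumulated work and keep the heavier one.
# The "difficulty" field is an illustrative proxy for per-block work.

def total_work(chain):
    """Sum the per-block work proxy over a chain."""
    return sum(block["difficulty"] for block in chain)

def select_chain(local, remote):
    """Adopt the remote history only if it carries strictly more work."""
    return remote if total_work(remote) > total_work(local) else local

honest = [{"difficulty": 10}, {"difficulty": 12}, {"difficulty": 11}]
attacker = [{"difficulty": 10}, {"difficulty": 9}]
assert select_chain(honest, attacker) is honest   # heavier chain wins
```

The whole difficulty of proof of stake, as discussed next, is that no such physically grounded weight exists, so the comparison between histories has to be reconstructed from stake and randomness instead.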
Starting point is 00:44:48 So then the natural question to ask is, under any assumption, realistic or not, can proof of stake achieve that same level of security? That's step one. And so, Ouroboros was published in 2016, and it was a very impractical protocol when it first came out. It was tightly synchronized, and it had some
Starting point is 00:45:04 undesirable characteristics about it. But it was basically a proof of concept for saying that the security models are identical, so they're both provably secure. Whatever you get from proof of work in terms of what it can construct, you get the same thing in terms of proof of stake for what it can construct. So that's a good starting point. So you have theory. And then the next step is, how do we go from theory to practicality?
Starting point is 00:45:26 How do we actually take this thing and put it into a real-life system, something that works? So most of 2017, and about half of this year, has been consumed with that practicality question. So you move from a synchronized model to a semi-synchronized model. You move to a model where an attacker can corrupt clients whenever they want. You move to a model where you can bootstrap from the genesis block; you don't need a checkpoint. You move to a model where you have composable security proofs. You move to a model where, you know, you have a delegation system built in, so if a person doesn't want to show up, they can delegate their stake to a stake pool. You move to a model where your random number generation is not done with an MPC, but it can be done with a random oracle. And so it's
Starting point is 00:46:04 much faster, but it's still, you know, resistant against grinding attacks and things like that. And it requires you to add a lot more crypto in, like VRFs and perfect forward secrecy and these types of things, and it's not an easy task. So the follow-up to Ouroboros was another paper called Ouroboros Praos, where we did about half of the heavy lifting. And we've just released a paper today called Ouroboros Genesis. It's on the IOHK website. I also just tweeted it. And we'll be presenting both of those at Eurocrypt in Israel here in about a week. And this basically seals off most of the practicality concerns. At this point, there's still some little lingering things we have to clean up, but we feel this plus a delegation scheme is all that's necessary for actually
Starting point is 00:46:44 having a production proof-of-stake protocol in a system. Now, the next step is performance. So, where do you go from there? You know, you've built this beautiful thing. You can drop it in. Bitcoin would run just fine with it. Any blockchain would run just fine with it and run forever, as long as we've got our economic incentives aligned properly. But you still can't scale as new users join in. It's still in a replicated system format. So you need to shard. And there's a lot of opinions on how to do that. So there are protocols like OmniLedger, for example, or Thunderella and others in academic circles. And these guys have come up with some concepts. And there's some engineering ideas, like you guys at Cosmos, I think, are doing some things. And Casper certainly has some ideas, as does Plasma. So everybody has kind of their own idea of how we should shard. EOS, for example, I think put Byzantine agreement on top of DPoS. And it's less about can you shard,
Starting point is 00:47:21 and it's more about what is the trade-off profile. Usually what ends up happening is you go from 50% Byzantine resistance to a third to a quarter, depending on how aggressively you do things. The other thing is that performance tends to decline very rapidly as you shard if there are Byzantine actors.
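That degradation can be illustrated with a small Monte Carlo sketch: hold the global fraction of Byzantine validators fixed, and watch how often some randomly sampled shard committee breaches its one-third fault bound as the shard count grows. All parameters here are illustrative assumptions, not figures from any real protocol.

```python
# Toy Monte Carlo illustration of the sharding trade-off described above:
# with 25% Byzantine validators globally, one big committee is always safe
# under a 1/3 bound, but many small shards frequently breach it.

import random

def any_shard_unsafe(n_validators, byzantine_frac, n_shards, bound, rng):
    """One trial: shuffle validators into equal shards; is any shard over the bound?"""
    n_byz = int(n_validators * byzantine_frac)
    validators = [True] * n_byz + [False] * (n_validators - n_byz)
    rng.shuffle(validators)
    size = n_validators // n_shards
    for s in range(n_shards):
        committee = validators[s * size:(s + 1) * size]
        if sum(committee) / size > bound:
            return True
    return False

def failure_rate(n_shards, trials=2000, seed=1):
    """Fraction of trials in which at least one shard exceeds its 1/3 bound."""
    rng = random.Random(seed)
    return sum(
        any_shard_unsafe(1200, 0.25, n_shards, 1 / 3, rng) for _ in range(trials)
    ) / trials

# A single committee of 1200 with exactly 25% Byzantine never breaches 1/3...
assert failure_rate(1) == 0.0
# ...but carving the same validator set into 30 shards breaches it often.
assert failure_rate(30) > failure_rate(1)
```

This is the "50% to a third to a quarter" intuition in miniature: the smaller the committee, the more slack you need below the global bound to keep sampling variance from breaking a shard.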
Starting point is 00:47:49 So if everybody's following the protocol and everybody's absolutely honest and great, the protocol screams. It's beautiful. And if you're Google or Amazon, that's your normal operations, right? Because you own all the servers. But if you move to an actual Byzantine setting, where people are unreliable, or dishonest, or trying to break your network, then you have to really worry about performance impacts from malicious actors and the types of attacks they can perform. There's some great research, for example, that was done with GHOST and SPECTRE, which are
Starting point is 00:48:17 directed acyclic graph ideas that Aviv Zohar and Yonatan Sompolinsky came up with. We actually implemented GHOST in Ethereum, showing that while GHOST does improve performance, there are attacks on GHOST that can degrade performance quite rapidly if you're a clever attacker. So you have to understand the trade-off profile. So that's the next step in our research agenda after we close out the remaining Genesis- and Praos-related stuff. And that's called Ouroboros Hydra, because it has many heads, right?
Starting point is 00:48:40 It's going to be sharded. And at that point, we think we'll balance everything properly. It'll have the right trade-off profile. It'll kind of have the right security design and so forth. But the nice part about the approach we've been following is that every step of the way, we've been doing it with peer review and with proper security modeling. So what happens usually when you're building these protocols is, if you want to add another McGuffin, another feature, to your protocol, then what ends up happening is
Starting point is 00:49:04 you have to regress and go back and say, oh, am I going to break something? But when you actually have a solid security foundation and good proofs and everything's composable, when you keep adding things, you don't usually have those regressions. So you tend to see a lot of research acceleration at the tail. So you pay a much higher upfront research cost to get everything started and get that pump primed. But then when you're rolling, it all makes sense. It's all composable. It's all modular.
Starting point is 00:49:28 It's really easy to parameterize it. The other thing is that it can work in different settings. Like, if you want to go to a permissioned setting, it's really obvious how to do that for, like, a Hyperledger-style scenario. But if you want to go to a permissionless setting, there's also an obvious way to convert the protocol to that. You don't have to build two completely new protocols for that setup. So that's where we're kind of at with Ouroboros. Big team. There's about 10 scientists who work on it, and they do different dimensions off and on.
Starting point is 00:49:55 And we've written, I think, six or seven papers. I can't remember how many now. They just keep coming out. I think, you know, if we ever go out of business, at least we can get into the white paper business. And, you know, do 10 or 20 of these things every year or something like that. But I'm pretty proud of the direction of the research. You know, the other thing is that this is going to be the age of the academic proof of stake. There's some very tough competition coming with Thunder Token and Thunderella.
Starting point is 00:50:22 There's very tough competition coming with Algorand and so forth. And these are not protocols written by everyday people. I mean, Silvio Micali has the Turing Award. That's the Nobel Prize of computer science. And he's sitting at MIT, surrounded by some of the brightest people and a legion of graduate students who would just die to be able to work at his venture and build something with him. So the rigor and the standards and the community expectation
Starting point is 00:50:48 for what makes a good proof-of-stake protocol and what your protocol must be able to do is going to dramatically increase, I think, over the next year or two, thanks to, you know, tough competition. And we're excited to be in that running. That's really cool. So yeah, I mean, I know when I first read the original Ouroboros paper, I was kind of like, eh, this isn't that, you know, interesting.
Starting point is 00:51:09 And then Praos came out, and I'm like, okay, this is actually, you know, this is usable now. Right. And so, I mean, I'm excited to read this Genesis paper. Can you tell us a little bit about it? Like, I heard you mention that you have a way of, like, bootstrapping from genesis without checkpoints. That seems like the Holy Grail, right?
Starting point is 00:51:29 Yeah, because, like, you know, from my assumptions, once you're past the unbonding period, you know, unless you have some sort of verifiable delay function, it's really hard to prevent long-range attacks. So could you tell us a bit about how you're doing this? I'm going to punt on that one a little bit, because we released a video today. So there's a 45-minute presentation on how Ouroboros Genesis works, and we just released the paper, and we're going to be doing an actual presentation on this at Eurocrypt in a week. So if I talk about it now, I'm going to front-run the Eurocrypt presentation. But basically, you just have to make some assumptions about the nature of the signature scheme and how random numbers are generated.
Starting point is 00:52:11 And then within that, you're able to very creatively construct a model where, when you look at two alternative versions of history, you can calculate which version of history is correct. Proof of work is real simple. You know, you just do a weight calculation. You say, ah, this one has more work than the other, so that's my longest chain. So this kind of gives you a clever way of doing it using the magic of crypto, but that particular crypto is a little involved
Starting point is 00:52:34 and it's in the paper. So if you're really curious about it, watch the presentation and read the paper. If not, wait for the Eurocrypt presentation, and then we'll talk about it a little bit more openly. But we had to do a lot for this paper. We actually had two papers we were running at the same time. We wanted to reprove the Ouroboros paper
Starting point is 00:52:52 using something called universal composability, and then we had another team that was working on this bootstrap-from-genesis idea; actually, it was related to our sidechains research. And then what ended up happening is we discovered that one needed the other. And so both teams merged. And we went ahead and created this paper. And it was just like a mad dash.
Starting point is 00:53:11 It took three months of hardcore work and lots of revisions. But we got it out. And I think we actually submitted it to Crypto 18. So we'll show it off at Eurocrypt, and if it gets into Crypto 18, we'll be in Santa Barbara and actually have a dedicated session on it. So watch the video, read the paper, and then next week I'll talk more about it. Sure. Another question I have about
Starting point is 00:53:35 Ouroboros, right, is how do you guys decide to go between, you know, in proof of stake there's usually the chain-based model and the BFT-based model. Right. And so at Cosmos, we're working on Tendermint, which is a very BFT-focused way of doing it, while Ouroboros and its descendants all seem to be more on the chain-based side. And I remember, you know, I've actually had this argument with you on Twitter before, where your claim was that asynchronous networks are impractical, that weak synchrony is good enough for all real-world situations. So could you explain a little bit of that thought process there? And yeah.
Starting point is 00:54:19 And yeah. Okay. So if you're electing an epoch and you have a bunch of slot leaders, and there's a degree of professionalization amongst those slot leaders, meaning that, in reality, you're always going to have some degree of federation. So whether it's a mining pool or a stake pool, there are going to be, you know, some actors who set up dedicated servers they run 24-7. And for the entire history of the Bitcoin network, it's been fairly reliable. Blocks are generally produced roughly every 10 minutes. There's a little bit of variation there, but it's been basically as expected. And people have been pretty synchronized throughout that entire setup. So it's more of a practicality argument where, you know, asynchronous is nice to have, but first off, you have theoretical things like Fischer-Lynch-Paterson you have to worry about when you're talking about asynchrony. And second, are you ever going to really be in a network operating mode for a system like this where that's going to come up? So in practicality, semi-synchrony is sufficient, where people may not necessarily
Starting point is 00:55:14 show up on time, but eventually they'll show up within a bound. And that's what Praos is all about: to say, okay, well, you know, within a reasonable amount of time, they'll stay synchronized or semi-synchronized, and there's a way of kind of sorting all of these things out, just like we do with any normal consensus protocol. You know, if you're super worried about it, we could probably re-prove everything in an asynchronous model, but the downside is we'd probably regress a little bit in Byzantine resistance, and we'd probably regress a little bit in terms of performance for the system. But, I mean, for all intents and purposes, if our goal is to have consolidation around a collection of stake pools for the large shipment network,
Starting point is 00:55:53 we anticipate that things will probably be running in a synchronized mode or a semi-synchronous mode. I mean, just to give you an example from Bitcoin, there's the Corallo relay network and the Falcon relay network, where the mining pools actually get to see the blocks before everybody else, because they actually want to tightly couple themselves. The other thing is our network model is different. We're moving from a traditional network model to something called RINA, Recursive InterNetwork Architecture. So from the outside, it kind of looks like UDP going into the
Starting point is 00:56:16 So from the outside, it kind of looks like UDP going into the, black box of madness. But basically because of that assumption and how we connect these nodes together, and because you actually have a permissioning system, thanks to the election system of Waroboros, you really can create a synchronized private network of stake pool operators or slot leaders that's permissioned because they're elected, they have credentials to prove they belong there. So you can use a different network protocol to guarantee that setup. That's why I don't worry about it too much. I mean, it's a nice thing to worry about from a theory perspective, but at the end of the day, it's kind of a much-to-do about nothing.
Starting point is 00:56:49 The other thing is you can hybridize these protocols. That's what EOS did, right? They started as a chain-based model with DPS, and they're moving with Byzantine agreement on top. So you can combine them together if you really want to, and there's some evidence that that might be a good idea. Algarand also looks like it's doing this. It's starting was like a traditional Byzantine agreement,
Starting point is 00:57:07 and then it found a fast sortition process, and they dramatically spit things up. So that's kind of the best non-answer I can give you to your question. It's like I view it more. is a, it's not a big concern. And if it is a big concern, we have a mitigation strategy for it. But in reality, if our network is consistently running in an
Starting point is 00:57:24 asynchronous mode, there's a more serious problem at hand than the consensus protocol we're using. It's a participatory problem. So the incentives are wrong. So Charles, zooming out a little bit. So you have developed like Auroboros, now Oroboros, and now
Starting point is 00:57:42 Oroboros Genesis, right? Star Trek Rass of Khan references here. Give us Genesis. So actually, like, there's, like, a lot of teams in this space working on proof of stake in, in, in parallel. So there's, there's cosmos, which is using practical Byzantine fault tolerance and other Byzantine fault tolerance algorithms invented earlier. And it's like fitting them with coins and the needs of, like, it's, it's combining practical Byzantine fault tolerance with, the fact that the people that want to come to consensus need to be the coin holders that are staking coins. Right.
Starting point is 00:58:23 Then there's Algorand, on which we did an episode, which is also using a different kind of Byzantine fault tolerance. I think it's called Fast and Furious Byzantine fault tolerance, invented a decade back, and it's combining it with some clever cryptography to decide on who should be the parties doing the BFT. Then there's Ethereum, which is a different flavor of proof of stake, which is, as I said, availability-favoring proof of stake rather than consistency-favoring proof of stake. So the thing with Cosmos and Algorand is once a block appears and is confirmed, it's confirmed, you can't go back.
Starting point is 00:59:05 Whereas Ethereum is more like proof of work, where chains of blocks appear and you're not sure that they are confirmed, but you'll be sure that they are confirmed after a while. So in this whole kind of ecosystem of projects pursuing different approaches, tell me what is special about Ouroboros and how are you differentiated from the others? Well, with Ouroboros, everybody always wants to say, oh, my thing is better than everybody else's thing. And I couldn't care less about everybody else's thing. The thing is that it's more about saying, look, we have to get the theory right.
Starting point is 00:59:45 We have to get the security foundations right. We have to make these concepts and ideas accessible to the university environment. You see, what we did is we walked into an environment where if you went to a cryptography conference like crypto or EuroCrip, you say, hey, I work in a cryptocurrency space. I actually did this. I went up to Diffy at crypto, and I said, Diffy, I worked in a cryptocurrency space. he grabbed his glass of wine and walked away from me. It didn't even say anything. He just walked away.
Starting point is 01:00:14 I was like, but what just happened? So the brand of cryptocurrencies is badly damaged in the cryptographer community. Why? And rightfully so, because what's happened is you have all these people coming around making these magical claims about performance and security. They don't do any of the basic stuff, like build a model, write a proof, clearly state your assumptions, tell it what you can do, you can't do. So the first goal of the Orrbor's agenda wasn't to go and build the best, fastest,
Starting point is 01:00:43 most amazing protocol ever. It was rather to introduce the entire proof-of-stake problem in the grandest possible way to the entire cryptographic community. And we've gone to hell of a lot of conferences. Financial Crypto, EuroCrip, Crypto, ACNS. You name it, we've been there. And we've had hundreds of conversations. Since, for example, we published the Orrbors paper, I think it's been cited more than 50 times. Seven papers have been derived from it or done things with it or built on top of it.
Starting point is 01:01:13 And what we've now started as a great conversation in the cryptographic community about what is the design space of proof of stake in general? Like, for example, what are the incentives need to look like? If you are going to do delegation,
Starting point is 01:01:24 what does delegation look like? How do you do cold staking? How do we ameliorate some of these meta concerns? Like, we haven't discussed this one, but it's a big one. How do you handle exchanges? They, on average, hold
Starting point is 01:01:35 double-digit percentages of the entire supply the currency. They don't own the currency, but they would technically be eligible to participate in consensus with the proof of stake style system, right? That's a big problem. That's like they don't own it, but they can control it
Starting point is 01:01:49 and they can derive value from it. That's not a concern proof of work has. So, you know, there's strategies to mitigate, but these things need to be discussed in a broader context. So the first major global was just have that conversation and pick best practices. And we were very pragmatic.
Starting point is 01:02:05 If there was a better solution like we you know snow white came up with something or tendermint came up to something that we felt was better we'd take it cite it put it in and there you go uh so that's step one so anybody using our stuff knows that really smart people have checked it and that it's gone through a very rigorous process and it's kind of created a standard step two was in the process of having that conversation try to get a sense of the impossibilities or the real difficult to do things because i'm getting so tired of people posting a piece of people posting a paper from Andrew from Blockstream about this is why proof of steak can't happen as if it's
Starting point is 01:02:41 canon or something like that. I get so tired of that. You know, it's just like, guys, I would much rather in the academic community have all of our sins be, and there are numerous sins, right? And we've even discovered some, like we have a paper on something called steak bleeding that we discovered along this research process. So create a way of going about explaining what are the tradeoffs and what will be giving up when we move to the system. And let's not have that connected to financial incentives or to cult of personality, let's try to have that connected to an objective truth. So these are kind of two meta advantages of the Orrboros Protocols, that third-party verification and that kind of festivist-style erring of grievances of your protocol.
Starting point is 01:03:19 Then in specifics, the nice thing about Oraboris is by its design, it's very modular. So you can go from a permission setting where it kind of looks like POA or something like that. It runs in like a mode that you would expect to see for something like BFT Simple to something like running in an actual global scale permissionless network. And there's a way of tuning the protocol to behave in both of these things with kind of a common core to it. And it borrows a lot of good
Starting point is 01:03:43 best practices. You know, this notion of an epic, this notion of slot leaders, great because it allows you to construct a heterogeneous network stack as opposed to homogeneous ones. You have a permissioning system, and you can really defend yourself against DDoS and a lot of things as a consequence
Starting point is 01:03:59 of that. And there's a litany of other little basic design principles that we've rolled in. The other thing is that we're very agile in the way we've been going about Oroboros. The team has gone through, I think, six or seven major revisions of the protocol since inception in 2016, and we're probably going to go through another six or seven revisions over the next year or two as we learn more. And every time we do it, we gain some new dimension. Like this bootstrap from Genesis is a major advancement, and I think a lot of POS vendors are going to be inspired from our work and modify their protocols accordingly to try to capture what we've done.
Starting point is 01:04:32 So far, I think only Algarand and Orboros have this particular property. So that's more of a meta point, I think, to your question of, well, why is it better? It's more of a question of what does the space as a whole demand? If you're an investor or an external person, the papers are getting too complicated to read. The technology is getting too complicated. There's too much domain-specific knowledge. You have to have the sort of fact from fiction. So rather, what you need is trusted third parties or a trusted third-party process
Starting point is 01:05:01 to verify that the things I'm saying are actually right. And so that's why peer review is so essential. Second, we need to have better conversations with each other. And the problem is we have commercial incentives not to have good conversations to each other. Because if Bob has created his thing, and I've created my thing, and we have competing tokens, Bob doesn't want to make my token better.
Starting point is 01:05:23 Bob wants to make his token better. So he's going to be closed off unless I adopt Bob's token. So the academic world is kind of like a fair commons where we can have these conversations and quickly learn from each other and steal from each other to try to converge to a collection of good design principles.
Starting point is 01:05:38 And then third, you have to get much better about what are your business requirements for the cryptocurrency that you're deploying. Who gets to be in charge? Do you want to have finality or probabilistic finality?
Starting point is 01:05:49 What do you want to have for these types of systems? What do you need for your business domain? Do you need really fast settlement? Or is it okay to have much longer probabilistic settlement? It's like, what do you actually need in that setup?
Starting point is 01:06:01 And so what we'd rather have is a spectrum of protocols or a Boris being one, Algrant being one, tenement being one and others. And then what happens is once you collect your business requirements for the blockchain that you need or the system you need, you run the – you spin the wheel, and then it says, ah, this is the flavor that I require from my system. And when you adopt that, the hope is that because they have a common DNA in terms of the rigor, the security, and the design, you know that whatever you're implementing is going to work well. for you, the system architect. So I think this is the process that needs to be followed.
Starting point is 01:06:37 Design and consensus algorithms hard guys, and it's not a new field. It's been around since the 1970s, and there's a lot of people who know how to do this really, really well. What bothers me is when people just go and do it and think they're experts at it, and they make outlandish claims. You know, for example, the HyperLedger guys, they're very grounded. They're very realistic people. When I go talk to Christian and I say, what do you guys use?
Starting point is 01:07:05 And they say BFT Simple. Here's why. The guy's written literal the textbook on distributed systems theory. You're either reading that or Nancy Lynch's book or something like that. It's a good book. Okay. And then you go to the EOS guys and they claim they're getting two orders of magnitude more performance than this best practice, you know, BFT protocol.
Starting point is 01:07:22 And I'm sitting here thinking is the head of the IACR and all of these scientists at IBM incompetent. And a guy with a bachelor's degree from Virginia Tech is just a source. solvent and he's come up with a system that performs two orders of magnitude better. It's crazy. And let's see, it does perform at this level. They're saying they can move blocks around at 500 milliseconds around the entire globe. It takes 300 milliseconds to set a signal around the entire planet. So, okay.
Starting point is 01:07:47 And then I say, well, what independent verification do we have that? Did anybody performance benchmark any of these things? The third-party firm provide? No. But they just say it. Hashgraph says stuff. Everybody just says stuff. Iota says stuff.
Starting point is 01:07:59 There's no verification. that these things that people are saying are actually true. And oftentimes when you deploy them, you discover, oh, there was something I wasn't accounting. Either I've actually deployed a broken system that's not secure at all, or my system is secure, but it's a hell of a lot slower than I claimed, right? I've heard two people claim like 10,000s of TPS, and I'm like, where did you test that? It's like, oh, on my local machine. Right.
Starting point is 01:08:23 That's not how you test the distributed system. No. You know, it's so, it's so this is my biggest grade for it. So the point of the Orboros project is to try to separate fact from fiction, to try to federate the POS problems that we're having as a community, to try to create a trusted commons, which doesn't have a financial incentive to pick winners and losers, rather it's just focused on what's right and what's wrong,
Starting point is 01:08:46 and to try to have an intelligent discussion about trade-offs, an intelligent discussion about things like performance and benchmarking and best practices and so forth. As the inheritor of that, our conjecture is that the output of this process will be a really good protocol for a cryptocurrency. It might not be the best protocol for your cryptocurrency, but for Cardano, we feel that it'll converge to that particular state. And every project will benefit from our research, because it's all out there.
Starting point is 01:09:10 We try to annotate as much as we can, and anybody's free to borrow anything they want. There's no patents that's completely open source. I love Silvio to death. He's a good friend. Every time I see him, I complain about the patents on Algarand. I think that's counterproductive for the space, so we've chosen to follow an open source philosophy. And that's what we're trying to accomplish with Cardano as a whole.
Starting point is 01:09:30 It's every building block in the system, whether it be an interoperability building block with side chains or a performance building block with Rina or a Boris or a governance building block with the Treasury system and the voting system, that's all out in the open domain. It's all through peer review. And if we got something wrong, you have a difference opinion, that's fine, change it.
Starting point is 01:09:49 But if we got something right, everybody in the world is free to use it. And it just makes all of our projects better in a certain respect. Right. I mean, no, that's really good to hear, Like, I know a lot of people like, I've heard that, like, people criticize I-HK and Cardano as, like, you're using, like, academic pedigree as just basically a marketing thing. But, like, I find this is actually a really good reason. It's a way of, like, reaching an all branch out to, like, the more academic community.
Starting point is 01:10:13 And so that sounds really useful. And, Sonny, they don't need us. That's what we have to understand. Like, these cryptographers have their own lives. They've been around a lot longer before us, man. Public Key Crypto came out in the 70s. So, you know, what we have done as a community is we're kind of co-opting cryptography's brand, and it's really pissing off the cryptographic community.
Starting point is 01:10:36 Because they say, look, guys, if you're a full-stack Ruby developer, you are not a cryptographer. And just because you can implement one of these protocols doesn't mean you can design them and make them secure. And what they're doing is they're having, like, this acid flashbacks of the 70s when the very state of cryptography was anybody who knew how to write code was implementing their own crypto, and they're violating every basic principles, you know, security via obscurity and, you know, they weren't getting proper random numbers. It's like, it's like just horrific
Starting point is 01:11:03 when you actually look at these things in practice. It's not elitism, it's just a science. I mean, it's kind of funny, and everything else we're okay with professionalization. Like, you would like your doctor to actually be certified, and you'd like your doctor to have actually gone through residency
Starting point is 01:11:17 and proper training if he's going to perform an operation on you. But then in cryptography, it's like totally okay for somebody to have no professional training or knowledge, but then go implement something that's going to keep you private and secure and prevent, you know, the Iranian government from knocking on your door and blackmagging you because they discover you're gay or something, or they discover you've been using it illegal cryptocurrency
Starting point is 01:11:37 or the same for China. Yet, it takes an equal amount of training to become a cryptographer as it does to become a doctor. In fact, in some cases, more. You know, you're really at it in your 30s. So it's just dreary. And I think what we need to do as a community is respect that there are people who came for us, respect that these are very hard problems and respect that these problems are not going to be solved in one grand paper and by one person, they're going to be solved in stages as a community over a long arc of time as we've built up normal computer science. Right. Like, basically, we have to be able to, like, talk the talk so we can, like, get the ear of the people who've done it. And I'm trying to, like, start my own saying, which I've been
Starting point is 01:12:22 pushing, ever since, like, the Neo fiasco, I've been trying to, start the saying, like, don't, you already have, don't roll your own crypto, don't roll your own consensus, I'll go. But did the network break when one node broke? That's not even Byzantine tolerant, right? Yeah, it was some weird stuff going on. So another question I have about Auroboros, right? One thing I've always, like, had some trouble understanding is, like, in a lot of proof of
Starting point is 01:12:46 steak algorithms, like Snow White, Oraboros, especially in DFINITY stuff, there's always just like huge focus on randomness, which I never quite understood. So for context, in tendermint, we do a, so there's every block has a proposer and we have a deterministic round robin. And like, I know who the proposer one block from now is. I know who it's going to be five blocks from now. I know it's going to be 2,000 blocks from now. And, you know, maybe it has something to do with the fact that tendermint, like, distributes rewards equally to all the validators and all the validators are, because of the BFT nature, everyone's participating in it. But, like, to me, the biggest thing that the, you know, I understand randomness is nice.
Starting point is 01:13:35 I think the biggest thing it can help with is, like, DDoS prevention. But why the huge focus on, like, the randomness? Well, there's partly legacy, partly practicality. So most, if you do a literature review of cryptographic attacks or, like, where's systems, have been broken. Almost always there's some sort of random number issue at the end of the rainbow. You know, we screwed up somewhere and there's, you know, some bits got leaked or something, and there's a bit more determinism than we'd like to admit.
Starting point is 01:14:06 So cryptographers are professionally paranoid about how clean their randomness is as a basis. The other thing is that there are common best practices that exist. Like you can use MPC, for example, to develop randomness. Shown Makers scheme from Crypto-99 is one. we made a modified version of it that's linear time called scrape. But in general, if you say, okay, the security assumptions in my proof rely on, you know, a pure source of randomness, then your system is not provably secure unless you know you have that. So you have to put a lot of work into your paper, and the cryptographic community holds you a very high standard to prove that you've done that.
Starting point is 01:14:48 That's why it almost seems disproportionate. A lot of efforts put into the explaining that in the paper, because it's just a community expectation requirement. In a more practical sense, there were a litany of attacks in early proof of stake, like grinding attacks, where, you know, people could bias their chance of winning to favor them just by careful selection of things. And, you know, it's like, so you'd like to make sure that you've ameliorated that.
Starting point is 01:15:11 It's a historical problem in the space. But, you know, again, it depends on your network topology. You know, if you're DPS, for example, and everything is 21 nodes or 101 or whatever the quorum set is, and you kind of know the order that they're doing this, whether it be round robin or not, does it really matter as much about randomness? Well, there's some question there, but not really. Whereas if you're actually going to elect a true committee of people proportion to their stake, you kind of have a different need for that.
Starting point is 01:15:40 The other thing is if you build a source of randomness carefully within your protocol, that becomes a cryptographic beacon that you can reuse. used for a collection of other activities. So let's say you have smart contracts that require a source of randoms. It's a horrifically bad idea to have people roll their own source of randomness within a smart contract. In fact, I think there was a paper out of Cornell that did an analysis of the RNGs in smart contracts, and they said these things failed miserably.
Starting point is 01:16:05 Or it might have been at a UIUC. I can't remember which group did it. So it'd be nice as an API to say, hey, we have a beacon built into the protocol. And because we've done a really good job of making that as pure as possible, that becomes a public utility that the blockchain provides, in addition to consensus that can be reused for other building blocks, whether that be lotteries or player matching protocols or these types of things, and you know that that's a fair source of randomness, which is very valuable for all cryptographic. So it's kind of a mixed bag. Part of it is legacy because when it wasn't a focus, things went
Starting point is 01:16:35 horrifically wrong. Part of it is a community expectation where literally your paper could sometimes not survive peer review if you don't spend enough time talking about it and doing things with it. part of it is a public good and part of it as a structural property depending upon how your quorum is setting up and how you're voting on people. And systems like Orboros or Algarand, for example, do require a pure source of randomness or really good source of randomness to operate with proper security. Okay. So one of the interesting things about Cardano is most of your implementation is in Haskell.
Starting point is 01:17:10 So you've chosen Haskell. The only other project that has chosen a functional language is It's Tesos, right, with OCaml? Oh, no. Kedina also implements their blockchain in Haskell. And also, digital asset holdings does all their stuff in Haskell, but that's in the permission side of things. And Barclay's innovation group is all in Haskell as well. I think Chris Clack is there.
Starting point is 01:17:31 So if you look for it, you can find it, but you're right. It's not a common language choice. So why did you make that choice? What's the advantage of Haskell here? Yeah. really when it comes down to it, there's first the imperative versus functional war. And I think over time, people are starting to concede that even if you're on the imperative side, like you love your Java, and just never going to not love your Java.
Starting point is 01:18:00 There are some things that make sense to do in a functional sense. Like, that's why Lambda came to Java, was Java 8, right? And so programming in general is becoming increasingly more functional, because we're worried a lot less about resources, and we're worried a lot more about things like concurrency, and we're worried a lot more things like correctness of behavior and conciseness, because these repos are just getting so big and there's so many things going on. So I like functional languages because I just believe that they're easier to reason about. I believe that it's easier to test implementations,
Starting point is 01:18:33 and I also believe it's easier to build distributed systems in functional languages. And actually, if you look at a lot of the Internet giants, like, for example, Netflix, their entire backend microservices with Scala. You know, if you look at Facebook, large chunks of their systems are running in functional languages or the components that are mission-critical have some sort of a functional component.
Starting point is 01:18:53 Same for Google in a certain respect. So it's less of a decision of, well, okay, should we go functional or imperative for mission-critical software that's distributed that requires lots of testing, potential verification, in my view, it is functional. It's more of a question of, okay, which functional language ought you pick?
Starting point is 01:19:09 And there's a spectrum of functional languages. You have hybrid languages like closure and Scala and F sharp, where when you want to be imperative, you can be imperative. Scala's great at that. Whereas if you want to be functional, you can be functional. Then you have peer for languages like Okamol and Haskell are very peer in that respect. And then you have even more peer languages like Idris, which is a dependently type language that is just mind-bendingly hard to write code in.
Starting point is 01:19:34 So why we chose Haskell was A, we have access to the Royal of the Haskell space. The inventor of the language, Phil Wadler, amongst others, there was a committee that created it works for IOHK. So when the guy who created the programming language happens to work for you, it's like, it's probably a pretty good idea to at least consider that as a viable candidate for your system. Second, we have access to pretty much all of the Haskell consultancy firms, well-typed, tweeg, we've talked to FP-Complete and so forth. So all the people who are acknowledged to be like the top 5% of the space in terms of development and ability work for us or consult with us or we can talk to on a regular basis.
Starting point is 01:20:15 Third, I think there's a huge value in going boutique as opposed to going mainstream. So we are one of the largest Haskell projects in the world and actually one of the most valuable and prominent Haskell projects in the world. So if you go to Haskell Reddit and you go and say, what is Cardano? Everybody in the Reddit will know it as a project. So basically, you know, a lot of the people in Haskell space like us, know us, and means that we have a higher probability of getting independent contributors from the Haskell community to come in and eventually over time, commit code, read our code, give us feedback and so forth.
Starting point is 01:20:50 Another reason is that we're following formal methods, and so we start with a formal specification that we'll be releasing one in a few weeks for our wallet back end. And if you want to prove that you've actually correctly implemented code, you kind of really need a functional language to do that. And you can do with O'KML and Koch, you can do with Isabel and Haskell. there's differences of an opinion of what's better, but you need some flavor of that, so that we don't really have an option there.
Starting point is 01:21:13 It would be damn near impossible to do it with JavaScript or Java or something like that. It's really, really, really hard. And also, nothing is the conciseness is amazing. Just to give you a sense of numbers, our Mantis client for Ethereum Classic is only 12,000 lines with code.
Starting point is 01:21:28 Compare that to the C++ Bitcoin client. It's like over 100,000 lines of C++ code. So it's about a tenth of the size to do something that does more. has a virtual machine and all this network stuff and things that are more advanced than Bitcoin. So you have a more sophisticated protocol, and you use 10 times less code for a more sophisticated protocol. What does it mean? It means that just by lines of code, there's much less to do, there's much less to think about,
Starting point is 01:21:54 and it's much easier to write good test suites and good test coverage for that, and to reason about the code. So the conciseness has a huge maintenance and technical-debt value, in my view, over verbosity. And so I really like that. And then there are a lot of Haskell-specific things that we really enjoy. Haskell is probably the most advanced functional language, because the community has invested an enormous amount of time into making GHC and other things really advanced. And they put a lot of cool things into the type system.
Starting point is 01:22:25 The whole concept of the monad is really easy to work with. And it gives you a lot of tools for modeling concurrency and distributed-systems theory. There are also some beautiful things like Cloud Haskell, for example, which takes all the Erlang goodness, with OTP, and brings it into the Haskell space. So that's another language you could look at,
Starting point is 01:22:42 like Erlang or Elixir, for example. I think Aeternity is using Erlang. And that's a great language for building a cryptocurrency in as well. We don't have to make any compromises; we have Erlang stuff that we can pull into Haskell as well. So from a correctness,
Starting point is 01:22:57 a conciseness, a testing, a community... oh, one last thing, personnel-wise: if you hire a Haskell developer, they probably have a master's degree in computer science or a PhD, or at the very least they have a lot of professional experience. No one starts with Haskell, unless they go to Edinburgh or Oxford or some really good place like that. Most people start with an imperative language,
Starting point is 01:23:17 and then professionally, out of frustration, they find functional programming, and there's something they like about it. So if you hire a Haskell developer, you end up with a beautiful filter that gives you access to a much more experienced group of people who are more mathematically oriented. They have an easier time reading formal stuff, they have an easier time reasoning about things, and they tend to be a little bit grayer in the beard. So we wanted to have that added filter in our personnel, so we can have smaller teams that are more experienced and smarter, and overall, I think that will mean a better output for our process.
Starting point is 01:23:50 But, you know, we do more than just Haskell. We also write Scala code, and we also write a lot of JavaScript code as well. The entire Daedalus front end is written in JS, and we try to make that immutable where we can, but, you know, it's still an imperative approach, and that's a separate team. Yeah, I mean, I think this formal verification focus is really great. I was a huge fan of Tezos as well for the same reason. And especially when you guys are doing the formal verification of your wallet, because I don't think I've ever seen anyone do that before, and that would be really cool.
Starting point is 01:24:21 Especially because that's usually one of the easiest attack vectors on systems. Another thing, could you tell me a bit more about the K framework? Because I know this is something you guys are focused on a lot, and it's a very cutting-edge topic that I don't think many people understand, or are even aware of. So could you speak a little bit about what made you decide to focus on this? When in other blockchains there seems to be a huge shift towards, like, WebAssembly, right?
Starting point is 01:24:53 And so why the K framework? Right. Okay. So, first, you guys should really have Professor Grigore Rosu on your show. He's based at the University of Illinois, wonderful guy. He runs Runtime Verification, he's worked at NASA, and he's done all these really, really cool things. But he's the creator of the K framework, and he works very closely with IOHK. We actually have on contract 19 people full-time for this. So it's not an insignificant pool
Starting point is 01:25:20 of resources, and they do amazing work. Okay, so what is K all about? K is kind of like a meta-language: it's a language to build programming languages. Languages have syntax, and they have semantics. Syntax is your symbols and your letters and these types of things, and semantics is, when you chain them together, what the hell does that actually mean? Now, it's a formal language, so it has to be ambiguity-free. So what usually ends up happening is the programming-language designer will go ahead and write some document
Starting point is 01:25:47 with all the semantics written down in a math-y style and say, here are your language semantics. So if you encounter a statement in the language, here's how you're supposed to interpret it. And then the developer will go implement the language, and hopefully, if they read the spec correctly, the two should be one-to-one. So for any statement that you read,
Starting point is 01:26:07 there should be semantics to cover it. Now, in practice, this has not historically been the case, even in languages like C, for example; especially for languages like C. So what K is all about is saying: instead of writing the operational semantics of your language in math symbols that you put on paper, write and encode them in the K framework
Starting point is 01:26:29 in a special type of markup. And then K can actually build your language for you. So you have a correct-by-construction implementation of the language, and all your tooling. Now, where it gets really interesting is less about the construction of the language. That's really fascinating from a PL perspective and a correctness perspective, and it's great for research. What gets fascinating is translation. So what K can do, with something called semantics-based compilation,
Starting point is 01:26:51 which is something we're working on with RV very closely, and it's a lot of work, is take a K-defined program and translate it into another K-defined program. So let's say you write the K semantics of Java, which has been done, and you write the K semantics of C, which has also been done. Hypothetically, using semantics-based compilation,
Starting point is 01:27:09 you can take a C program and translate it into a Java program, and it runs just like the C program ran. Okay. And where this gets interesting is that it means you can build the ideal, perfect virtual machine for your cryptocurrency. In this case, it's IELE, the virtual machine that RV designed for Cardano.
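The two ideas described here, deriving an interpreter directly from declared semantics and translating one language into another while preserving meaning, can be caricatured in a few lines of Python. This is an illustrative sketch, not K syntax: the rule tables, the two mini-languages, and all function names are invented for the example.

```python
# Toy sketch of the K idea: a language's semantics is given as data
# (rules), and a generic interpreter is derived from it. A second
# language with the same meaning can then be targeted by a translation
# that testably preserves behavior.

# Terms of "language A": ints are values; tuples are (operator, left, right).
def evaluate(term, rules):
    """Generic interpreter: recursively apply the semantics' rules."""
    if isinstance(term, int):
        return term
    op, lhs, rhs = term
    return rules[op](evaluate(lhs, rules), evaluate(rhs, rules))

# "Semantics" of a tiny arithmetic language, written as data.
lang_a_rules = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def translate_to_b(term):
    """Translate language A terms into 'language B' (Python source text)."""
    if isinstance(term, int):
        return str(term)
    op, lhs, rhs = term
    symbol = {"add": "+", "mul": "*"}[op]
    return f"({translate_to_b(lhs)} {symbol} {translate_to_b(rhs)})"

# (1 + 2) * 4, written in language A.
program = ("mul", ("add", 1, 2), 4)
translated = translate_to_b(program)

# The translation preserves the program's meaning.
assert evaluate(program, lang_a_rules) == eval(translated) == 12
print(translated, "=", eval(translated))
```

In real K the translations are derived from the two languages' formal semantics rather than hand-written as here, which is what makes the "write the semantics once, target the VM forever" claim below possible.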
Starting point is 01:27:28 And then, for interoperability, all you have to do is go and write the K semantics for lots of programming languages. And the SBC stuff will just translate those languages right to your perfect VM. This kind of helps you, because the whole argument for WebAssembly is not that it's the optimal thing for cryptocurrencies; it's that a lot of people are working really hard to make it as interoperable as possible with as many programming languages as they can. And they have to do a lot of work there. But the thing is, it's not fine-tuned at all for cryptocurrencies. So what you can do is build something that's fine-tuned for your
Starting point is 01:27:58 application. And then your interoperability strategy is just to go and spend a few weeks writing the semantics down, and then you're done forever. You just append those semantics to the blockchain. And if you're curious to see what they look like, just Google K framework GitHub, and you can actually see the semantics for Java and the semantics for JavaScript and things like that. Now, here's the other cool thing. Let's say you version your virtual machine. You go from version one to version two. Ordinarily, when you go to version two, you have to update all the compilers. So you kind of have a maximum threshold of languages you can comfortably support.
Starting point is 01:28:33 Because when you update your VM, you have to say, well, if I have a thousand languages, I have to update a thousand pieces of code to get all these things to work with my new version. With the K framework, you don't do anything. You just update the semantics of your VM, and then the SBC stuff sorts it all out for you. So you can support unlimited languages, and you don't do any additional work whenever you version and upgrade your system and so forth. But guess what? Here's the best part.
Starting point is 01:28:58 Let's say, for the sake of argument, that WebAssembly wins out. We could write the semantics of WebAssembly in K and support that as well, and so translate to WebAssembly, or from IELE if we wanted to. I think I've actually already seen that; I think they are actually working on WebAssembly semantics. And we're also building a K LLVM backend, so that we can get basically handwritten-code performance from the machine-generated code.
Starting point is 01:29:23 So there's a lot of theory, and there are a lot of things involved here. It's a deeply involved project. That's why there are so many people working on it; I think all of them have PhDs or are damn near close to it. And there's about 10 years of computer science research that's gone into this particular framework. And it's actually already been used in practice. It's a practical product.
Starting point is 01:29:42 Runtime Verification, for example, works with NASA and Boeing, and they do verification work with these organizations. Not small companies, and not low-assurance settings. So we're really excited to try to bring this into the cryptocurrency space. Already it's given us the ability to be very pragmatic. We wrote the operational semantics for the EVM in K, the KEVM.
Starting point is 01:30:01 So we actually have Ethereum semantics. And when we launch our testnet, we're going to actually launch an Ethereum testnet alongside the IELE testnet. So if you're a Solidity developer, and you have Web3 and Vyper and all this other stuff, guess what? It's going to be one-to-one compatible,
Starting point is 01:30:14 just as if it were an Ethereum node. But we didn't actually have to implement that virtual machine; it's being generated by the K framework. So when we launch our testnet, it will actually have a correct-by-construction VM that's built right from the semantics. Pretty cool stuff, no ambiguity. It passes all the Ethereum test vectors,
Starting point is 01:30:31 and it'll be the same for IELE. Later on, we'll start looking at things like proof-carrying code, and hooks in the VM that allow formal verification to be much better. For example, RV wrote formal semantics for the ERC-20 token standard. And this is kind of a vogue topic right now, because some of these ERC-20 tokens
Starting point is 01:30:49 aren't necessarily implemented completely correctly, and that creates problems. So wouldn't it be cool to have an artifact with your ERC-20 deployment that verifies that you follow the specification? So you get a proof of correctness with it, and you know that your ERC-20 is right. You know, if you have billions of dollars of value behind something, a la EOS, it's probably a really good idea to verify that that token is correctly implemented.
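To make concrete the kind of property an ERC-20 specification pins down, here is a toy Python sketch; this is not RV's actual K semantics for ERC-20, and the class and field names are invented. It checks two spec-style properties at runtime (no overdrafts, and transfers conserve total supply), whereas a formal verification proves them for every possible execution.

```python
# Toy token illustrating ERC-20-style correctness properties.
# Runtime checks stand in for what a formal proof establishes once
# and for all.

class Token:
    def __init__(self, supply, owner):
        self.balances = {owner: supply}
        self.total_supply = supply

    def transfer(self, sender, to, amount):
        # Spec precondition: no negative amounts, no overdrafts.
        if amount < 0 or self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[to] = self.balances.get(to, 0) + amount
        # Spec invariant: transfers conserve the total supply.
        assert sum(self.balances.values()) == self.total_supply
        return True

token = Token(1000, "alice")
assert token.transfer("alice", "bob", 300)      # succeeds
assert not token.transfer("bob", "carol", 500)  # overdraft rejected
print(token.balances)  # {'alice': 700, 'bob': 300}
```

Several real-world ERC-20 bugs were exactly violations of properties like these, which is why a machine-checkable artifact attached to a deployment is valuable.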
Starting point is 01:31:13 And these are the kinds of things that are going to be really easy to do, in our view, when IELE comes out, because we've custom-built the VM to accommodate that, but we don't have to sacrifice interoperability. Now here's the other really cool thing. Let's say that you want our virtual machine to support your language. Eventually, what you'll be able to do is write the semantics of your language in K and then issue a special transaction to embed it into the blockchain itself. Then, as a developer, here's how your development experience looks. You write your contract in that language, and you put a header in it
Starting point is 01:31:45 that points to that particular language on the blockchain. And if the semantics are there, it can pull them and then use the SBC mechanics to translate your contract to run on IELE. You don't have to talk to me, don't have to say, Charles, can I get support, or go and build a complicated compiler or anything like that. You just have to write the semantics of your language, which you already have to do if you're creating a new language, for example.
Starting point is 01:32:02 But now there's a more rigorous, formal way of doing that. And there are a lot of other things, like debuggers and all this other tooling, that will be interoperable with each other. So K is a great project. The K framework is just a phenomenal piece of work, and it's being
Starting point is 01:32:18 incubated at a major computer science institution. The University of Illinois at Urbana-Champaign is, I think, in the top five of all CS schools in the world, or at least in the United States it's right up there with MIT and Carnegie Mellon, and it's known for formal methods and PL; it's a big hub for it. So it's got kind of the right balance of practicality and theory, and there's definitely the right team working on it. And if you guys want to know more about it, Grigore is a wonderful guy to bring on, and I think he'd be able to do far more justice to K than I could.
Starting point is 01:32:49 man that that seems like an ferociously powerful invention and thing to adopt for smart contracts right because at the end of the day developers are going to want to write contracts the languages they want it's kind of funny when joe or any of these other guys from ethereum so like
Starting point is 01:33:06 Ethereum has won, it's inevitable, you know, we have all the developers. It's like: you have all the Solidity developers, for a language you created yourself, but how many Java developers do you guys have? How many C++ developers do you guys have? The vast majority of people who write code are not writing code for your platform. So you need to be pragmatic and create a way to bring those people into your ecosystem, and that should not be: use a different programming language
Starting point is 01:33:30 and throw away all the things you've come to know and love in your entire career. Instead, it should be: bring as much of what you have into our system, and it should just work. Now, that doesn't mean it's going to be secure. It doesn't mean it's going to be practical. It doesn't mean it's going to be performant. It doesn't mean it's going to optimize gas costs. All of that is facts and circumstances. But it does mean you can use the stuff you're already familiar with.
Starting point is 01:33:51 And then you can have a conversation about those other questions over time as those communities emerge. Cool. So now we have a guest idea: Grigore Rosu. We'll have him on Epicenter and discuss the K framework. Charles, thanks a lot for the great conversation over the past hour. It's like, whenever we invite you, we learn a lot of things that we never knew about. I remember the last time you came on the show.
Starting point is 01:34:22 Prior to the show, you showed us your collection of books, and that was mind-blowing in itself. That should be an episode, you know: Charles' collection of books. I don't know if I showed this one off. This is actually one of my favorite math books; it's something I read as an undergrad. It's Naive Set Theory by Paul Halmos. Did I show that one off last time? No, no.
Starting point is 01:34:38 That's new to me. So it's like, if you ever actually become serious in math, you have to take set theory and math logic. Anyway, Halmos wrote this as kind of a primer to help you learn about things like Russell's Paradox and Banach-Tarski and how the natural numbers are constructed and so forth. And Halmos is one of my favorite authors
Starting point is 01:34:59 because he followed what was called the inductive book method. Basically what he would do is write the first chapter, then go and write the second chapter, then go back and rewrite the first chapter. Then he'd write the third chapter and go back and rewrite the first and the second chapters. So if you ever read a book by Halmos, the first chapter is amazing.
Starting point is 01:35:15 It's like the best thing you've ever read. You're like, wow, this guy is such a great author. But then something happens, and the quality tends to decay as you go from chapter to chapter. The last chapter is terrible: what the hell is this guy talking about, or something like that. But it's a wonderful book, highly recommended, if you ever want to know about set theory and Peano's axioms and things like that.
Starting point is 01:35:34 Cool. So we look forward to having you again. Perhaps when you have the next release of Cardano, I think that's the Shelley release, we'll invite you back and talk more about your smart contract system and other elements of Cardano that we couldn't touch on today. So if you're a listener, thank you for joining us today. We release new episodes of Epicenter every Monday or Tuesday, and you can subscribe to our show on iTunes, SoundCloud, or your favorite podcast app for iOS and Android. You can also watch a video version of this show on YouTube at youtube.com slash EpicenterBitcoin. We also recently started a Gitter community to hear more from our listeners and what they'd like to see.
Starting point is 01:36:19 You can find that at epicenter.tv slash gitter. And we always welcome iTunes reviews; they help us improve and iterate on our delivery. So do write some reviews for us on iTunes. We look forward to being back next week. Thank you.
