Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Greg Meredith & Nash Foster: RChain – The Scalable, Concurrent and Performant Blockchain

Episode Date: February 8, 2018

We’re joined by Greg Meredith and Nash Foster of the RChain Cooperative. A fundamentally new kind of blockchain platform, RChain is rooted in a formal model of concurrent and decentralized computation. Powered by the Rho Virtual Machine, and secured by Casper proof-of-stake, RChain is partitioned, or sharded, by default, forming a network of coordinated and parallel blockchains. The project, which is formed as a co-op, leverages correct-by-construction software development to produce a concurrent, compositional, and massively scalable blockchain.

Topics covered in this episode:
- How RChain spun out of the Synereo project
- The primary objectives RChain seeks to establish
- The fundamental principles of RChain and Rholang
- The concepts of concurrency and parallelism in simple terms
- How building concurrent systems implies scalability
- The unique features of Casper consensus in the context of RChain
- The importance of namespaces in RChain
- Why this project was started as a cooperative
- The current status of the project and roadmap

Episode links:
- RChain Website
- RChain Rholang SDK (v0.1) Released
- A Visualization for the Future of Blockchain Consensus
- Introducing Rholang! at Devcon 2

This episode is hosted by Meher Roy and Sébastien Couture. Show notes and listening options: epicenter.tv/221

Transcript
Starting point is 00:00:00 This is Epicenter, Episode 221 with guests Greg Meredith and Nash Foster. Hi, welcome to Epicenter, the show which talks about the technologies, projects, and startups driving decentralization in the global blockchain revolution. My name is Sébastien Couture. And I'm Meher Roy. Today we are talking about RChain, which is an ambitious blockchain project that seeks to solve the problems of scalability and smart contract safety. In this conversation, we have Greg Meredith, who's the president of the cooperative building this blockchain, and Nash Foster, who is the CEO at Pyrofex Corporation, which is leading much of the development work. Nash and Greg, welcome to the show.
Starting point is 00:01:16 Thanks for having us. Yep, thank you. Yeah. So Greg, you've already been on the show, and at that time you were working for this project called Synereo. And we talked about Rholang, the smart contract language you were developing. Tell us a bit about what has happened in the meantime for you personally and professionally. Well, I mean, one of the things that was crystal clear back at the end of
Starting point is 00:01:45 2015 and beginning of 2016 was that if we were going to create a blockchain that could scale to be the combination of Visa and Facebook, the existing architectures were not going to work. And so we conceived a version of RChain at that time, which was built on a model of computation that I had discovered back in the early 2000s. And there was so much community support. There was this groundswell of community support
Starting point is 00:02:21 that when Synereo decided that it wasn't in their best interest to pursue that, I said, well, you know, we promised the community we're going to build it, so let's build it. And the community gathered themselves around the project. And it's just been remarkable. Everyone has kind of pitched in and created this well-funded and viable project. And we're just moving at this unbelievable pace. We just dropped the SDK last week.
Starting point is 00:02:53 Kudos to Nash and his team for that. It was just amazing. The work on consensus has accelerated to this huge point where we're now able to take a variety of existing consensus algorithms and express them in this Casper Correct-by-Construction framework. And we're already committing code against an implementation of that. As for Rholang itself, the SDK includes a version of the compiler, so people can start testing that out for themselves. And, you know, one of the things that I'm most proud of with respect to this community is how we have addressed governance. So the model that we've adopted here is the cooperative model.
Starting point is 00:03:46 So our corporate structure is basically completely cribbed from REI, Recreational Equipment, Incorporated, which is a very, very successful Washington State cooperative. So we're modeled after REI. And I'm just delighted. In October, we held our first annual membership meeting, and the membership expressed its will and voted. At the time, we were about 300 members. Now we're over 600 members.
Starting point is 00:04:13 And we're about to go into our first governance forum in which the membership talks about what it means to scale, not only in terms of throughput, but in terms of the size of the organization. What does it mean to be a scalable blockchain that is also the first, and to my mind, only publicly owned and publicly operated blockchain? So Nash, this is your first time on our show.
Starting point is 00:04:41 Tell us a bit about your background and how you came to work with the RChain project. Yeah, thanks for having me. I've spent about 25 years in the industry building large-scale systems and securing them. I spent a number of years working on the site reliability team at Google, where I met my co-founder of Pyrofex, Mike Stay. And we got involved in RChain because Mike and Greg have been research partners on category theory and some other mathematical stuff for a number of years. We started Pyrofex in 2016 as a company that was dedicated to building distributed systems tools to make it possible for large-scale distributed programming to become
Starting point is 00:05:24 convenient for the average programmer. That work has really blossomed with the coming of the blockchain and our ability to contribute to RChain. So I'm having a lot of fun. Our focus is on execution. We're here to make sure that the code gets written and delivered and that it's correct every week. So Greg, you mentioned that RChain is a cooperative. That's a really interesting structure to have for an open source project. I haven't really heard of any other open source projects organized as a cooperative.
Starting point is 00:06:06 Can you explain why you chose this model? Yeah, largely because we were looking for something that was in alignment with the principles and values of the open source community, which seemed to me to be largely about democratic processes. And the most democratic process that I can imagine was one member, one vote, which is what's, you know, in our bylaws. I honestly think that over time we're going to see an elaboration
Starting point is 00:06:44 of our governance structure that makes it more nuanced or more sophisticated than just our beginnings. But I wanted to have a good beginning. I wanted to have a good structure, especially as we scale out; it's going to require, I think, a lot more sophistication in terms of governance. But this was a very good start. I mean, it certainly makes a lot of sense for an open source project. Maybe others will follow you down that path. Who knows? Well, actually, there are. So we have been working with a group called Resonate, which is based out of Berlin. And they're a blockchain-based music sharing service. And they've structured themselves the same way. So yeah, that's right. They're also a cooperative. So what, as a member,
Starting point is 00:07:39 as a voting member, what types of things do you get to vote on? What does the governance afford you as a member? So kind of the first and most important thing is that you can vote for the board members. So in that sense, it's kind of a representative democracy. But at any point, the membership can call a special meeting and get proposals in front of the board. So they craft and shape how the co-op operates. I'll give an example. Recently we had a deal. The co-op is exceptionally well funded right now,
Starting point is 00:08:22 and so we need to attract applications to build on top of the platform. So we set up a venture fund to do that. And the membership crafted the basic shape of the structure of that deal. You know, we had a shape. We put it up for review to the membership. The membership came back with a bunch of feedback. And the board listened to all of that and said, oh, wow, we need a different structure, and came back with a structure that was
Starting point is 00:08:54 in alignment with what the membership wanted. And then we executed that deal. And then the membership came back and said, oh, wait, no, you needed more milestones in terms of how the funds were delivered. And so we went back to the venture fund and said, the membership is unhappy with this structure. Can we restructure the fund allocation in terms of milestones? And they said, whatever keeps the membership happy, we're absolutely happy to do that. And so we post facto corrected the deal with respect to the membership's goals and desires. So that's how this works.
Starting point is 00:09:35 This is, in my opinion, democracy in action. So let's explore what RChain really is. Now, I've spent the past day going through the RChain documentation, and I can see that it's a very ambitious system that seeks to solve scalability and smart contract safety at the same time. And this project has some underpinnings in theoretical mathematics, and the kind of components you have ended up building to realize this system are very unique.
Starting point is 00:10:15 So we'll seek to cover all of it, but perhaps just to start, explain to us your big picture vision on how you seek to solve scalability for blockchains. Sure. Let me just take a step back and look at the space more generally. The analogy I've been using a lot for people is HTTP. HTTP 1.0 is a really, really stupid protocol. And people have heard me say this a million times, but I'm just going to keep saying it until someone hits me over the head with a two-by-four. It's so stupid that no computer scientist in their right mind would design this protocol. It took a physicist to design this protocol. But its stupidity was its saving grace, because any network administrator could look at the protocol and say,
Starting point is 00:11:09 oh, well, this can't do very much. And as a result, they were willing to open a port in the firewall to let HTTP packets through. And thus the World Wide Web was born. If it had been smarter and contained session and other kinds of information, it probably wouldn't have gotten off the ground, because nobody would let it through. So likewise, proof of work and sort of the Bitcoin blockchain is a really stupid protocol. It won't scale. But it will scale in the sense of adoption, because it's really easy to explain. People understand it and there's an existence proof.
Starting point is 00:11:52 You can stand one of these things up. People can grasp what it does. And then you suddenly see, oh, wow, there are all these consensus algorithms that we hadn't thought of before that are essentially economically secured consensus algorithms that favor availability over lockstep consistency. And once you see that, then you can kind of go, oh, so there are a bunch of other ones that we could provide that are scalable. So let's start looking at that whole family of algorithms, because it's really interesting if we have this economically secured leaderless consensus. And then if you just take that as stipulated: okay, there's a bunch of them out there. We're going to find a few that suit our needs. What do you store? What do you come to agreement on? And again, the Bitcoin blockchain has a relatively interesting answer. Let's store a ledger, right?
Starting point is 00:12:48 Meher has this many Bitcoin and Nash has this many Bitcoin. Sébastien has this many Bitcoin. And obviously, right, if we just look at the market today, that's a somewhat interesting application. Ethereum says, how about instead of storing a ledger, let's store the state of a virtual machine. And then we could make ledgers as programs on top of that virtual machine. So that's a great idea. That's a vast improvement on the original proposal.
Starting point is 00:13:14 But now comes the rub, because which kind of computer you choose to store has a huge impact on how you scale. And in particular, if you choose a computer which is sequential, one thing at a time, then you are going to be forced, at least for all the financial transactions executed against that computer, to give a global serial order, which will never scale. So you need to be a little bit more circumspect and a little bit more careful about the kind of computer, or what I would call the model of computation, that you're going to store. And then you can kind of go through and list out all the models of computation. And to make it easier to list them out,
Starting point is 00:14:10 instead of listing them out willy-nilly in some kind of zoo-like taxonomy, there are four properties that you can use to analyze your models of computation. There's completeness. Everybody knows that now, because we kind of know why Ethereum chose a Turing-complete language as a part of the model of computation. There's also compositionality. Can I make larger programs out of smaller ones? I build programs out of Tinkertoy sets, which are sort of programs themselves.
Starting point is 00:14:44 It turns out that not every model of computation is compositional. There are two that stand out that are not. Turing machines are not compositional, and neither are Petri nets. Now interestingly, the dividing line between Turing machines and Petri nets is concurrency, and that's the next property, which started this whole rant of mine right now: does your model natively support concurrency? So let's just go through it again. Completeness, compositionality, concurrency, and then finally you need something that
Starting point is 00:15:14 in the literature is called complexity. But really, that's just: can you measure in your computations the use of resources like space and time? Okay, so those are your four Cs. If you then analyze all the proposed models of computation against those four Cs, there's exactly one family that stands out. And I discovered this in the late 80s, early 90s.
Starting point is 00:15:38 I listed out the four characteristics. I saw that there was a gap. There was a hole; there was no model at the time that was proposed that had all four properties, and I predicted the existence of one. The next year, Robin Milner publishes his seminal paper on the pi calculus, and I evaluated it against my four Cs, and I discovered, hey, here's a model, finally, that has all four properties. And then I started looking at that model of computation, and the whole family of models of computation that arises as variants on that.
Starting point is 00:16:09 I found a small little niggling bug in the pi calculus, which we fixed with something called the rho calculus. And the rho calculus is effectively the only model of computation that has all four Cs and also does something else, which we know from computing is essential for industrial-scale computation, and that is reflection. The reality is that people don't write programs; programs write programs, and people write the programs that write those programs. So if you look, all of the major mainstream languages have ultimately had to have some kind of notion of computational reflection to be able to get to industrial scale. Whether you're talking about C# or Java or even templates in C++, reflection is an important part of the model of computation. So when you add that feature in, there's exactly one model of computation that fits all those needs, and that's the rho calculus. And that's part of the story.
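As a small, hedged illustration of the reflection point (our own sketch, not RChain code): Go, one of the mainstream languages in this lineage, exposes computational reflection through its reflect package, letting a program inspect its own structures at runtime. The Contract type here is purely hypothetical.

```go
package main

import (
	"fmt"
	"reflect"
)

// Contract is a made-up struct, just something for the program to reflect on.
type Contract struct {
	Owner   string
	Balance int
}

// fieldNames uses runtime reflection to list a struct's field names:
// a program inspecting its own structure, the kind of facility Greg
// says mainstream languages needed to reach industrial scale.
func fieldNames(v interface{}) []string {
	t := reflect.TypeOf(v)
	names := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		names = append(names, t.Field(i).Name)
	}
	return names
}

func main() {
	fmt.Println(fieldNames(Contract{Owner: "alice", Balance: 10})) // [Owner Balance]
}
```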
Starting point is 00:17:03 But what's interesting is how those features interact. When you start to notice how the features interact, you see that the model of computation is auto-sharded. So it's not like it just gives you concurrency. It gives you concurrency with this notion of sharding built in. We didn't bolt it on the side. We didn't add it later. It was a part of the notion of computation from the beginning. It was built in from the ground up.
Starting point is 00:17:31 Likewise, security. Our notion of security isn't bolted on the side or developed post facto. It comes as a part of the model and lines up with the best proposals for security, i.e., the object-capability (ocap) models that are a part of that. So when we talk about scaling, there are other things that we need to talk about. It's not just the total number of transactions per second. Throughput, bandwidth, storage: these are all important parts of scaling, but they're not the only thing. In order to get that throughput, you have to have concurrency and sharding and a bunch of other stuff, but you also have to have correctness.
Starting point is 00:18:17 So here's a little thought experiment. What if Ethereum back in 2016 ran at the transaction rate that we expect out of the Visa network, so two or three orders of magnitude faster than they were running at the time? And then they deployed the DAO bug. Instead of $50 million being drained, all of it would have been drained in a heartbeat. So the point is that scaling also includes correctness.
Starting point is 00:18:53 If you have garbage, if you make garbage run faster, that's not scaling. So hopefully that gives you a picture of what we mean by scaling and how we approach scaling. I'm sure Nash has a much better way of describing this than I did, and probably much more succinct too. So let's hear from Nash. That was actually really clear. I mean, for the non-engineer that I am, I really like that explanation. I do have one question, though: is concurrency
Starting point is 00:19:28 necessary for compositionality? Just maybe to come back on this idea of compositionality. Compositionality is this idea that I can build a program out of smaller little pieces. So I have functions, right?
Starting point is 00:19:44 And I build like a class out of these functions. Is that a good representation of what that is? That's one way of doing composition. Okay. That's right.
Starting point is 00:19:56 And so you're absolutely right that there's a relationship, and it's really easy to get that relationship screwed up. And I'm so glad that you brought up object-oriented, because I think initially people thought of the object-oriented paradigm, the notion of class and instance and inheritance and specialization, as a good paradigm for also including the notion of composition that includes two autonomous computations running side by side, right? Like two cells in your body, you know, living side by side and potentially communicating by passing molecules back and forth.
Starting point is 00:20:39 So it turns out, however, that there's a whole host of literature that shows that that doesn't work at all, which is why there are whole language proposals now that say, no, get rid of the object-oriented notions of composition, eschew most of those, and move more towards compositions that are organized around autonomous execution. The language Go is a profound experiment along those lines. So yes, there are different notions of composition, but not every notion of composition gives you concurrency. In particular, the most famous one is the lambda calculus. The lambda calculus sits at the heart of F#, Scala, Haskell, OCaml,
Starting point is 00:21:24 Lisp, take your pick. All the functional languages arise out of this model of computation called the lambda calculus. If you go study the lambda calculus, you'll see there's a theorem called Berry's theorem, which proves to you that the lambda calculus as a model of computation is sequential, one thing at a time.
Starting point is 00:21:46 So if you had a blockchain-based solution that was organized around the lambda calculus or a functional language, it would be sequential. And you'd know that without having to do any other kind of investigation. Then you'd immediately go, that's an interesting idea, but you're limiting your scope and your scale. So composition doesn't necessarily mean concurrency. Well, an interesting thing to note is that in the Google style guide for C++, you're not allowed to use inheritance or polymorphism,
Starting point is 00:22:16 which are the composition primitives that Greg is basically talking about at the object-oriented level. Rather than do that, what Google does inside its architectures is use remote procedure calls to have two independent services interoperate over the network by sending messages, which is effectively the design that's built into Rholang at the syntactic level. We just decided that rather than make it a toolkit with a bunch of libraries and a whole bunch of boilerplate code that you had to write, we would design it right into the syntax of the language from the get-go. And I do have another question regarding, so you mentioned sharding, and you said that sharding was made possible as an inherent property of concurrency. So if I understand correctly, then the idea is that by having concurrency, by having processes that may run in parallel whether or not they have to be synchronous with each other, you introduce sharding, because you have these computations that can occur
Starting point is 00:23:24 in separate spaces? Yeah, that's very close. There's a little bit of nuance, a little bit of subtlety. You could have concurrency that didn't have sharding built in, but the way concurrency is manifest in this particular model is that you have processes that are communicating with each other by passing messages over channels. It's the fact that the space of channels can be structured. So if for a moment we imagine that URLs are kind of channels, right? It's a channel between someone who's seeking a resource and the resource.
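The structured-channel idea can be sketched in Go (a toy illustration of ours, not RChain code): because channel names carry prefixes, a router can deliver a message to the right shard just by inspecting the name, instead of searching every shard. The prefixes and messages here are invented for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// shardFor routes a structured channel name to a shard by prefix,
// the way a URL's prefix identifies a whole space of addresses.
func shardFor(name string, shards map[string]chan string) chan string {
	for prefix, ch := range shards {
		if strings.HasPrefix(name, prefix) {
			return ch
		}
	}
	return nil // no shard owns this name
}

func main() {
	shards := map[string]chan string{
		"abc/": make(chan string, 1),
		"xyz/": make(chan string, 1),
	}
	// "abc/accounts/alice" lands in the "abc/" shard; "xyz/" never sees it.
	shardFor("abc/accounts/alice", shards) <- "deposit 10"
	fmt.Println(<-shards["abc/"]) // deposit 10
}
```

The point of the sketch is only that the name's structure makes routing a local decision: no shard has to be searched, which is the haystack problem discussed next.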
Starting point is 00:23:57 Right. So I use this URL as a way of addressing the resource. And because that address has structure to it, I can talk about, you know, all the addresses that have ABC as a prefix. So that's a space of addresses. And the built-in structure of that space of addresses in the rho calculus is what gives you the auto-sharding. So it's, again, how the features fit together, rather than just the fact that it's concurrent. If you look at the sort of solutions for sharding data sets
Starting point is 00:24:36 that we've actually built that are successful so far, they all sort of revolve around the central problem of finding the thing that you're looking for. You've taken this one big haystack and you've turned it into a dozen smaller haystacks. Now, which haystack do you search, right? If you have to search them all, you've made no improvement. And so the structure of names is what allows the blockchain to find the data that it wants to find quickly, without having to search through all the haystacks. Okay. There's one notion, I think, that maybe it would
Starting point is 00:25:12 help to clear up here. And that's the notion of concurrency in programming and how that relates to parallelism. And the way that I understand it, and you can tell me if I'm right here, so like, concurrency is, say, for instance, you know, you wake up in the morning and you're going to brush your teeth and then you're going to go make your bed. And then, like, while that's happening, you're making coffee, and you're doing all these different things. And they can happen in order, and sometimes they may overlap.
Starting point is 00:25:37 But, like, ultimately, you know, in theory, you can't as one person be doing two separate things at a time; you have to go back and forth. Now parallelism is like I'm brushing my teeth while I'm putting my shoes on, and I'm doing two things at once. Is that a good way to look at it? These are great questions.
Starting point is 00:25:56 This is just me not understanding anything about this stuff. No, no, I just love it. These kinds of, you know, everyday examples are really, really good. I often do everyday examples from traffic, right? So imagine an eight-lane freeway in which there was no lane crossing allowed. You can get eight streams of cars, so you can get eight times as much throughput on that section of freeway, right? That's parallelism. And parallelism means they're not allowed to cross lanes.
Starting point is 00:26:35 Concurrency means that they can cross lanes, which means they have to synchronize, right? One car just can't smash into another without causing all kinds of havoc and really reducing the throughput on that section of highway, right? When they cross lanes, you know, there's some message passing, like you turn on your blinkers. There's a signal, I'm moving to the right, right? And then there's a response in the lane over, which means, you know, either there was a natural space or a space was opened up as a response to that signal, which allows the car to change lanes. That's concurrency. Right. And then we make a further
Starting point is 00:27:16 distinction typically between concurrency and distributed, which has to do with the failure modes. So in concurrency, typically you're thinking about, even though they're running at the same time and synchronizing, they kind of all fail together, whereas in distributed, they don't necessarily have to all fail together. So those are three sort of terms of art, and those are some rough guidelines for how to distinguish them. I want to take issue, though, with the idea that a person can't do two things at the same time. A piano player is a really good example. And you can get six-year-olds who can play the left hand of some sheet music and the right hand and then put them together.
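The lane-change analogy translates naturally to Go, the message-passing language mentioned a moment ago. This is our own sketch, not RChain code: the blinker and the gap are channels, and the lane change only happens after the two cars synchronize by exchanging messages.

```go
package main

import "fmt"

// laneChange models the concurrent (synchronized) case: a blinker
// signal goes one way, a "gap opened" acknowledgement comes back,
// and only then does the car move over.
func laneChange() string {
	blinker := make(chan string) // "I'm moving to the right"
	gap := make(chan bool)       // "there's room now"

	go func() { // car already in the right lane
		<-blinker   // sees the signal
		gap <- true // opens a gap in response
	}()

	blinker <- "signal: moving right"
	<-gap // wait for room before moving over
	return "lane change complete"
}

func main() {
	fmt.Println(laneChange()) // lane change complete
}
```

Pure parallelism, by contrast, would just be two goroutines running with no channels between them: eight lanes, no crossing, nothing to synchronize.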
Starting point is 00:27:51 So to put it back in terms of your original analogy, parallelism is where you make coffee while you're brushing your teeth. And concurrency is where you don't care whether you wake up before you brush your teeth. So is there like some kind of analogy you have for this model of computation? Yeah, I mean, basically you could think of it like molecular computation, right? So processes are molecules.
Starting point is 00:28:28 And then the molecules synchronize with each other by sharing things. Or, for example, they could share electrons that would enable them to synchronize. And when they share electrons, that creates a larger structure. Or you can go up a level in scale. So processes are not molecules, but cells. And then they synchronize and communicate by sending molecules.
Starting point is 00:28:58 They share molecules with each other. And it turns out that you can take this model up arbitrarily in scale. So processes are human agents.
Starting point is 00:29:23 They synchronize with each other by sending messages over a variety of channels. So they might use, for example, cell phones and telephone numbers as a way to send messages. Or YouTube. Exactly. So the nice thing about it is that it's fairly abstract. And in fact, if you look at the theory, the theory is parameterized in the notion of what you say a channel is. So essentially there are two basic forms of processes. One is you have a process that's blocked waiting on input at a channel, and then after it gets that input, it's going to go and do some other stuff, which processes do. And then the other form is that a process is asynchronously sending some output on a channel. Those are the two basic forms.
Starting point is 00:30:02 And then the others have to do with composition, right? So we can put two processes together to run in parallel. And in order to get the whole thing off the ground, you have to have some kind of core building block processes. So you could start in the most primitive version, where the core building block is just a stopped process. It does nothing. It's completely inert.
Starting point is 00:30:26 So you could talk about building off of that. You could say, well, I'm waiting for a phone call on my home number, and then I stop. Or a more complicated process is: I'm waiting for a phone call on my home number, and when I receive that phone call, it's going to give me another phone number. And then I'm going to take that phone number and send it out on Nash's number. So there, that's an example of a process that we just built using our primitives. So in some sense, in this model of computation,
Starting point is 00:31:05 there are these processes. Each process is essentially Turing complete. It can take data, it can do all of the operations that a Turing-complete machine can do, and then it produces an output. But in this model of computation, there's not a single process, but there are multiple processes,
Starting point is 00:31:24 and they communicate with each other. And using this model of computation essentially allows the programmer, who has one big task, to program the task in a way that it's distributed to these different processes. They will run some part of it themselves. When needed, they will interact with each other, and, distributed like that, they will complete the whole task. Yeah, that's not a bad mental picture.
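Greg's phone-number process from a moment ago can be sketched with Go channels (our illustration, not RChain code), using exactly the two primitive forms he named: a process blocked waiting for input on a channel, and a process asynchronously sending output on a channel. The twist is that the message passed around is itself a channel, the "phone number."

```go
package main

import "fmt"

// forwarder waits for a call on my home number; the call carries another
// phone number (a channel), which it sends out on Nash's number, then stops.
func forwarder(home, nash chan chan string) {
	number := <-home // blocked waiting on input at a channel
	nash <- number   // output on another channel, then stop
}

func main() {
	home := make(chan chan string)
	nash := make(chan chan string)
	go forwarder(home, nash)

	// Asynchronously send a fresh number to my home line.
	fresh := make(chan string, 1)
	go func() { home <- fresh }()

	// Nash's side receives the forwarded number and can now call it.
	forwarded := <-nash
	forwarded <- "hello"
	fmt.Println(<-forwarded) // hello
}
```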
Starting point is 00:31:54 I'm just going to throw in some brain candy for you and for your audience. So you kind of bootstrapped that description by assuming another model of computation, which was the Turing model. What you're imagining is a bunch of Turing machines that are all coordinating with each other by passing messages. It turns out that message passing is all you need. You don't need to assume any Turing-complete stuff.
Starting point is 00:32:22 You can actually code up arithmetic just as message passing. And that ends up being a big twist in people's brains, usually when they encounter this. If that was candy, then I don't want to know what something stronger is. Well, maybe liquor is quicker. I don't know. But, yeah, sort of the main thing that's really interesting is that all computational phenomena arise as interaction.
Starting point is 00:32:55 And this was Robin Milner's key insight when he found the pi calculus: that all computational phenomena arise as interaction. Super interesting. So basically it's the exchanging of messages that is the computation. That's it. The process itself is not computing anything, but the pattern of how that process exchanges messages with the other processes is how it computes.
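A hedged sketch of that computation-as-interaction idea in Go (ours, and weaker than the real thing: a faithful rho or pi calculus encoding builds the numbers themselves out of processes). Here an adder's entire observable behavior is a pattern of messages on channels, including the reply channel it is handed with each request.

```go
package main

import "fmt"

// addReq carries two operands and the channel on which to answer:
// the "computation" is nothing but this message exchange.
type addReq struct {
	a, b  int
	reply chan int
}

// adder sits blocked until a request arrives, answers on the reply
// channel it was given, and goes back to waiting.
func adder(requests chan addReq) {
	for req := range requests {
		req.reply <- req.a + req.b
	}
}

func main() {
	requests := make(chan addReq)
	go adder(requests)

	reply := make(chan int)
	requests <- addReq{a: 2, b: 3, reply: reply}
	fmt.Println(<-reply) // 5
}
```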
Starting point is 00:33:22 And this is exactly how your computer actually works, right? Consider just adding two numbers inside of the Intel CPU. What's the last thing that the adder has to do? It stores the result in a register, which requires the result to be moved from one part of the CPU to another part of the CPU. When you're done with the register and you want to store the result more permanently, you have to move it out to main RAM or to disk or onto the internet, right? And so the thing about von Neumann machines or Turing machines is that they sort of just ignored all of that, because they viewed it as physical complexity that they weren't interested in reasoning about.
Starting point is 00:34:03 But it turns out that if you start there, you get a more powerful model than Turing and von Neumann knew how to work with. And that model allows you to work with great networks of interconnected machines. The EVM is essentially trying to be a Turing machine, right? So there are sets of instructions. Technically, it's a von Neumann machine, right? The EVM is a von Neumann machine. It represents what the machine needs to do as these opcodes, right? And then you give it an input, and it's going to use these opcodes to process and get the output.
Starting point is 00:34:43 But central to how the EVM operates is that these op codes will be executed one by one, sequentially. So it gets the input, it might use the add op code, then it might use the jump op code, then it might use some other op code. It does these operations one by one, then the output is produced, and then this output can be given away to the next thing.
Starting point is 00:35:10 So, like, if I created a transaction in which I ping a smart contract, the EVM executes that, and then it might create a transaction and ping another smart contract. So there is this sequential nature that is built into the EVM. And what you are essentially saying is, if you change the model of computation itself in a way in which there are different processes, and all of them can execute in parallel or concurrently, right?
Starting point is 00:35:43 Concurrently. And in a model where the exchanging of messages is somehow implementing the computation itself, and each of these processes becomes a smart contract, then you will have a natural substrate. So you have one smart contract that you distributed into, like, 100 processes. Each of these processes can happen in parallel. They can exchange messages and then they can compute. And so because these happen in parallel, they can happen on different blockchains, essentially.
Starting point is 00:36:21 Well, they don't necessarily have to happen on different blockchains, but they can. Like, let's say that some of those communications don't really impinge upon others of those communications. Then you don't have to have the state of the one visible in scope to carry out the computation for the other. That's what's really important, right? So, I mean, this is all more or less common sense. When I buy my coffee from the Wildwood Market down the hill, right, that transaction is almost certainly isolated from someone buying grilled tofu from a street vendor in Shanghai. Right? That's how our financial system scales right now, right?
Starting point is 00:37:11 So all we have to do is just model that, and we're going to be much, much faster. If I have to sequentialize or serialize the order of the street vendor transaction in Shanghai with my coffee here in Seattle, that isn't going to fly, right? And the important thing to understand about when you have to sequentialize or serialize is that it's when they touch on the same state, or they touch on overlapping state. And so if your model of computation allows you to determine when they're touching on the same
Starting point is 00:37:46 state, then you can figure that out, and you can say, oh, these transactions have to be ordered. Now, to bring it home: if you look at the op codes of the EVM, well, at the core of the rho calculus there's exactly one op code. This output and this input matched together, so reduce them and ship the data from the output into the awaiting continuation of the thing that's blocked on the input. There's one op code. Now, it turns out that after you've figured out that you can model arithmetic this way, and you can model string manipulation this way, do you want to do that? Well, maybe not. It depends on what your goals are. But if you want to build a fast, scalable blockchain, maybe you can go
Starting point is 00:38:35 ahead and already use primitives for arithmetic and primitives for string manipulation and so on and so forth, and then you decompose the problem of correctness. So if you have an arithmetic library that you've already proven correct, because, I don't know, some very, very smart people at Intel have spent over a decade doing that, then you don't have to do that again. And you just focus on the correctness of how you're assembling the arithmetic computations. So again, the importance of what we were calling compositionality, and how it relates to concurrency, is not just that it allows you to scale in terms of throughput,
Starting point is 00:39:18 but it also allows you to scale in terms of how you approach proving or securing your computation. So this compositionality corresponds to modularity, and the modularity is what we have to have in order to make the correctness problem tractable at all. So again, what I'm trying to say is these pieces fit together, and not just this way, but also this way. There are many layers to this that are all stacked in a very particular way, so that many problems which in the past had been intractable suddenly become tractable. So consider this example.
Starting point is 00:40:05 there's one of these football tournaments that are ongoing right and these football tournaments has this sort of structure that there's 16 teams right Manchester United Arsenal whatever and there's games of two so Manchester United versus Arsenal and there's another game of two another game of two eight will be selected to go to the next round and there'll be games there then four then two and finally one one will emerge there's this the a tournament like this, right? And essentially, I want to bet with Sebastian. Like the four of us want to bet on the outcome of matches
Starting point is 00:40:45 with each other. And we want to represent these bets as smart contracts. And we are going to ultimately make these smart contracts interact with each other. And then we are going to figure out how our chain would handle these interactions. So let's think of the simplest thing. The simplest thing is me betting with
Starting point is 00:41:05 Sebastian. Right? So me betting with Sebastian on the first game, which is Manchester United versus Arsenal, right? So he wants to bet on Manu. I want to bet on Arsenal. And let's say we are betting on the R token, our chain token. Betting against me on football is not only simple, it's a sure win. Well, one of the things I was wondering is like in your setup, you're forcing everyone to bet.
Starting point is 00:41:34 Like, could it be that both Nash and you bet on Arsenal? Yes, it could be. I would never bet on Arsenal. I was waiting for that one. It was a perfect setup, yes. So, like, let's first understand
Starting point is 00:41:54 what a simple smart contract is. So the human interaction I want with Sebastian is: he has five R tokens, I have five R tokens. I put these five R tokens in some process slash smart contract, and he puts the same. Some data comes in five days later: Man U won. And then the tokens go to either Sebastian or to me, depending on who bet correctly. A simpler example of this... like, with sports books the intuition is a little bit less crisp, maybe, than if you're playing poker in a casino, right?
Starting point is 00:42:32 because when you start out, the first thing that you do when you go to a casino to play poker is that you get on a list and then you have to wait for a seat to free up. So then the casino comes to you and tells you there's a free seat. They sit you down. And then what you do is you exchange money with the dealer that's at that table. You don't exchange money with the pit boss. You do it with the dealer who's right there at the local table. He takes your money. He turns it into chips.
Starting point is 00:42:58 and then all of your transactions are with the guy that's right in front of you. The pit boss occasionally will come by to check what's going on, but only just to verify that everything is all right. In general, he's not interested in what bet you're making on which hand. And so if I'm implementing a poker application as a smart contract, all of what I want to do is I want to model the communications between you and the dealer, the deck of cards, and the other players at the table. They don't need to model the casino as a process.
Starting point is 00:43:28 that interacts with you other than when you walk in and when you're ready to leave, right? And so we write the programs so that they actually fit the physical model that you're looking at. Yeah, totally. Absolutely. You know, for a sports book, it's a little bit more complicated because you can, you know, you have the sports, the bookie is basically running a clearinghouse. And clearinghouses are slightly more complicated than poker games because you can take partial bets and somebody can lay down a bet and then the house will actually clear it
Starting point is 00:44:01 instead of another counterparty coming along and that sort of thing. All of that can also be modeled very easily with the row calculus, but it just takes probably more work than it's worth it in a podcast. Yeah, yeah. I think what I was going to do
Starting point is 00:44:13 is just to go at just this idea of sending an asset or a resource to a contract which is held and then testing for a particular condition and once a particular condition is met, then releasing the combination of those resources out to one of the parties. Right? So in that particular case, you're going to have a channel that represents the resources that you've received or the assets you've received from one of the parties,
Starting point is 00:44:45 a channel that you receive the other. It looks like, at least for the purposes of this discussion, we can assume that there's a way to combine those assets. So you can add them together. Like right now you can't add an ether to a Bitcoin. You could convert one to the other and add those, or you convert them to some other thing and add that, but you can't currently add an ether to a Bitcoin straight up. So we have some assumption in your description
Starting point is 00:45:16 that we can add these assets somehow. And so then you will also have a channel where you a channel where you can probe the condition of your test. Okay, so, so, so you, so the, the smart contract is, is waiting for both of the assets, and you can do that in parallel. So we have this notion of a join. So it's waiting on the channel where we're going to receive asset from, from player one, and on a channel where we receive assets from player two. Once we've received those, then we can, in a variety of ways, we can either wait for a signal from the test condition,
Starting point is 00:45:56 or, ugh, yucky, we could pull on the channel for the test condition. And either of those are implementable in the row calculus, but let's say just for the sake of simplicity and cleanliness, we wait now for a signal from the test condition channel. And once we have the test condition channel, we match that against the outcome. Either it was outcome player one, one,
Starting point is 00:46:24 or outcome player to one. And in either case, what we do is we send to a return channel. Let's say, again, for simplicity, that the return channel is exactly the same as the channel of the winning player. So we send out to that channel the sum of the two assets. Very straightforward contract. It's straight line in the sense of its structure,
Starting point is 00:46:51 but it's already got parallelism in it. Why? because there's waiting, the waiting part is happening. We don't care the order in which we receive the two assets. Does that make sense? It makes sense. So basically like this is a process with two channels, one between my account and this process,
Starting point is 00:47:10 and one between Sebastian's account and this process. And it's a matter of sending these assets over the channel, process waits, gets the results, and sends the assets back on one of the channels. That's correct. Now, what happens if I want to do this? Now, I want a contract in which we have this channel. So this represents our bet, right?
Starting point is 00:47:36 And then I want to say that only if I win this bet, then automatically make a bet with Greg on the outcome of the next match that this winning team is going to play. So I'm going to say I'm betting on Manchester United. and I sent the money, only if I win this money, then automatically make a bet with Greg that in the next game versus Barcelona,
Starting point is 00:48:02 Manu is also going to win. Yeah, yeah, so the modification of the one we just did is straightforward, right? So you'll have to have one more channel, which is the channel for the next bet. You want to subdivide this into your contract and the betting contract. right?
Starting point is 00:48:22 Yeah. Okay, so in that case, you're going to modify the contract that represents you. So the contract that represents you initially was very simple, send these assets to that channel, right? But now you're going to run that, send these assets to that channel in parallel with something that's waiting on the outcome, right? So in the, and when you wait on the outcome, then you wait on the channel that you sent on.
Starting point is 00:48:49 Now you wait, right? Now, in this case, to avoid confusion, it's now much better to separate those two. So the channel that you send out on is not the one you waited on because you could potentially get this point of confusion where you receive the thing that you send. So it's a lot easier to separate those out. There's a send channel to the betting contract and a received channel from the betting contract.
Starting point is 00:49:15 So on the channel that comes back from the betting contract, you test the result. Is it zero? I lost the bet. I didn't get any assets. Or is it greater than zero? I won the bet. I got some assets, right?
Starting point is 00:49:28 And then the next thing I do is I make a bet with the next player, right? Which means I now send to an instance of that other contract that has the two players bound to you and Greg. So if these two essentially bets are running on different blockchains. Again, they don't have to be running. I want to make this clear, and in fact, this is a point I wanted to touch on earlier. They don't have to run on different blockchains. The different blockchains is orthogonal to what we're talking about. So the question I was really expecting that is related to ties this up with consensus is,
Starting point is 00:50:12 so let's say we have different instances of these parallel processes running on different nodes. right and and you know and the first time you get a race condition right where you've got two outputs and one input or two outputs racing towards a blocked process waiting on a channel right but you're running that very same computation on nodes scattered all over the internet how does that thing ever come to agreement right like if if the winner of this race is different than the than the winner of the same race on a different node, then you can get double spin, right? That's the interesting question. And it turns out that we have an interesting answer. The consensus algorithm is guaranteeing that all the nodes agree on the winners of the races. And that's all it ever worries about.
Starting point is 00:51:15 It doesn't worry about anything else. So our knowing. of transaction is very, very crisp. So what you're essentially saying is there can be lots of people that are betting. And in order to resolve all of this thing, you need just this sliver of information, which is like who won which match. And this sliver of information is what we have to arrive at consensus. And once there's consensus on this information, everything else can resolve. Yes. The short answer to that is yes.
Starting point is 00:51:54 And what our chain allows is is for us some intelligent way to figure out what that minimal sliver of information is and just get to consensus on that sliver of information and leave everything else behind to be processed offline by the nodes. Yes, that's, that's, again, And there's nuances there, but essentially, yes. Essentially, that's correct.
Starting point is 00:52:23 And now notice that if you have a betting pool over here on, I don't know, the World Cup, and you have a betting pool over here on American football, right? The American football bets don't have to page in all the blockchain state that's associated with the World Cup, unless there's someone who has conditional bets that, you know, like I'm betting on the winner of this American football game depending upon the result of this World Cup game. But if I don't have any bets that cross these two betting pools, then the state can be isolated. Because the state is isolated, this group of contracts doesn't have to download the blockchain state for this. and vice versa.
Starting point is 00:53:17 So we're still in one chain, but these computations can be run without any of the overhead of knowing about this state over here or that state over there. Does that make sense? I mean, that makes sense for scaling. That example you gave earlier about
Starting point is 00:53:37 the two transactions happening in two different places, not having to know each other is probably like the best visual representation of scaling that I've ever heard. So thank you for clearing that up. So I'd like to come back to like a more practical question. As a one of the great things about Ethereum is how easy that community has made it to write smart contracts.
Starting point is 00:54:04 And there's a massive community and people building applications on Ethereum. And we were at devcon a few months. to go and we saw one of the things I saw which was interesting was so the the the stack starting to come together and development tools and things like that right because making making it easier to for just about anyone with like a limited amount of programming knowledge to write a smart contract application and that has sort of been one of the strengths of Ethereum is you know and one of the selling points in the beginning is like write a smart contract as easily as you would some kind of JavaScript code.
Starting point is 00:54:44 I presume that there's a very different standpoint, like, with our approach with the R chain and rolling, because these languages are perhaps not as easy to learn and to manipulate. Can you, from a practical perspective, as like a developer, someone who's building applications on this, how different is it to write a DAP, say something like the Dow on our chain than it would be on Ethereum? Ethereum makes it easy to write incorrect programs, right? But it makes it very hard to write correct ones. And that's actually what we saw. We've seen it now several times, right, with
Starting point is 00:55:33 the Dow, with the parity wallet. There's, you know, undoubtedly dozens or hundreds of contracts that are deployed to Ethereum that have significant bugs that just haven't been exploited yet. But it's deceptive, right? The simplicity of solidity is deceptive in much the same way that the simplicity of JavaScript is deceptive. Yeah, you can stand something up and it will sort of work, but it's going to have a lot of bugs because you don't really understand what you're doing. The model of computation is too confusing. And the thing about Rolang is that the entire language fits on an index card.
Starting point is 00:56:08 So you can show somebody the entirety of Roan Lang in about 15 minutes. And syntactically, it's far simpler than solidity or JavaScript or really any practical modern language. But it has all the power of those languages at its fingertips. And so it's really easy. And because it's compositional, it's really easy to begin with something that you can understand completely and thoroughly. And then build on top of that by building components that interact. in ways that you have solid intuitions about, right? This is very difficult, if not impossible, in modern programming languages that don't have
Starting point is 00:56:47 the benefit of the row calculus. And so we think that we're going to be able to teach people, you know, the basics of programming in the row calculus, you know, in about an hour and get them to the point where they can, you know, if they're experienced programmers, there will definitely be some new stuff and some habits to overcome. But, you know, our past experience has been that even high school students can learn this stuff in an afternoon and can become, you could get to the point where they can write programs that are, you know, on the complexity of sort of your standard ERC token contract. That's something that you could learn to do in a day. And we feel really strongly that the
Starting point is 00:57:26 inherent simplicity of rolling is going to make it really, really straightforward for people to learn, even though the model of computation. very different. That's very encouraging for me, what you just said there. I mean, I used to write, you know, Ph.P or JavaScript or whatever, but I don't have any, like, formal training. But that's, that's one of the things I thought that was,
Starting point is 00:57:49 I always thought which was interesting with Ethereum, is that anybody can, you know, go on some kind of, like, Coursera or online training course and, you know, spend some time and learn how to code a smart contract if you have some ambition and maybe a little bit of programming experience. Just sort of wanting to address there what the gap is between Ethereum and something like Rolling. But from what I understand it, the language is much simpler, but there are assumptions about how you write the code that are much different than on a language like that.
Starting point is 00:58:34 solidity. If you've written code in PHP or JavaScript, you've debugged problems that occurred because the model of computations actually implemented and what you think it implements are different. That's the feeling that you get when you're staring at a chunk of code and it looks like it does what you want and somehow when you run it, the program doesn't do at all what you wanted and you can't understand why it's doing what it did, right? That's what you're feeling right then is. is the gap between the syntax and the semantics of the language. I know a feeling. Yeah, right?
Starting point is 00:59:10 We all do. We mean like, geez, last 25 years of my life can be characterized by, like, levels in that feeling. So the thing about Rowling that's so amazing is that the semantics came first, and Greg did a great job of boiling the semantics down to the smallest possible, you know, set of rules. and then the syntax is built directly on top of those, there's a one-for-one correspondence so that your intuition about what the program's syntax says and what it actually does is very, very clear.
Starting point is 00:59:45 It's very hard to get confused about what your program's actually doing. And actually, first of all, that was really, really well articulated. Thank you, Nash. That was amazing. And just to amplify Nash's point, even if we're talking a strongly typed language that has been bashed on by industry and lots and lots of commercial code deployed on it like Java. The reality is that Java is a part of a family of languages where when you're staring at the code, not everything is in the code.
Starting point is 01:00:17 Not everything that results in the execution of that program is in the code. It's not in front of you. You also have to keep in your mind the stack and the threads that are active and a whole bunch of other stuff that's not on the paper. which means the programmer's mind is burdened. There's this cognitive load that's not supported anywhere on the page. And there are families of languages, the models of computation, the Lambda calculus was the first one of such,
Starting point is 01:00:48 and then the Pi calculus and languages like that were the second such, where the structure-function relationship between what you see and how it executes every single, everything is on the page. And so that's another thing that actually makes it easier for programmers. When you're looking at Rowland code, you don't have to keep something else modeled in your head. You can offload that down to the page. Just look at the page and reason about what's in front of you.
Starting point is 01:01:22 So it's actually easier rather than harder. I think maybe we could have spent another hour trying to unpack this. You know, perhaps we can have you again. I think we probably have you again on at some point. But before we wrap up here, we just talked at the beginning of the show about how this project was a cooperative. What kind of help are you looking for?
Starting point is 01:01:49 Are you looking, are you like hiring people? Are you looking for contributors? Like how can people contribute to this project? There are a huge number of ways in which people can contribute. Everything from helping with development, to standing up nodes, to helping us with the business development,
Starting point is 01:02:12 to writing applications on top of the platform, to helping us with governance, to helping us with analysis of various economic models. The surface area of the work to be done is enormous. And I want to say that, you know, it's, for me, what ties this all together is that we're in this all hands-on-deck phase anyway.
Starting point is 01:02:45 The reason I'm doing this is because the blockchain represents this fundamental shift in coordination technology for people. To date, people have had, you know, sort of two and now maybe an emerging third class of coordination technologies. There's financial instruments. Financial instruments allow human beings to organize themselves and coordinate amongst each other. Government or governance allows people to coordinate amongst themselves, right, to take care of each other in the planet.
Starting point is 01:03:18 Social media has emerged as a new class of coordination technology and the blockchain, is one of the newest forms. We need this new coordination technology because we are in an all-hands-on-deck situation. When you have someone like Neil deGrasse Tyson standing up on CNN saying, we don't have the technology to move all our coastal cities inland
Starting point is 01:03:49 by 20 miles in the next 20 years, we need to recognize that the situation is dire. we have some of the biggest named scientists on the planet like moving from their well-established careers into working on climate change because the problem is so dire but we need a coordination technology that's going to help us move faster
Starting point is 01:04:13 so the truth the context in which Rowland and Archine is happening is that we're in an all-hands-on-deck situation so I really want people to figure out what their strengths are and plug in in whatever way they can because we have to. Well, on that note, and that's a really good point and a really good note to end on, I think, but on that note, there is a governance forum that is happening soon. Can you tell us more about that? Yes. Yes. So the arching, the first archane governance forum is happening in Seattle,
Starting point is 01:04:52 on February 15th through the 18th. And it's open to any member. of the cooperative, and it takes you all of five minutes and 20 bucks to become a member of the co-op, just like REI. So please consider coming and adding your voice into how the cooperative is organized and run, and also be exposed to some other technologies for coordination that aren't just blockchain. Great. And I did want to mention, we didn't talk about this during the show, but if people want to really get a good solid understanding, I guess more on the technical side, but your talk at DefCon,
Starting point is 01:05:34 I was watching it, I watched it twice earlier today, and I thought it was really, really great. I didn't see it at DefCon, but, and the reason why is because you, you take this concept of, correct by construction computation,
Starting point is 01:05:47 and you sort of map it out in a visual way that makes a very, I mean, a lot easier to understand. at least for me, as I tend to understand things more visually. So I would encourage anyone to have a look at that talk. We'll put the link in the show description. Thank you.
Starting point is 01:06:03 Thank you. So thank you guys for coming on the show. It was very interesting to learn more about our chain. And great to have you back on, Greg. So we release episodes of Epicenter every Tuesday. You can find us on iTunes, SoundCloud, or your favorite. podcast app on iOS or Android. You can also watch video versions of the show on YouTube.
Starting point is 01:06:30 Follow us on Twitter. And we're doing something new right now. We are experimenting with a platform called Gitter, which some of you may know and use. So we're going to have a Gitter channel. There's not a whole lot of people in there right now. It's mostly just like me and Mayer and Brian. But we do want to try to build a community there,
Starting point is 01:06:51 you know, get people talking. and there's a channel there for feedback. So if you have a feedback about the show, you can let us know there and you can reach out to us there. And so you can search for Epicenter on Gitter or the simple way to get there is epicenter.tv.tv.com.
Starting point is 01:07:05 That's G-I-T-E-R. If you want to support the show, there's lots of ways you can do that. One of those ways is by leaving us an iTunes review. We really appreciate those and it helps people find the show. Or you can leave us a tip and the tipping addresses will be in the show of the description.
Starting point is 01:07:19 Thanks so much and we look forward to being back next week. Thank you.
