Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Amrit Kumar & Xinshu Dong: Zilliqa – A Scalable Sharded Public Blockchain

Episode Date: November 14, 2017

Scaling public blockchains such as Bitcoin and Ethereum is a long-standing technical challenge. In this episode, we cover Zilliqa, which has pioneered a design to scale public blockchain throughput with the number of nodes by sharding the blockchain. Sharding has hitherto been proposed as a scaling technique by Ethereum. Zilliqa is one of the first projects to propose a concrete design, with an operational testnet processing a couple thousand transactions per second across several shards. Our guests, Xinshu Dong (CEO) and Amrit Kumar (Crypto Lead), walk us through the objectives, protocol design and fundraising structure of this innovative project. We also briefly cover Zilliqa's novel approach to designing a smart contract language and execution environment.

Topics covered in this episode:
- Sharding as a scaling technique
- Different kinds of sharding, including state sharding and non-state sharding
- How Zilliqa works under the hood – consensus algorithms & transaction processing
- Zilliqa's novel approach for a smart contract language
- Details of a new cryptocurrency, Zillings

Episode links:
- Zilliqa Website
- Zilliqa Blog
- Zilliqa Whitepaper
- Prateek Saxena and his research group at NUS Singapore

This episode is hosted by Meher Roy. Show notes and listening options: epicenter.tv/209

Transcript
Starting point is 00:00:00 Hi, welcome to Epicenter, the show which talks about the technologies, projects and startups driving the crypto economy. I'm Meher Roy, and today we have a very interesting episode discussing sharding and a project that has a unique approach to sharding blockchains. We have on the show Xinshu Dong and Amrit Kumar from Zilliqa. Xinshu is the CEO of Zilliqa and Amrit is the crypto lead there. Zilliqa is an innovative blockchain platform which has a sharded blockchain already running as a private testnet. Xinshu and Amrit, welcome to the show.
Starting point is 00:01:09 Thank you. It's our pleasure to be here. Thank you, Meher. Thank you. So before we start talking about Zilliqa and what that project is trying to do, tell us a bit about your backgrounds and how you came to be involved in the blockchain space. Sure. So maybe I can start.
Starting point is 00:01:23 My name is Xinshu. I'm from the more technical side. In my PhD, I worked on cybersecurity for web browsers and web applications. Later I worked on security for control software, control software for larger systems like smart grids, transportation, the Internet of Things, things like that. Of course, I started to develop a personal interest in blockchain itself, until, you know, Prateek Saxena talked to me again. You know, I used to work with him very closely during my PhD time.
Starting point is 00:01:56 So after a few years, he talked to me again: hey, I started a company, you know, we're working on blockchain, why don't you come and join us? So that's the time I really started to think seriously that I should, you know, really work full time on blockchain. And I did so. After a few months, I joined his company and started to develop this scalable blockchain technology into, you know, commercialization, into deployment as a permissioned blockchain. You know, that was, you know, more than one year ago. And that's how I started working on blockchain. Of course, you know, this year we started this new project, Zilliqa. I'm very excited to bring that technology to a much larger scale as a public blockchain.
Starting point is 00:02:37 You know, that's very briefly about myself. Yeah, so for me, I have a PhD from Inria in France. And then I came over to Singapore. I started working as a postdoc under Prateek Saxena. And then I started discovering blockchain and all these different problems and solutions. My essential background is in applied cryptography and privacy. So again, it was the same kind of story. Prateek called me up and said, yeah, are you interested in developing something like Zilliqa? I said, why not? And then I joined the project.
Starting point is 00:03:11 So, yeah. Cool. So I think very few of our listeners would explicitly know about Prateek Saxena and his group at NUS Singapore. So I've seen a lot of papers from this group, including, so there was a paper called Demystifying Incentives in the Consensus Computer that posited that there was some kind of verification dilemma in a blockchain like Ethereum, and that, translated through multiple different technological jumps, became like Truebit.
Starting point is 00:03:50 Another project to come out of this group is like Loi's work, starting with secure smart contract languages and then ultimately decentralized exchanges and Kyber Network. So that is also, so Loi also did his PhD, as far as I know, in this group. And another project that I know personally is just this one, Zilliqa. So there have been at least, like, three interesting projects that have come out of this group. That's unusually productive for an academic group in any university, I would say. Yeah, yeah. You know, we're very fortunate to be part of the group as well. So, you know, the whole idea of Zilliqa, you know, is inspired by this original academic research paper,
Starting point is 00:04:40 co-authored by Loi and Prateek. It's called Elastico. You know, that's where all these high-level ideas of sharding started. Okay, so let's dive down into Zilliqa. So Zilliqa is a sharded public blockchain with its own cryptocurrency called Zillings. But before we dive down into exactly how the technology works, let's talk a bit about scalability in general. Our viewers know about the scalability problems of blockchains, because we have talked about that many, many times. The thing that is perhaps a bit less known is: what exactly is sharding? We have this word, but is there, like, a good definition for it?
Starting point is 00:05:29 I don't know. I guess I would not try to give a very formal definition of sharding, because I think this term sharding was invented many years ago in the database community. But when, you know, people bring this idea of sharding into public blockchains to try to make the blockchain more scalable, the high-level idea of sharding is quite straightforward. Let's just assume we have a blockchain network of 1,000 nodes. So what sharding does is to divide these 1,000 nodes into, let's say, 10 shards, you know, with 100 nodes each. So then what it does is to process different transactions
Starting point is 00:06:09 in different shards, with disjoint sets of transactions. So then you can achieve some level of parallel processing. This in itself improves the throughput of the blockchain by a large factor. So this is the high-level idea of sharding. Of course, you know, to really work it out, to make sure this process is secure and very efficient, you know, that's where, you know, the complexities come in. So sharding, I presume, is this technique that has been used in different sorts of database designs across time. Right. And now we are bringing that knowledge and those techniques from databases into the blockchain space, which has its own unique challenges when you shard the blockchain.
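The high-level idea just described, 1,000 nodes divided into 10 shards of 100, each working on its own set of transactions in parallel, can be sketched in a few lines of Python. This is an illustration only, not Zilliqa's actual code; the round-robin assignment is an assumption for the sake of the example.

```python
# Illustrative only (not Zilliqa's code): divide a network of 1,000 nodes
# into 10 shards of 100 nodes each. Each shard can then process a
# disjoint set of transactions in parallel, so throughput grows roughly
# with the number of shards.

def partition_into_shards(node_ids, num_shards):
    """Round-robin assignment; returns {shard_index: [node, ...]}."""
    shards = {i: [] for i in range(num_shards)}
    for idx, node in enumerate(node_ids):
        shards[idx % num_shards].append(node)
    return shards

nodes = [f"node-{i}" for i in range(1000)]
shards = partition_into_shards(nodes, 10)
print(len(shards), len(shards[0]))  # 10 100
```

If each shard sustains roughly the throughput of a single unsharded chain, the aggregate throughput scales with the number of shards, which is the whole point of the technique.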
Starting point is 00:06:59 Right. So, like, is sharding a single monolithic technique, or are there different kinds of sharding? I think there are different possibilities or different variants of sharding. So number one, I think when most people talk about sharding, they actually refer to something we would call state sharding. So that's maybe the closest, you know, derived idea from the database sharding concept. So what state sharding does is essentially it will divide the nodes in the network into different parts. So when one node belongs to one part or one shard, that node only stores data and information for that particular shard.
Starting point is 00:07:45 In other words, it doesn't know what's going on in another shard, what's going on outside the shard itself. So this clearly has several advantages. So number one, it significantly reduces the size of the data each node needs to store and reduces the number of transactions each node needs to process. And it also reduces the volume of data that needs to be propagated across the network. Because, you know, you just, you know, send some data to this shard. You don't need to send the same piece of data to another shard, for example. So these are all the advantages of state sharding. That's why this concept is very interesting.
Starting point is 00:08:28 But on the other hand, there are also many challenges to, you know, doing this properly. So number one is security. So one node stays within one shard, and those outside that shard don't understand things going on inside this shard. So if attackers just focus all their power on attacking this particular shard, you know, over time, this may become an issue. So this is number one. And number two is about redundancy or resilience, because instead of, let's say, a whole network of 20,000 nodes storing 20,000 copies of a particular piece of information, now you only have, let's say, 500 or 1,000 nodes keeping that piece of information. And once one-third or half of those nodes have a problem with that information, either they are compromised or attacked or deleted or somehow just lost such information,
Starting point is 00:09:26 it's a problem, because nobody else in this network retains this particular piece of information. So these are some of the issues, as I see it. If I may add to this, I mean, this whole idea of state sharding per se, it essentially came from the Bitcoin model, right? Where you have this UTXO model, and if you want to know whether an output has been spent or is still unspent, you have to store the entire history.
Starting point is 00:09:53 And that history can be very huge. So you need a way to kind of divide that history into chunks, so that each node only has to store a part of that entire history. So the story comes from this UTXO or Bitcoin model. Yeah, this is state sharding. Of course, we considered this idea, or this direction of possibilities, when we were designing Zilliqa's protocols. Our sort of conclusion at that time was that it's very challenging to do that.
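The security concern raised above, an attacker concentrating power on a single shard, can be made a bit more concrete: if nodes are assigned to shards at random, the chance that any one shard ends up with more than one-third malicious members shrinks rapidly as the shard grows. The Monte Carlo sketch below uses made-up numbers (2,000 nodes, 25% malicious, a shard of 600) purely for illustration; these are not Zilliqa's parameters.

```python
# Illustrative Monte Carlo estimate (parameters are assumptions, not
# Zilliqa's): with random assignment, how often does a single shard of
# size n, drawn from N nodes of which `bad` are malicious, end up with
# more than 1/3 malicious members?
import random

def shard_compromise_prob(N, bad, n, trials=1000, threshold=1/3):
    population = [1] * bad + [0] * (N - bad)
    hits = sum(
        1 for _ in range(trials)
        if sum(random.sample(population, n)) > threshold * n
    )
    return hits / trials

# 2,000 nodes, 25% malicious, shard of 600: the malicious share of a
# random shard concentrates near 25%, well below 1/3, so a compromised
# large shard is very rare.
print(shard_compromise_prob(2000, 500, 600))
```

With a small shard (say 30 nodes) the same experiment produces a noticeably higher failure rate, which is why sharded designs tend to insist on shards of several hundred nodes.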
Starting point is 00:10:23 We would rather need more time to work on it. On the other hand, what Zilliqa currently does is something we can call network sharding or transaction sharding. That means each node still stores all the current state, up to date, let's say, account balances for the entire blockchain network. So it's not state sharding. But on the other hand, we shard the processing of the transactions. That means when we have different shards, we will send different transactions to different
Starting point is 00:10:56 shards for processing. And eventually, all such information for the next block will be aggregated and still sent through to all the nodes. So you can see easily here, the pros are very simple: we basically avoid many of these security and resilience challenges in state sharding. But on the other hand, there are additional challenges in doing only network sharding compared to state sharding,
Starting point is 00:11:22 because every node still needs to store lots of information, and we still need to propagate lots of information to all the nodes. So, you know, these are some of the places where Zilliqa is, you know, doing the innovation. Okay, so when you were speaking, Xinshu, like, sort of an image came into my mind, which is that we can imagine, like, any blockchain first, like a blockchain like Bitcoin. Let's imagine that as, sort of, like a government office, right?
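The network-sharding flow Xinshu describes, route each transaction to one shard, process the shards in parallel, then aggregate a single block that every node applies, can be sketched as a toy in Python. This is illustrative only; in particular, routing by a hash of the sender is an assumption made here so one account's transactions always land in the same shard, not a statement of Zilliqa's actual dispatch rule.

```python
# Minimal sketch of network/transaction sharding (illustrative, not
# Zilliqa's implementation): every node keeps the full account state,
# each transaction is routed to one shard for processing, and per-shard
# results are aggregated into a single block that all nodes apply.
import hashlib

NUM_SHARDS = 4

def shard_for(sender: str) -> int:
    # Hash the sender so all of one account's transactions land in the
    # same shard, avoiding cross-shard double spends in this toy model.
    return hashlib.sha256(sender.encode()).digest()[0] % NUM_SHARDS

def process_block(txs, balances):
    # 1. Dispatch: split transactions into disjoint per-shard sets.
    shards = {i: [] for i in range(NUM_SHARDS)}
    for tx in txs:
        shards[shard_for(tx["from"])].append(tx)
    # 2. Each shard validates its own set (done sequentially here, but
    #    the sets are independent and could run in parallel).
    validated = []
    for i in range(NUM_SHARDS):
        for tx in shards[i]:
            if balances.get(tx["from"], 0) >= tx["amount"]:
                balances[tx["from"]] -= tx["amount"]
                balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
                validated.append(tx)
    # 3. Aggregate: the combined block is applied by every node, so all
    #    nodes still hold the full, up-to-date state.
    return validated, balances
```

Note the trade-off the guests describe: the dispatch and validation work is divided, but step 3 means every node still stores and receives the whole state, unlike state sharding.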
Starting point is 00:11:59 Like, so there's this, there's this government office, and there are, like, employees in that office. And all of these employees, they are just accountants, right? And these accountants are basically all the nodes. So these are the full nodes of the system. So whenever, like, you want to process a transaction, you basically, like, go to this building and you, like, put the transaction in. And all of these employees, they maintain their own book, and they're going to verify that your transaction is right, and they're going to add it to the book.
Starting point is 00:12:35 And the working of this sort of office, this governmental office, is designed in a way that each employee will add the same transactions in the same order in that book. But that book is replicated across all of the employees. So if there are, like, a thousand employees in this building, each one of them has the same copy of that book. And at any instant of time, each of these employees is working on the same set of new transactions to add to this book.
Starting point is 00:13:10 They're verifying the same set of transactions, right? And so that's like Bitcoin, that's like Ethereum today. So each employee is redundantly doing that work, and all of them are redundantly maintaining the same copy of this book. Now what Zilliqa does is, like, one step ahead. So what Zilliqa does is: okay, there are the same 1,000 employees now in the Zilliqa blockchain, and the same 1,000 employees need to maintain the same copy of that book, and that book contains all the transactions that happened in Zilliqa. But the advantage is that if there are, like, 1,000 employees, then we can have, say, groups of
Starting point is 00:13:56 100 employees in some way that are working on different transactions. So let's say employee 1 until employee 100 work on one particular set of transactions at the same time, employees 101 to 200 work on a different set of transactions at the same time,
Starting point is 00:14:11 and so on. So there are, like, 10 subgroups in this government office, each of them processing their own set of transactions. But each employee ultimately needs to keep track of transactions that not only their group has processed, but also transactions processed by the other groups as well.
Starting point is 00:14:33 So that's sort of the Zilliqa model. And then you could have a higher-level model, which is like state sharding, in which these 1,000 employees are again divided into 10 groups of 100 each. They are processing their own sets of transactions, and each employee is maintaining the books, each employee is maintaining the ledger or UTXO set or book, only for the transactions processed by their group. That's right.
Starting point is 00:15:04 So if I'm in group number seven, my book contains only the transactions processed by group number seven. And if my friend is in group number five, their book only contains transactions processed by group number five. But even though my book and my friend's book have no intersection in the transactions we have processed, somehow there's a way of making sure that there is no double spending. Like, I'm not processing a transaction and he's processing another transaction such that they conflict with each other. So that kind of thing, which is the ultimate
Starting point is 00:15:39 in sharding, would be state sharding. Yes, yes. And Zilliqa is not state sharding. It is that intermediate level, where I as a bookkeeper need to keep track of all of the transactions that other groups apart from mine are also processing, but, per se, my computational cost in verifying transactions is limited to those processed by my group only. Exactly. So I think this is a very good analogy. I just want to clarify a little bit on the Zilliqa model here. So I think state sharding is very interesting, but, you know, there are many challenges. We discussed security, redundancy and this cross-shard communication. Basically, these are the main challenges we would have to work very hard on to resolve. But at this stage I still cannot know for sure
Starting point is 00:16:31 whether state sharding is the ultimate objective for sharding, for example. Because if you look at the Zilliqa model, you know, all these accountants process different employees' transactions, and then eventually every accountant still gets updated on every other accountant's processed transactions, right? But this is the current implementation of Zilliqa, I would say. The design of Zilliqa is not limited to this. So for instance, to address this issue of, you know, excessive storage requirements. So basically, each accountant needs to know every single transaction, right? We can reduce that, because when we say we don't do state sharding, that means every accountant needs to know the balances for every employee. But on the other hand, it is not necessary that every accountant
Starting point is 00:17:28 also keeps a precise record of all the transactions. There's a difference. If you look at, like, Ethereum's account-based model, you just need to keep the balances. You don't need to know all the histories. Most of the applications, as I see it, only need to understand the final balances, but some applications will also need to, you know, access all the history, like Etherscan, for example. So in that sense, we can, you know, reduce the storage for every accountant by applying another mechanism, something like a DHT, for example, you know, a distributed hash table, to allocate, you know, which node stores which historical data. So that's only about the historical, you know, transactions.
Starting point is 00:18:13 But this is sort of orthogonal to the fact that every accountant still knows the account balances for all employees. That's just one thing I want to add. So yeah, that's super important. So basically, what you're saying is: even though, if I'm in, like, group one and I need to know something about group seven, there are ways in which I can reduce the total set of knowledge I need about what went on in group seven, group eight. So instead of knowing all of the transactions that happened, I could just know the state
Starting point is 00:18:52 of the ledger after those transactions were processed. Exactly, exactly. So, like, one of the doubts I have in my mind, and I think this is a doubt in many people's minds, is: if you look at the blockchain space today, there are two kinds of projects broadly. There are, like, projects like Cosmos and Polkadot that are promising interoperability, and now we see just the beginning of projects that are actually offering sharding. So, like, Zilliqa is probably the first, at least the first we have interviewed. What is the difference between
Starting point is 00:19:30 interoperability and sharding? Yeah, this is a very good question. You know, to me, you know, I can't really draw an absolute line between interoperability and sharding, especially if you look at things like Plasma, for example, where you have a main chain and you have side chains. So in my view, usually when we say sharding, it's still about sharding within one blockchain. So it's one single unified blockchain. But yeah, I mean, whether over time, when interoperability is very mature, the boundary will still be very clear or not, I'm actually not very sure.
Starting point is 00:20:13 Yeah, I mean, I completely agree that, you know, you have different problems in the blockchain space. And different blockchain technologies, they come up and they solve different problems. Now, the question is, how do you unify them, right? You can't have one million blockchains all standing alone. You want a way so that one blockchain can talk to another blockchain. And this is where this interoperability comes in. And we definitely need a technology like this so that one blockchain technology
Starting point is 00:20:41 can talk to another blockchain technology and probably, you know, take benefits from the other technology. But they're completely complementary things, right? Solving, let's say, the scalability problem, and designing a protocol that allows you to interoperate, or talk, communicate with other blockchains, those are kind of orthogonal problems.
Starting point is 00:21:00 So what do you mean when you say they're orthogonal problems? It means that they solve two different problems. Blockchains which address scalability, or let's say throughput, try to address: what's the technology that you should employ, what's the protocol that you should employ, so that you can increase the throughput of that blockchain.
Starting point is 00:21:21 So this could be a blockchain that tries to solve scalability. You could imagine blockchains such as Zcash, which are mainly focused on privacy. They want to ensure that whenever you do a transaction, it's private, which could be, let's say, hiding the amount, hiding who is the sender, who is the recipient, and all these kinds of features.
Starting point is 00:21:39 Now, both these, you know, blockchains, they solve different problems, right? One is solving scalability, the other one is kind of attempting to solve the privacy part. Now, you need something in between that can allow you to communicate from one blockchain to the other one. So this other blockchain will be solving another problem, and all three are very crucial for, you know, for the blockchain ecosystem to really work, but they all solve different problems. It's like, we imagine this government office, this government building with a thousand employees
Starting point is 00:22:10 working in it. And scalability is: okay, we have a thousand employees, how do we get them to do the most work securely? Right? Yeah. And interoperability is sort of the problem that, okay, let's say we have not one, but five different government offices, each having a thousand employees. How do we make sure, like, that we can have things where I can submit a paper to this
Starting point is 00:22:42 first office, and then you can shuffle it around between these different government offices, and do something which, like, sort of allows these offices to collaborate. So interoperability is like a postal system between these offices or these government buildings. It's more like a postal system, a good postal system, whereas scalability is getting the employees in one building or in one office to be able to do more work than what they can today. Yeah, yeah, that's how I understand it. Of course, you know, the difference between interoperability and sharding is really the difference between, you know, whether this is one government body or five government bodies. Okay, okay, makes sense.
Starting point is 00:23:32 So let's go down to like how Zillica achieves this, this level of sharding. So could you give us a broad overview of how the network is designed? Yeah, Zilica is, you know, it's slightly complicated, but I think, you know, it's also very based on several important building blocks. So number one, we need to make sure not every node can create arbitrarily many identities, because, you know, if you can create arbitrary many identities in the consensus process, if you oversimplified as a voting, you're outvote for the good guys. So that's why we need to establish some way to sort of that people only create proper identities. You know, that's something we do we don't solve.
Starting point is 00:24:23 So that's where we have approval work. So people have to do a proof of work before they can enter into our consensus protocol. And okay, now the question comes, how do we do this? How do we sort of enforce a proof of work? Who will check? Who is running proofwork correctly or not correct? So that's why we established something called a directory service committee. This is just one name we give.
Starting point is 00:24:49 Basically, this is an overarching layer. So we all have no competing proof work to get into this particular overarching committee. So once this committee is formed, this committee becomes the body that will check other nodes. So when other nodes doing proof of work, it will check whether this is correct or not correct. And then those notes who have done correct proof work will be allowed to participate in the consensus process. And this overarching directory service committee again is the body that sort of devise all this, you know, good nodes, nodes who have passed DOW into different shots.
Starting point is 00:25:33 And then this overarching committee also decides how do we send different transactions to different shots for processing and eventually aggregate all the process transactions into the final block. So if we go back to the government and accountants model, so this is like the top governing body. The selection criteria are also higher. And then once this government body is selected, it can, you know, sort of allocate different accountants to different departments and decide which department processes which sort of employee data. So this is a high-level design of the network. One key thing we have to make sure, you know, this is done properly is that this so-called directory service committee has to be decentralized. You know, otherwise it's very easy.
Starting point is 00:26:25 We just pick five or even 100, so-called super nose. We just trust them, let them decide. That in our view is not ideal because people can keep their power attacking these nodes if they are static. And this nose may be corrupt in which way. So this Directorate Service Committee has to be decentralized. So the election of this committee itself is also by proof of work. So that's fair to everyone. And we keep updating the composition of the committee.
Starting point is 00:26:55 you know, to make sure this is a dynamic committee. It's not like static forever. So this is a high-level idea of Zilica's network design. So would you like to add something, Amrit? Well, I mean, as she just said, you know, you want to have, you want your networks. Imagine your network, right? You want to divide that network into subparts. But you need some, you know, somebody or a group that is going to control that in a way that
Starting point is 00:27:25 it's going to say, you know, you go to this shot, you go to this shot, you go to this shot. So you need a body that's going to, you know, run that thing, right? And that is the de-is-covering. And as you should say, you don't want it to be one person because then you can attack that. You want it to have, because this body will have the right to say, you know, this, this node is going to be assigned to this shot. So you don't want to have an app, you don't want to give one person absolutely try to do whatever he wants. And that's why you need to have a sufficient number of,
Starting point is 00:27:55 notes in that body, in that committee, and then you want to elect them in a fair manner as well. So these are the crucial points that you have to do when you want to do sharding. So apart from the directory service committee, like, so we have these groups that are that are processing transactions and this directory service is sort of the administration that allows sort of these these groups to even coordinate and these groups to even form process transactions and then maybe we even coordinate. So that's the directory service committee that's in the sort of in the center. And then how are these groups?
Starting point is 00:28:37 And then you have groups that like process transactions. So if I'm a node, if I have a new node that is like joining this network, basically like a new accountant or new employee coming into this network, I have the choice of either trying to go and become a, member of the directory service committee or only become a member of one of these groups or become try to become members of like one group and the directory service committee right yeah yeah this is great understanding okay okay so so so tell us like like how how I can enter the directory service committee or how I
Starting point is 00:29:17 could let's say let's say let's take the earlier example is like there's a thousand nodes with like 10 shards or 10 processing groups each with a hundred nodes. So if I want to let's say join chart seven or group seven, how can I do that? And then if I want to join the Directory Service Committee, how can I do that as a new node? So in general, a node cannot choose which shot or which group it wants to join because that sort of decided based on some randomness in the you know, proof of work result it generates. It's designed that way. Otherwise, if they can choose which shot I want to join, I can, you know, group with all these bad guys. Let's all go to shot seven
Starting point is 00:30:03 and compilions that shot. That's a high-level idea. So what happens is at certain intervals, let's say at every two hours, roughly two hours, it's based on block time, of course. Let's say roughly at every two hours. The existing Directory Service Committee will announce and open competition. It will announce it now there is a new proof of work scheme. Everyone can participate. And then any node existing in this network or outside, that's fine. Any node can just join this competition and try to solve that puzzle of this proof of work. So the Wigner will be sort of agreed on by existing members of this Directorate Service Committee. And then it will join this committee. By doing that, the committee will
Starting point is 00:30:51 also ask the oldest member in the committee to leave this this is the way how it's updated so the membership of the committee is like churned or changed every two hours and then the process for like entering the committee is to solve this proof of work so yes let's say like in two hours there's this proof of work so i participate in that puzzle and if i solve it i become part of the committee and then the oldest member of that committee will end up up leaving yes yes right imagine this is a chain this is this is a blockchain as well so whenever there's a winner of that proof of work that name will be added into a new block and the block is appended to that chain and then you know if you go back to
Starting point is 00:31:40 along the chain you then you can remove the oldest block so what happens in those two hours so So let's say like, okay, I'm a new node, I ended up, I succeeded in joining the DS committee. And now for the next two hours there's not going to be another puzzle or another node joining us. So the set of the DS committee nodes is now known for two hours. What do we do during this period? So two hours is just an example, you know, we can do it every 30 minutes for example. So this is the interval where we'll sort of elect a new member of the DS committee.
Starting point is 00:32:17 And then, you know, at that time, after we, you know, elect a new member into the committee, the DS committee can call another round of proof of work to sort of incorporate new notes. And this process, of course, can be done more frequently. You know, it can be down like, let's say, every 20 minutes as well. So these are parameters who can tune. But basically, the DS committee can announce another proof of work scheme to ask new notes to join. They were not joining the DS committee, but they were joined into that groups. for doing consensus and process transactions.
Starting point is 00:32:51 So this is another pro-for scheme we leverage on Zilica. And this also, you know, on the Stambini, this is also much easier because it's not that competitive. In the D.S committee election is very competitive. Everyone is competing for that work position. And this time is not. For example, we are accepting 20,000 nodes. So as long as you are, you know, you are a reasonable computer, you should be able to join.
Starting point is 00:33:16 And then, according to the result of your PoW, you will be sharded into the respective committees, and the logic for this sharding is decided by the DS committee. Amrit, anything to add? Yeah, so there are two points here, right? The basic idea is that you want to elect members into the shards, and you want to elect members of the DS committee. Those are the two points.
Starting point is 00:33:40 Now, there are two ways of doing it. One is that you elect all of them together. How could you do that? You say that everybody is going to solve a PoW and submit it, and you could say: look, I have received all these PoW solutions, I'm going to sort them, and I will take the smallest one, let's say,
Starting point is 00:33:57 because there's randomness in PoW, so that cannot be predicted. So you can say: I'm going to sort all of them, the smallest solutions will go to the DS committee, and the rest will go to the shards. That's one way of doing it. But we wanted to have some freedom.
Starting point is 00:34:14 So we said: we are going to decouple these two elections. We first elect the DS members, and then ask them to elect the shard members. That's why we have one PoW to elect the DS committee member and a second PoW to elect the shard members. And if you decouple them, you have the freedom to say that the second one can be done more frequently than the first one, and so on and so forth.
Starting point is 00:34:42 So that gives you freedom in deciding how frequently you want to elect the members in the shards and how frequently you want to elect a member of the committee, and that's why we decoupled the proof-of-work. Okay, so basically, if I'm a new node, I can become a member of a shard by solving this PoW. And the DS committee lasts for, let's say, a longer duration, and within this longer duration there are shorter intervals with elections, or puzzles, that determine who belongs to which shard, and that is a separate proof-of-work. And I could conceivably participate in that as well? Yes. And, let's say, be allocated to shard seven, and then I go and process transactions inside shard seven? Yes, and after the next run maybe you are reshuffled to shard five. Okay, so in the secondary elections, I'm in shard
Starting point is 00:35:47 seven first, then shard five, then shard three, then back to shard seven, then shard nine. So I'm being shuffled around, and not only me but all of the nodes are being shuffled around really well. And if I'm in the DS committee, after a certain point of time I will become the oldest member of the DS committee and I will have to exit; if I want to rejoin it, I will have to win the lottery for the DS committee again? Yes. So, for example, winning the lottery for the DS committee once might entitle me to be there for quite a long while,
Starting point is 00:36:24 while the oldest nodes keep getting churned out. Right. Okay, that makes sense. So let's say now I'm assigned to shard number seven, and there are a hundred other nodes assigned to shard number seven for the next 20 minutes. How do we process transactions in that shard? How do we come to consensus amongst these 100 nodes?
Starting point is 00:36:50 Yeah, so what happens is this. Let's say a user creates a transaction and it goes to the network, and let's suppose for the time being that it ends up in some shard. The transaction has a sender's address, right? So let's imagine that you have only two shards for the time being, just to simplify the scenario.
Starting point is 00:37:10 So we have only two shards. The way this transaction goes through is that the sender's address will be checked; its last bit could be 0 or 1. If it's 0, the transaction will go to the first shard; if it's 1, it goes to the second shard. Now, if you have more shards, of course,
Starting point is 00:37:29 you can do modular arithmetic again, and you end up in a specific shard. So a user sends a transaction, it goes to a specific shard depending on the sender's address, and then there will be a consensus protocol within that shard for that transaction. So basically this is a way of automatically preventing double-spends in some way, because if I'm the sender of a particular transaction, my transaction can end up in only one shard and cannot go to two shards; my address is known, and that address triggers a deterministic computation which
Starting point is 00:38:16 gives as output which shard this transaction belongs to. Exactly. So if you're assigning the transaction with respect to the sender's address, you have this advantage, right? Yeah. But if you instead did it with respect to the receiver's address for some reason,
Starting point is 00:38:30 you won't be able to prevent double-spends. So sharding with respect to the sender's address automatically gives you a kind of double-spend prevention. So this is a transaction partitioning scheme in some way, right? It's like a government office: these transactions are coming in and we need to determine which group they are going to go to.
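The sender-address rule described above amounts to reducing the address modulo the number of shards. A minimal sketch — the function and constant names are ours, not Zilliqa's; taking the last bits of the address, as in the two-shard example, is the same operation when the shard count is a power of two:

```python
NUM_SHARDS = 4  # illustrative; in Zilliqa the DS committee fixes this per epoch

def assign_shard(sender_address: bytes, num_shards: int = NUM_SHARDS) -> int:
    """Map a sender address deterministically to exactly one shard by
    interpreting the address as an integer and reducing it modulo the
    number of shards. The same sender always lands in the same shard,
    which is what gives the double-spend prevention discussed above."""
    return int.from_bytes(sender_address, "big") % num_shards
```

With two shards this reduces to checking the last bit of the address, exactly as in the example: an address ending in an even byte goes to shard 0, an odd byte to shard 1.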
Starting point is 00:38:56 And this is the simple heuristic. Are there other heuristics you considered, or was this one the clear winner? Well, I don't remember if we tried anything else. Yeah, I think it was so obvious and gave us a pretty nice solution; why bother thinking about anything else? Okay. So what is the consensus algorithm inside one shard, and what are the performance characteristics of an individual shard? It's largely based on practical Byzantine fault tolerance, so PBFT. PBFT performs very efficiently for a small-size network, a network of like 100 or 200 nodes; it can generate a very high throughput.
Starting point is 00:39:47 On the other hand, when you have a larger network, a network of thousands or tens of thousands of nodes, the performance of PBFT deteriorates. This is in contrast with Nakamoto-style protocols, where the throughput is very stable whether it's a large network or a small network. But PBFT has several advantages. The main advantage we like about PBFT is that it gives finality.
Starting point is 00:40:15 That means once you agree on a block, or a few transactions you want to accept, that's done; there's no way you can rewrite that history. So what we do is leverage the existing literature on collective signing with Schnorr signatures to address one particular performance issue in PBFT. Basically, PBFT is all about nodes talking to each other, saying: I have seen this, you have seen this, okay, I have seen enough people telling me they have seen this.
Starting point is 00:40:49 When they send such messages, they have to use digital signatures to say: this is really from me. Otherwise, malicious nodes could just craft messages which seem to be from all the other nodes; that's why you need digital signatures. But when you have a very large network, say 1,000 nodes, the size of the signature data becomes a performance bottleneck when you send these messages across. So that's why we leverage Schnorr signatures, to reduce the size of the signature from what we call O(n).
Starting point is 00:41:23 That means it goes from growing linearly with the number of nodes in the network to a constant size. That's a big performance benefit we obtain there. And Amrit will tell us more about the issues with directly leveraging Schnorr signatures; there are also security issues that we have to address with additional steps. But that's one idea. Another idea we developed in this efficient consensus algorithm
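The size argument can be made concrete with a toy Schnorr-style multisignature. The sketch below uses a tiny, deliberately insecure group purely to show that the combined signature stays a single (R, s) pair no matter how many nodes co-sign; and as hinted here, naive key/nonce aggregation like this has known security issues (e.g. rogue-key attacks) that production schemes such as CoSi/MuSig must address. All names and parameters are ours:

```python
import hashlib
import secrets

# Tiny, INSECURE demo parameters: P = 2*Q + 1, with G generating the
# order-Q subgroup of squares modulo P.
P = 2039
Q = 1019
G = 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1          # secret key
    return x, pow(G, x, P)                    # (secret, public) pair

def challenge(R, msg):
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % Q

def cosign(msg, keypairs):
    """Each signer picks a nonce; all commitments and partial signatures
    are folded into one (R, s) pair whose size is independent of the
    number of signers -- the constant-size property discussed above."""
    nonces = [secrets.randbelow(Q - 1) + 1 for _ in keypairs]
    R = 1
    for r in nonces:
        R = R * pow(G, r, P) % P              # aggregate commitment
    c = challenge(R, msg)
    s = sum(r + c * x for r, (x, _) in zip(nonces, keypairs)) % Q
    return R, s

def verify(msg, pubkeys, sig):
    R, s = sig
    Y = 1
    for y in pubkeys:
        Y = Y * y % P                         # aggregate public key
    c = challenge(R, msg)
    return pow(G, s, P) == R * pow(Y, c, P) % P
```

Verification works because g^s = g^(Σr_i + c·Σx_i) = (Πg^(r_i)) · (Πy_i)^c: one modular equation, regardless of the number of co-signers.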
Starting point is 00:41:54 is that we played around with different network topologies. You can have a random topology; you can have a tree topology, which was the original proposal for leveraging Schnorr multisignatures; and you can have a star topology. Eventually we chose the star topology, as we think it's the most efficient way to send messages across. So there are different aspects we had to enhance on top of PBFT. Amrit can talk more about the Schnorr signatures, but just before that, I would like to
Starting point is 00:42:28 highlight a good point on finality. If you compare, let's say, a consensus protocol à la Nakamoto with a classical BFT-style consensus protocol like PBFT: PBFT gives you finality, which means that if a block, a set of transactions, goes into the blockchain, that's final. You don't need confirmations on top of that. This has many advantages. One is that because you have finality, you don't have to care about confirmations, because
Starting point is 00:42:59 the block sticks the moment it goes on the blockchain. The second guarantee is that you don't have to keep the previous history. If you look at a Nakamoto-style consensus protocol: a node does PoW and submits a block, and you don't know whether it's going to be the final block until you receive a bunch of confirmations, so you have to store that block for a while. In the case of a PBFT-style consensus protocol, the moment you agree on a block it's final; you don't have to store blocks that were received previously. You can just take your state, update it, and you're done. It also tells you, to some extent, that state sharding is not that important when you have finality.
Starting point is 00:43:42 Great point, great point. So the way I imagine it now: let's say I'm a user of the system, not a node, and I'm sending a transaction to Xinshu. I create my transaction, and it looks like a fairly standard transaction; in my head it's like any Ethereum-style transaction. There's my account number, an amount, a gas price, code, stuff like that,
Starting point is 00:44:17 and data. And when I send this transaction to the network, based on my address it will be allocated to one of the shards. Once the nodes of that shard see my transaction, they'll participate in a consensus algorithm which is practical-Byzantine-fault-tolerance-based, so similar to what projects like Cosmos are doing. Then a block is created, that block has my transaction in it, and as soon as I see a block in that
Starting point is 00:44:54 shard with my transaction in it, I'm done; for all intents and purposes, as a user I can take that as a guarantee that my transaction is confirmed. Is that correct? At a high level it's something like that, but there are some potential issues with accepting just whatever comes out of a particular shard, because we still need some verification from the DS committee. When you see this block — we call it a microblock — how do we verify that more than two-thirds of the nodes agreed to it? That is one more
Starting point is 00:45:33 step we do in Zilliqa. What happens is that the moment you create a block at the shard level, it goes up to the DS level. Every shard proposes a block, which we call a microblock: the first shard gives a microblock, the second shard gives its own microblock, and they all go to the DS committee. The DS committee then aggregates these microblocks into what is called a final block, and the final block gives you the finality. So there's one more level up. And the DS committee, once it collects these two microblocks and wants to put
Starting point is 00:46:12 them in the final block, are all of the nodes of the DS committee going to verify all of the transactions in these blocks individually? No, they do not verify the transactions again; otherwise the DS committee would become the bottleneck and it would defeat the purpose of sharding. All the DS committee does is verify that enough signatures have validated the correctness of the transactions from that particular shard. So basically, if I'm a DS committee member at that point, I'm like: okay, I received this microblock from shard 2.
Starting point is 00:46:49 Okay, I'm going to check which nodes were assigned to shard 2: I assigned these particular 50 nodes to that shard. Then, in this microblock, do I have two-thirds of the signatures of these 50 nodes? Yes, yes. Tick, tick, tick, verify. Are the signatures correct? Okay, there are more than two-thirds of the signatures, and these signatures are correct.
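The supermajority check walked through here can be sketched as a set intersection plus a two-thirds threshold test. The function name is ours, and real DS nodes would of course verify the cryptographic signatures themselves, which is elided here:

```python
def accept_microblock(signers, shard_members):
    """Check, as a DS committee member would, that a microblock carries
    signatures from more than two-thirds of the nodes that were assigned
    to that shard. Signatures from nodes never assigned to the shard are
    ignored; actual signature verification is assumed to have passed."""
    valid = set(signers) & set(shard_members)
    return 3 * len(valid) > 2 * len(shard_members)
```

For a 50-node shard this means at least 34 valid shard-member signatures: 34 of 50 clears the strict two-thirds bar, while 33 does not, and padding a short list with outsiders' signatures doesn't help.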
Starting point is 00:47:09 Okay, so I'm going to accept this microblock. That's how I accept the microblock of shard 2, and similarly I accept the microblock of shard 1. Then I'm in a state where, in principle, I agree to including the microblocks of shards 1 and 2 into the blockchain. But I'm just one DS committee member, and the other DS committee members are running similar threads. So how do the DS committee members now come to consensus on whether these two blocks are final?
Starting point is 00:47:48 They actually run another round of the consensus protocol, to make sure the final block is also final, that it has finality. And that is also PBFT? Yes. Okay, okay. So basically, from the user perspective, I send a transaction to Xinshu,
Starting point is 00:48:11 and I have to wait for my transaction to get confirmed in, let's say, shard 2, and then for the block of shard 2 to get confirmed by the DS committee. So I have to wait for these two confirmations, and these two confirmations have strong finality: once the block is made in the shard and in the DS committee, I see these two blocks and
Starting point is 00:48:33 my transaction is confirmed, and Xinshu can give me the car or whatever I'm looking to buy. So this gives very strong security. On the other hand, applications can use some heuristics. For example, when you see your transaction has been accepted by one shard, you may believe there's a 99% chance that it will eventually go into the final block.
Starting point is 00:49:04 You can make assumptions like that, and for some applications, let's say payments, it's fine, because you just need some reserve to rectify the 1% of exceptional cases. That can give you a much shorter latency. Cool. So now I have a rough idea of Zilliqa and a better idea of how it works. Can you tell us what the scalability advantages of a design like that are?
Starting point is 00:49:39 Let's say we start with two shards, each with a hundred nodes or something; let's start with one configuration. As we add nodes, what happens to the throughput of the system? So, the high-level idea is that we are creating shards to ensure that you can process transactions in parallel. Now, you need more shards if you want
Starting point is 00:50:06 better parallelization. If you have only two shards, you can do two times the processing; with four shards, four times; with ten shards, ten times. So there are really two parameters here: one is how many shards you want, and the other is the size of each shard. Roughly speaking, if you increase the number of shards, your throughput will increase, and that is really a big benefit of Zilliqa which is not the case with many blockchains that currently exist: the more nodes that
Starting point is 00:50:39 join your network, the better throughput you will get out of it, and that's roughly linear in the number of shards, I would say. Now if you look at the other parameter, which is how many nodes you should have in each shard, that's a very crucial point, for many reasons. One specific reason is security. Imagine a shard where you put just one single node; then it becomes a centralized infrastructure, and this shard gets to decide which transactions to accept and which to reject.
Starting point is 00:51:12 So you want to have a larger number of nodes in each shard; it cannot be just one. Now the question is: what is the ideal number? There are some theorems that tell you that the more nodes you add to the shard, the more secure you become against Byzantine adversaries. It gives you a probability of failure, and the more nodes you add, the more that probability decreases — exponentially. So if you look at that formula, you will have this exponential kind of curve, and it tells you that roughly 600 nodes gives you a
Starting point is 00:51:46 one-in-a-million kind of probability. So you'd have roughly 600 to 800 nodes in each shard to guarantee security, and if you need better parallelization, you need more shards of that size. So the basic problem is almost a problem of sampling and statistics. I don't remember the name of the exact problem, but it's something like this. Let's talk about people, right?
Starting point is 00:52:27 All of us have different heights, and let's say there's a group of 10,000 people. Out of this group of 10,000, I'm going to sample subgroups of 100 people. Say the average height of the group of 10,000 people is 1.8 meters, and I take out only 100 people and measure the average height of those 100.
Starting point is 00:53:02 There is actually a chance that I might come across groups whose average height is two meters. Even though the population average over the 10,000 is 1.8 meters, randomness might mean that I select a group of a hundred people whose average height is two meters. That can happen, right? So my sample might not be representative of the larger group if the sample is too small, and as the sample size grows from 100 to 200 to 300 to 400, it becomes more and more representative of the bigger group.
Starting point is 00:53:41 So in some sense I feel the formula you're referring to, the exponential curve, is a reflection of this fact. We are assuming at some level that once the nodes solve the proof-of-work, there is a certain fraction of Byzantine nodes among them. Let's say we have 2,000 nodes and some fraction of them is Byzantine — say 200 of the 2,000. When we want to compose a shard, we sample this group of 2,000 nodes to create a subset of it. So if we create a subset with only 100 nodes, while 200 nodes are Byzantine inside those 2,000,
Starting point is 00:54:25 and we're sampling groups of just 100 nodes, there's a chance that we might encounter a group in which, like, 90 nodes are Byzantine; we might create a shard with 90 Byzantine nodes and 10 honest nodes. But if we increase the size of the sample from 100 to, let's say, 500, then because we are taking a larger set out of the 2,000, and the bigger set of 2,000 is majority honest, it increases the chance that my set of 500 is also going to be majority honest. It's something like that, right?
Starting point is 00:55:01 Right. Think about the extreme case: if we have 1,000 nodes and only one shard, and we make the assumption that less than one-third of the original 1,000 nodes are Byzantine, then it's 100% true that this shard has less than one-third Byzantine nodes, right? That's the extreme case. We don't need 100%; we need 99.999%, something like that.
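The sampling argument can be made quantitative. The sketch below models each shard slot as an independent draw that is Byzantine with probability 1/4 and computes the chance that at least one-third of the shard ends up Byzantine, the point where PBFT's guarantees break. This binomial model is our simplification (the team's exact formula may differ, and sampling without replacement would be hypergeometric), but it reproduces the shape of the argument: a 100-node shard fails a few percent of the time, while around 600 nodes the failure probability drops toward the one-in-a-million ballpark mentioned above.

```python
from math import comb

def shard_failure_probability(shard_size, byz_num=1, byz_den=4):
    """Probability that a randomly composed shard contains at least
    ceil(n/3) Byzantine nodes, where each slot is independently Byzantine
    with probability byz_num/byz_den. Exact integer arithmetic is used so
    tiny tail probabilities are not lost to floating-point underflow."""
    n = shard_size
    threshold = -(-n // 3)                 # ceil(n/3) Byzantine nodes is fatal
    good = byz_den - byz_num
    numerator = sum(comb(n, k) * byz_num**k * good**(n - k)
                    for k in range(threshold, n + 1))
    return numerator / byz_den**n          # correct even for huge integers
```

Comparing `shard_failure_probability(100)` with `shard_failure_probability(600)` shows the exponential drop that motivates the 600-to-800-node heuristic.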
Starting point is 00:55:31 So that's what we do. Okay, so the rough heuristic you have derived from something like this is a shard of between 600 and 800 members. Yes. And then, as nodes are added, you can increase the number of shards. Okay. So as far as I'm aware, Zilliqa already has a private testnet running, right? Yes, yes. So tell us, what are the performance characteristics of
Starting point is 00:56:02 this private testnet? So at this stage it's essentially running on the AWS EC2 cloud. We host a particular category of instances, virtual machines, and each virtual machine runs as a node inside our testnet. And then, I think two to three weeks ago,
Starting point is 00:56:25 when we scaled up to about 3,600 nodes in our testnet, we could achieve a peak capacity of close to 2,500 transactions per second. The interesting thing is that when we compare the results at 3,600 nodes with smaller configurations, we really see a linear curve in terms of throughput growth. That basically validated our original hypothesis that our throughput will grow linearly. That's interesting. And to me, 3,600 nodes is still a very small network compared to today's popular public
Starting point is 00:57:06 blockchains. We really want to take this as a start and grow it further. Our projection is that when we have 20,000 nodes, we should be able to reach about 8,000 to 10,000 transactions per second. And this is not to say that it's as straightforward as getting more money tomorrow and expanding our testnet; it's not going to be that straightforward. I think there is more technical innovation we will have to do along the way to achieve that. Okay, cool.
Starting point is 00:57:40 So, one of the final questions I have — there's a lot to talk about in the sharding model, certainly, it's complex technology, but we have to cover the other interesting part of Zilliqa, which is the smart contract language design, and we have limited time. Before we get there: what is the incentive structure for all of these nodes, including being a DS committee node and being a node processing transactions in a shard? Okay, so we should first look a little bit at why the incentive structure should be different in the first place, compared to, let's say, Bitcoin or Ethereum. And the reason is the consensus protocol.
Starting point is 00:58:22 If you look at Bitcoin's or Ethereum's Nakamoto consensus protocol, there is one leader and he proposes a block; he does the hard work, so he gets the reward. Now, in a PBFT kind of setting, the nodes in the shard all work together: they sign transactions, and together they propose a block. So the point is, you have to do it differently; it cannot be done just that way, the classical way.
Starting point is 00:58:51 What we came up with is: if you want to give a reward, let's give the reward to the leader of the DS committee and the leader of each shard. So for every microblock proposed by each shard, you take the reward and divide it among these people — all the leaders across shards and the leader of the DS committee. Now, what you might do with this model is that after a particular period of time you change the leader in each shard; this is how PBFT works. There's a leader, and you can replace that leader with some other leader. And then eventually you are able to reward every single member in the shard,
Starting point is 00:59:36 because every single member will eventually become a leader. So that's the incentive model. Ah, okay, so only the leaders of these blocks are able to get rewards. So if I'm a new node joining in, and I solve the proof-of-work and become a member of the DS committee, for the first few blocks after becoming a member I might not be the leader, and I won't make money at that time. But eventually, like a round table, it will circulate back, I will become the leader, and when I create a block I'm going to make money as a member of the DS committee. And similarly, if I'm a new node joining and I end up being assigned to shard 7, I
Starting point is 01:00:22 may not make any money at first, because other people are proposing blocks, but then my time to propose a block will come, I'll be the leader, and at that point I'll make money. This model has one advantage: you will have low variance. In a Nakamoto kind of consensus protocol, you're competing with everybody, so only after some time interval will you probably get a chance to become the leader and collect the reward. Here it's different, because the moment you get into a shard, if you stay there, you will have rewards, and it's guaranteed.
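The low-variance point can be illustrated with a toy comparison of round-robin leadership against a Nakamoto-style lottery. All names and parameters below are ours, purely for illustration:

```python
import random

def round_robin_rewards(num_nodes, num_blocks, reward=1):
    """PBFT-style: the leader role rotates deterministically, so over a
    full cycle every node earns exactly the same amount."""
    earned = [0] * num_nodes
    for block in range(num_blocks):
        earned[block % num_nodes] += reward
    return earned

def lottery_rewards(num_nodes, num_blocks, reward=1, seed=0):
    """Nakamoto-style: each block's proposer is an independent lottery
    win, so individual earnings are noisy over the same horizon."""
    rng = random.Random(seed)
    earned = [0] * num_nodes
    for _ in range(num_blocks):
        earned[rng.randrange(num_nodes)] += reward
    return earned
```

With 100 nodes and 1,000 blocks, round-robin pays every node the same amount, while the lottery produces a spread of earnings around that value — the variance difference described above.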
Starting point is 01:01:00 Very interesting. So, in terms of this whole sharding scheme, what are the largest points of uncertainty for your team — design choices, or places where things might go wrong? That's a very good question. I think at a high level this sharding scheme is relatively well validated. I would say the most challenging part is when we scale further. Now it's 3,600 nodes, and I don't think double that size will be a problem, but if we really go to something like 15,000 or 20,000, we may run into some bottlenecks which are not obvious at this moment. That's something we really have to discover by experiment, because many things can become a bottleneck: validation of transactions, sending the microblocks, sending the final blocks — all of these may become performance issues. So that's where we need to experiment, study it further, and try to figure out a better way to do things.
Starting point is 01:02:18 So that's my expectation. Amrit? Yeah, it's very similar to the point we discussed: you start with the smallest shard that you can have, let's say 600 nodes, because going below that you will have security risks. So you start with that shard, and as more nodes come in, you keep increasing the number of shards. At some point you'll have to come back and say: now I probably have to increase the shard size
Starting point is 01:02:45 rather than the number of shards, for instance. It's not very clear right now when and to what extent we should make those choices. So it's the trade-off between having larger shards and higher security — the trade-off between security and throughput, expressed as selecting the number of shards and the number of nodes in a shard. You're trying to optimize both security and throughput.
Starting point is 01:03:16 And the levers you can pull are the number of shards in your network and the number of nodes per shard. So how do you tweak these two parameters to get an optimum of security and throughput? And the other point is: how many nodes can we really support? We cannot support, let's say, one million nodes; it wouldn't work.
Starting point is 01:03:39 For any network, it won't work. If you look at Bitcoin or Ethereum, if they had one million nodes one day for some reason, you would have to propagate each block to the entire network, and that takes time. And if the inter-block time is not sufficiently large, you will have forks, because some people won't see the right block and they'll be mining on a different block, and then you will have forks. So what the maximum network size we should support would be is not very clear at the moment. So, Zilliqa's other big innovation could be in the paradigm of how smart contracts are executed inside Zilliqa and how they are programmed.
Starting point is 01:04:25 Give us an overview of your work and aims in that space. So, first of all, this is the design principle, and it's why we started with something new: we want a smart contract language that is easy to reason about, considering the fact that we have all kinds of issues with different kinds of smart contracts today. We need a language that allows you to reason formally, so that you could say: if you run this contract,
Starting point is 01:04:55 you're only going to see this, this, and this kind of behavior, and nothing else. That's the high-level idea. How you would do that is tricky, and the reason is that if you want to prove very strong properties about your program, it cannot be a very full-fledged language, because then it becomes very complicated to reason about.
Starting point is 01:05:18 So you want to have a language that's not Turing-complete, for instance. It could be a restrictive set, but one which would allow you to have formal proofs, while still being expressive enough to let you design interesting contracts. That is the high-level principle we want to follow.
Starting point is 01:05:39 And so, in your work on the smart contract language, is it early, or do you have some kind of specification and implementation of a language? We don't have a specification right now; we are currently writing it. By the end of this month we will have a formal document which will say exactly what a program in our smart contract language will look like and what kinds of properties you could prove about those contracts.
Starting point is 01:06:14 One interesting thing is that we also want to balance the security of the smart contracts against programmability. If we ask programmers today to write in that kind of automata-style language, it's very tough; the learning curve will be very steep. That's why we will also try to develop a Solidity-like syntax for programmers, so they can still write smart contracts in a familiar language like today, and we will have a compiler that compiles that syntax into this actual smart contract language. Of course, we will not be able to support 100% of Solidity's syntax, and we may add a little on top, but it's largely like that.
Starting point is 01:06:58 So I almost feel like we need a separate episode on the smart contract language when you're further along. Yeah, we agree; that's a topic by itself. And I think we are towards the end of the recording, so let's schedule another one specifically for the smart contract execution environment and languages. But before we wrap up for today:
Starting point is 01:07:27 the Zilliqa network has its own cryptocurrency, Zillings. I'd like to know your plans for Zillings, and when do you think people will get Zillings in their own hands? Yeah, we will issue Zillings as a utility token. That means holders of Zillings will be able to send transactions and run smart contracts on our platform. And we are planning to run a public contribution, a token generation event.
Starting point is 01:08:00 The exact date has not been fixed at this moment. It could be as early as the end of November, or as late as, let's say, early January. That's the rough timeline we're still thinking about at this moment. At this stage, we're running an early contribution round. Some people have shown very strong interest in our project, and we try to work with them to get their support in terms of funding, in terms of mining, and in terms of application development, various things.
Starting point is 01:08:31 So that's where we are in terms of fundraising. Do you have any details on what kind of emission curve Zilliqa will end up having? How many tokens will be issued in the public contribution, and how will the network go from there? So I think I can give a rough idea of the token allocation, and then Amrit can elaborate further on the emission, specifically the mining rewards. As we already mentioned, Zilliqa's technology has a feature: the more miners we have in the network, the more transactions we can process every second. That's why, in designing our token structure,
Starting point is 01:09:14 we give miners a slightly heavier slice of the pie. Basically, we plan to give at least 40% of our total token supply to mining rewards over, let's say, 10 years. And then we will give our supporters, either in this early contribution phase or in the public contribution phase, about 30% of our token supply.
Starting point is 01:09:35 And then the remaining 30% of the token supply goes to the company, Anquan, which developed this technology over the past two years. And Zilliqa Research will be the entity going forward to spearhead the R&D of Zilliqa and to engage the community, promoting Zilliqa to application developers to build high-throughput apps on top of it. Basically, that's the entity going forward to lead Zilliqa. And then some of the tokens will also go to funders, advisors, and agencies who are helping us. That's the rough idea. And I think Amrit can take it from here. Yeah.
Starting point is 01:10:16 So I just want to add a couple of sentences on the emission curve. Of course, as Xinshu said, we need more miners in the beginning, because our network will give you high throughput only when you have enough miners in it. And to attract those miners, a large portion of our emission curve will be in the first few years, let's say the first four years, so that people can come in and mine, and then it will decrease exponentially over the following years.
Starting point is 01:10:46 Because the idea is that, over time, there will be a high volume of transactions, so transaction gas fees will take over as the majority of the incentive for miners. That's super interesting. So, yeah, something like 30% for the company and Zilliqa Research, 40% of the token supply over 10 years for the miners, with it being front-loaded in the short term. Like, you can earn more in the first few years
Starting point is 01:11:19 than you can after five years and later. And 30% to people spread across two contribution periods: an early contributor, who is taking more risk, and a later contributor, who is probably taking less risk once much of the uncertainty has dissipated. Okay, that answers the question, and it makes sense. Cool. So with that, I'd like to thank both of you, Xinshu and Amrit, for being on the show. It was great to have this conversation with you.
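As a back-of-the-envelope illustration of the split just summarized, the sketch below shows a front-loaded, exponentially decaying mining emission. Only the 40/30/30 split and the roughly 10-year horizon come from the conversation; the total supply figure and the yearly decay factor are purely hypothetical assumptions.

```python
# Illustrative sketch of the token split and a front-loaded,
# exponentially decaying mining emission. Only the 40/30/30 split and
# the ~10-year horizon come from the episode; the total supply and
# decay factor below are hypothetical assumptions.

TOTAL_SUPPLY = 21_000_000_000          # hypothetical total supply
ALLOCATION = {
    "mining_rewards": 0.40,            # ~40% to miners over ~10 years
    "contributors": 0.30,              # early + public contribution rounds
    "company_and_research": 0.30,      # company and Zilliqa Research
}

def emission_schedule(total, share, years=10, decay=0.7):
    """Split `share` of `total` across `years`, each year emitting
    `decay` times the previous year's amount (front-loaded)."""
    weights = [decay ** y for y in range(years)]
    scale = total * share / sum(weights)
    return [w * scale for w in weights]

schedule = emission_schedule(TOTAL_SUPPLY, ALLOCATION["mining_rewards"])

# Sanity checks: the split covers the whole supply, and the early
# years dominate the mining emission.
assert abs(sum(ALLOCATION.values()) - 1.0) < 1e-12
assert schedule[0] > schedule[4] > schedule[-1]
```

Pushing `decay` toward 0 front-loads the schedule further, while `decay = 1.0` would make it flat; the idea in the episode is that as block rewards taper off, transaction gas fees take over as the miners' main incentive.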
Starting point is 01:11:55 Thank you very much. It was really great. Yeah, lots of interesting discussions. Thank you. And I'd also like to thank our listeners for tuning into the show. Epicenter Bitcoin is part of the LTB Network. You can find all of their shows at LetsTalkBitcoin.com. At Epicenter, we release episodes every Tuesday.
Starting point is 01:12:16 You can subscribe to our show on iTunes, SoundCloud, or your favorite podcast app for iOS and Android. You can also watch the video version of the show on our YouTube channel at YouTube.com/EpicenterBitcoin. If you're a loyal listener enjoying the show, we'd like to ask you for a favor. iTunes reviews are really important for podcasts and help new people find the show. We'd like to have more iTunes reviews and would appreciate it if you could leave a review on iTunes.
Starting point is 01:12:48 And of course, you can send us a tip to the address in the show description if you find we produce valuable content. So until next time, it was great having you, Xinshu and Amrit. Thank you.
