Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Alex Gluchowski: zkSync - A new Era for EVM-compatible zk rollups

Episode Date: August 4, 2023

Ethereum scaling solutions often resort to tradeoffs, sacrificing security or decentralisation in favour of scalability. However, zk rollups hold the potential of increasing throughput, while also inheriting the layer 1's security. This is achieved through zero knowledge validity proofs, which are published on Ethereum mainnet. The final hurdle remains the sequencer decentralisation. zkSync was designed around EVM-compatibility, offering custom scaling solutions through its hyperchain architecture.

We were joined by Alex Gluchowski, co-founder & CEO of Matter Labs, to discuss the zk rollup landscape, its bottlenecks, and what makes zkSync stand apart as the most popular rollup.

Topics covered in this episode:
- High-level overview of zero knowledge proofs
- ZK rollups
- zkSync Era
- ZK ecosystem taxonomy
- Rollup performance: bottlenecks & tradeoffs
- Bridging between zkSync hyperchains
- Data availability: Validium vs. Volition
- Governance & security layers
- Sequencer decentralisation

Episode links:
- Alex Gluchowski on Twitter
- zkSync on Twitter
- Matter Labs on Twitter

This episode is hosted by Meher Roy & Felix Lutsch. Show notes and listening options: epicenter.tv/507

Transcript
Starting point is 00:00:00 This is Epicenter, Episode 507 with guest Alex Gluchowski. Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution. I'm Brian Crane and I'm here with Friederike Ernst. And today we're going to speak with Alex Gluchowski. He is the co-founder and CEO of Matter Labs. Matter Labs is the company that's building zkSync, which is one of the most exciting and sort of largest ZK roll-up technologies that's, you know, looking to scale Ethereum by kind of maintaining all of Ethereum's trust assumptions and, you know, bring freedom
Starting point is 00:00:54 to people all over the world. So thanks so much for joining us, Alex. It's my pleasure. Thank you for having me. So we had you on like two years ago. And, you know, a lot has happened since then, including the ZK space having gotten, you know, lots of interest. It's still an area that's, you know, hard for a lot of people to sort of understand and wrap their head around, even people who work in crypto. So I felt like maybe we can start with, you know, just a very brief recap of, you know, what are ZK roll-ups and, you know, why is ZK such a great technology to scale blockchains?
Starting point is 00:01:35 Absolutely, let me try. So with blockchains, you know, we really observed a revolution: starting with Bitcoin, and Ethereum taking it to the next level with smart contracts and all the programmability of money, interactions, value. Like, really, to me, it's
Starting point is 00:01:55 the continuation of the internet revolution, but it's a quantum leap. It's a jump from Web 2.0 to Web 3.0, like adding value to the internet at the transaction level. The problem is that the very same properties that
Starting point is 00:02:11 make things like Bitcoin and Ethereum decentralized blockchains valuable, they also lead to the difficulties in making it available to a lot of people. There are some key things, in my opinion, that we can distill this value to. Among them are trustlessness,
Starting point is 00:02:31 permissionlessness, openness, the absolute inclusivity of these networks. And to achieve that, the blockchain world from the early days embraced the maxim of don't trust, verify. So essentially everyone has to verify all the transactions, all of the activity that's happening on chain. And you can think of blockchains as socio-economic systems, but in essence, what's happening under the hood, those are just computing systems. So Ethereum started with the narrative of being a world computer. And it's what
Starting point is 00:03:06 Ethereum really is if you look at it from a computer science perspective. That means everyone has to redo all of the computations for everyone else, which leads to quadratic complexity of communication, storage and computation requirements. And it's just infeasible to bring it to the world. You know, like when you are scaling the Internet and adding a value layer onto the Internet, you can't rerun the computation of all the other servers, like redo everything that everyone else has done.
Starting point is 00:03:44 So, like, this is a fundamental limitation which people have tried to solve with different trade-offs, always leaving some parts of this value proposition severely damaged. Like, either you give up decentralization, or you give up security, or you give up some other important properties. And it was not until zero knowledge proofs appeared that we found a fundamental solution to this. So, like, earlier the community came up with some really ingenious ways to
Starting point is 00:04:17 make these trade-offs in a limited way. We experimented with things like state channels, payment channels, Plasma, which then transitioned into optimistic roll-ups. All those things were important steps on this journey, but the ultimate destination is zero knowledge proofs. To explain why: zero knowledge proofs, or more precisely succinct zero knowledge proofs, or SNARKs, would in fact be better called proofs of computational integrity. They allow you to verify an arbitrary amount of computation very cheaply. It doesn't matter how much original computation you do, how much it would take
Starting point is 00:05:06 you to naively recompute; you can let someone do the hard work for you and then only present you with a final proof, which is going to be a short file, like one kilobyte or maybe a few kilobytes of data, that you can process using very simple arithmetic operations and come back to a result, whether it's true or false.
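The asymmetry Alex describes, heavy computation on the prover's side versus cheap verification on ours, can be illustrated with a deliberately simple toy. This is not a SNARK, and it hides nothing; it only shows the "prove expensively, verify cheaply" pattern: finding a factorization is slow, but checking a claimed factorization is a single multiplication, no matter how long the search took.

```python
# Toy illustration (not a real proof system): the prover does the heavy
# work; the verifier checks a short certificate with one multiplication.

def prove_factorization(n: int) -> tuple[int, int]:
    """Prover: expensive trial-division search for a non-trivial factor."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return (p, n // p)          # the short "proof"
    raise ValueError("n is prime; no non-trivial factorization exists")

def verify_factorization(n: int, proof: tuple[int, int]) -> bool:
    """Verifier: constant work, regardless of how hard the search was."""
    p, q = proof
    return 1 < p < n and 1 < q < n and p * q == n

N = 1_000_003 * 1_000_033               # prover-side work grows with N
proof = prove_factorization(N)
assert verify_factorization(N, proof)   # verification cost stays tiny
```

Real succinct proofs generalize this asymmetry to arbitrary programs: the certificate stays a few kilobytes even when the original computation is enormous.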
Starting point is 00:05:28 And the beauty of it: you can combine various zero-knowledge proofs recursively, so you can do a lot of computation in parallel, and then, like, verify them and produce a proof that you verified some proofs, then verify these proofs of proofs, and so on,
Starting point is 00:05:46 until you get to this one single proof which attests to the integrity of all the computation that you managed to pack in there. And then you settle this on something like Ethereum as a global settlement layer, a global layer of consensus where we all agree, okay, this is the latest state. And this enables us to scale blockchains,
Starting point is 00:06:06 basically indefinitely. And just let's say if I'm to verify all of these proofs, then I need to know what is being proved
Starting point is 00:06:18 or not. So if there's like a huge chain of all kinds of computation, I still need to know, like, you know, all of the things you are computing,
Starting point is 00:06:25 even if I don't have to do the computations. You don't have to know all of these things. You only need a commitment. Like, in cryptography and in blockchains, we have a really nice primitive called a commitment,
Starting point is 00:06:42 where you can have a single hash that is a fingerprint of multiple things. Like, usually we pack it in a Merkle tree: you have a lot of leaves at the bottom of the Merkle tree, and then you have one hash, the root hash of the Merkle tree, which is an unambiguous representation of all of the data that is committed in the tree. If you change one of the leaves, this hash will necessarily change, and it's really, really hard, computationally infeasible, to fake it, to find some hash that will correspond to the set that you want.
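The Merkle-tree commitment described here is easy to sketch. Below is a minimal illustration, assuming SHA-256 and a power-of-two leaf count, and omitting the padding and domain separation a production tree would need: one root hash commits to all leaves, and a short sibling path is enough to prove any single leaf's inclusion.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a power-of-two list of leaves up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])            # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(root: bytes, leaf: bytes, index: int, path: list[bytes]) -> bool:
    """Recompute the root from one leaf plus its sibling path."""
    node = h(leaf)
    for sibling in path:
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == root

accounts = [b"alice:10", b"bob:25", b"carol:7", b"dave:3"]
root = merkle_root(accounts)
proof = merkle_proof(accounts, 1)
assert verify_inclusion(root, b"bob:25", 1, proof)
# Changing any leaf necessarily changes the root:
assert merkle_root([b"alice:10", b"bob:26", b"carol:7", b"dave:3"]) != root
```

The verifier touches only one leaf and log-many siblings, which is exactly the "partial knowledge" property discussed next: you check your own account against a fingerprint of the whole state.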
Starting point is 00:07:15 So in this regard, you don't have to know all of the things that are happening computationally there. You only have to know that whatever you are verifying has a subset which is of interest to you. So, like, I was recently thinking that the best way to describe zero knowledge proofs in the blockchain context would be to call them not zero knowledge proofs, but, like, partial knowledge proofs, where you only look at things that are important to you, but you still have the full picture, and you know that everything else that you currently don't care about is also still correctly verified. So here's a good example to intuitively understand this. And you can calibrate me, to let me go deeper into tech or stay more high level, in this interview. But an intuitive understanding for people out there would be: imagine you're receiving a payment on PayPal or Venmo or your bank account. You will see that your account
Starting point is 00:08:19 balance has increased. You might want to see who this payment is coming from. And you don't care about all the other accounts in the world. You still want to be sure that all the other accounts, at least in your bank, are correctly managed, that all the other payments are done with high integrity. Because if that's not the case, maybe your account is increased
Starting point is 00:08:42 by $1 million, or all the other accounts are increased by $1 million, and so the bank is really insolvent and it won't be able to pay you this money. When you go to the shop with your credit card, you won't be able to make the payment. So you don't care about those computations, but you still want them
Starting point is 00:08:56 to be correct. So, like, zero knowledge proofs would allow you to verify the integrity of all the other payments without having to care about them. And the way we implement zero knowledge proofs in the world of blockchains, the way we apply them to blockchains today on Ethereum, is by building ZK roll-ups. And so we can talk about what a ZK roll-up actually is.
Starting point is 00:09:19 Yes, let's do that. So in a ZK roll-up, who produces the ZK proofs, and kind of what's the mechanism end to end? Sure. So a ZK roll-up is a layer two scaling solution. So instead of transacting directly on layer one, on the Ethereum mainnet itself, we say, let's create a parallel blockchain that is going to process transactions completely separately. So, like, we will have someone, some entity or maybe a decentralized body of entities, that will accept transactions and will sequence them in blocks. We'll call this body the sequencer. It can be centralized, run by one server, or it can be decentralized, run by a consensus of multiple validators. Doesn't matter. Let's just assume we have this one blockchain.
Starting point is 00:10:09 So the sequencer collects transactions, packs them into blocks, verifies that they're valid. If the blocks are invalid, they won't be able to produce the proofs. And after the blocks are complete, they do two things. One is they compute the zero knowledge proofs for all the transactions in the block,
Starting point is 00:10:22 and they produce a final proof that this block is complete. To make it practical, it probably still requires recursive proof generation. So we will split this block into many small chunks. We'll produce the proof for each of the chunks. Maybe we'll move heavier operations into specialized zero-knowledge proofs that are more efficiently verifiable than general purpose transactions.
Starting point is 00:10:51 We'll produce the proof of that. Then we will aggregate all these proofs together in one single proof of the block, which verifies all the logic that we need: that transactions were done correctly, that the users authorized them with their signatures, that smart contract logic executed correctly, and then all the hashes, like basically all the computation, right? We have this one proof. And then this proof is submitted on layer one along with the new root hash of this block. So we don't publish the entire state, we don't publish all the transactions, we just say,
Starting point is 00:11:25 here is the new state, here is the new commitment to the state, the new root hash, and here's a proof that this new root hash is indeed a valid transition from the previous root hash, the previous commitment to the state, which is recorded on layer one, to this new root hash. And layer one, the smart contract on Ethereum, can actually verify this proof, come to the conclusion objectively, by the nature of pure math verification, that the proof is correct, and make the state transition. And then we need the second thing. We need to make sure that even though the state transition
Starting point is 00:12:00 is now verified on layer one and the transition is made and we have the new root hash of the state, everyone else knows what actually happened in this block, specifically with regard to what the new state of all the accounts in the block is. Because if people don't know it, if external observers, the users or the other validators, don't know what changed in this state,
Starting point is 00:12:28 they will not be able to do anything with it. Like we will enter a state which is committed on Ethereum that no one except for whoever made this transition can actually process. I cannot prove to you that I have money, I cannot access this money, I cannot withdraw, I cannot transact on this chain. So it would be like a frozen state.
Starting point is 00:12:49 So in order to solve this, we have to publish something to the users, to everyone who wants to observe. We need to publish some piece of information that will allow them to reconstruct the state or to reconstruct the changes that happened from the previous known state. So there are two ways to do it. One is you publish all the transaction inputs and you just make it available to everyone. And then people can recompute these transactions and reconstruct the state, which is something the optimistic roll-ups do. And the second approach is that you publish the actual differences
Starting point is 00:13:26 for each storage slot that has been changed in this roll-up block, you publish the new state of the storage slot. Either way, the observer can now reconstruct the state and they can work with the new block. But the trick is we have to publish it on some really strong censorship-resistant system, and the most censorship-resistant one we have is Ethereum itself. So we kind of use the Ethereum network
Starting point is 00:13:54 to make this data available. We call it data availability, and the ultimate vision is for Ethereum to be the settlement and the data availability layer for roll-ups, making roll-ups the center of the Ethereum roadmap and really the place where most of the activity on Ethereum will happen.
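A toy model of the end-to-end flow just described, with the SNARK verification stubbed out as a placeholder (the real on-chain verifier is the whole point of the proof system): the L1 contract stores only the state commitment, the sequencer submits a new root plus a proof and the state diffs, and an observer reconstructs the new state from the published diffs alone. All names and structures here are illustrative sketches, not zkSync's actual interfaces.

```python
import hashlib

def state_root(state: dict[str, int]) -> str:
    """Stand-in commitment; a real system uses a Merkle root over storage slots."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

class RollupContractOnL1:
    """Toy L1 contract: it stores only the state commitment, never the state."""
    def __init__(self, genesis_state: dict[str, int]):
        self.root = state_root(genesis_state)
        self.published_diffs: dict[str, int] = {}

    def verify_proof(self, old_root: str, new_root: str, proof: bytes) -> bool:
        # Stub standing in for on-chain SNARK verification.
        return proof == b"valid"

    def submit_block(self, new_root: str, proof: bytes, diffs: dict[str, int]) -> None:
        assert self.verify_proof(self.root, new_root, proof)  # validity proof
        self.root = new_root
        self.published_diffs = diffs      # in reality: calldata or blob data

# Sequencer side: execute transactions off chain, compute diffs and a proof.
state = {"alice": 10, "bob": 5}
contract = RollupContractOnL1(state)
diffs = {"alice": 7, "bob": 8}            # alice paid bob 3
contract.submit_block(state_root({**state, **diffs}), b"valid", diffs)

# Observer side: reconstruct the new state from the published diffs alone.
observer_state = {**state, **contract.published_diffs}
assert state_root(observer_state) == contract.root
```

Note the two requirements from the conversation appear separately: the proof makes the root transition valid, while publishing the diffs keeps the state reconstructible, so accounts are never "frozen" behind a commitment nobody can open.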
Starting point is 00:14:14 Cool. There's a lot to unpack here. Let me maybe recap this. So basically, from a technical point of view, what a ZK roll-up requires is fourfold. So you need regular checkpoints on L1 that can't revert. You need a proof of correctness from checkpoint to checkpoint. You need data availability on L1, either directly via calldata or, kind of in the danksharding world, in the sidecar blobs. And the fourth thing is you need forced inclusion,
Starting point is 00:14:54 and basically that the next checkpoint is only valid if forced inclusions are provably part of the next checkpoint. So we will go into this in just a little bit, to kind of discuss the recent developments in kind of the Validium world and so on, to kind of look at the entire spectrum of shades of L2. But maybe before that, let's quickly talk about the newly launched zkSync 2.0, because that came out recently. So now that we kind of know how ZK roll-ups function
Starting point is 00:15:28 theoretically, let's get down to the meaty part. So what's new with zkSync 2.0? Well, zkSync 2.0, which we call zkSync Era, is not such a recent development. We launched it half a year ago, live on mainnet.
Starting point is 00:15:45 And it was the very first ZK EVM, the very first ZK roll-up with generic programmability that could execute contracts written in Solidity for the EVM. So you could take a contract that works on Ethereum and you just deploy it, and it works out of the box in most cases.
Starting point is 00:16:06 All the tooling works, or, like, not all the tooling works, but the critical pieces work, like the Web3 API, the testing, the deploying, the access to it, like logs. Everything follows the EVM programming model. And, yes, since then, we experienced a lot of growth on the platform. And it's, in fact, now the most popular L2 on Ethereum.
Starting point is 00:16:34 We had more transactions in the last 30 days than any other L2. I think 25 million transactions, with Arbitrum following with 24 million, and Optimism at 16 million, and everyone else way below. And it's currently the third L2 by TVL, and it's also growing, fluctuating, but the DeFi component is growing very strongly, and we have more and more projects launching on Era. So Era, yeah, it's a big step for Ethereum. It's
Starting point is 00:17:29 what people were waiting for for many years, and thought it would take many more years to arrive in full form. Let's kind of look at the taxonomy of different ZK roll-ups, right? So basically it's a space that has grown, you know, leaps and bounds in recent years. And even on this podcast, we recently interviewed Jordi and David from the zkEVM team at Polygon. We spoke with Eli from StarkWare, with Zac and Joe from Aztec. But there's also other people we haven't had on the podcast yet, like the Scroll team, the Linea team and so on. Do you have a taxonomy in your head for these different ZK roll-ups? So basically, what kind of buckets do they fall into? Vitalik came up with this post introducing different types of zkEVMs. I'm not sure it will be relevant
Starting point is 00:18:10 in the midterm; like, it's probably still relevant now, but we're in a very early experimental phase. So in that post, he broke it down into essentially degrees of closeness to the original Ethereum specification. Like, how far do we deviate from the pure native layer-one EVM? Different zkEVMs, you know, like, some of them embraced the bytecode-native approach, some of them embraced
Starting point is 00:18:44 compiling, some are somewhere in between. Some of them are trying to be as close to layer one as to replicate the full blocks of layer one and let storage be kept in exactly the same format. So, looking at this, I think this classification is going to disappear in the near future, because the productivity of zero knowledge proofs is still accelerating at a really high pace. And so the performance characteristics will allow us to basically verify arbitrary computations. We will be able, in a very short term,
Starting point is 00:19:21 we will be able to run, like, a zkEVM, like specialized programs compiled from Solidity to a ZKVM which is optimal for proving, or prove the bytecode-native EVM, and maybe even have storage proofs for the exact same format as Ethereum. Or we will be able to run native code written in Rust or C++, all of that on the same platform. So you as a builder will have a choice of what type of computational environment you want.
Starting point is 00:19:59 For some applications, you might want to run a bytecode-compatible EVM. There are use cases where you want to deploy the contract which has exactly the same address as on all the other EVM chains. So you have no choice but to deploy native bytecode there. But for some other use cases, you want a 100,000 TPS DEX optimized for a very specific operation. You can't really do it in the EVM; you can't do it because your sequencer is going to be the bottleneck. So you will probably write a specialized, app-specific application in Rust that just does that,
Starting point is 00:20:45 and you might want to deploy it as a ZK roll-up, because you still want all the benefits of interoperability and security that you derive from a ZK roll-up, but you will probably not do it in the EVM. And you still want a platform that enables you to incorporate all of these designs. And so I think the real taxonomy is going to be this architecture of interoperability between the chains. It's something we recently came up with in the publication of the ZK Stack, which allows you to deploy your own chain. And the architecture of hyperchains and hyperbridges that connect them will allow you to have these different types of infrastructure deployed in an interconnected way.
Starting point is 00:21:33 So I think that's going to be, like, one major classifier: how different roll-up ecosystems approach this application-specific design and interoperability between them, like whether or not they can make it seamless and native. And the second big classification parameter that I would pick is the treatment of data availability. Do you publish the transaction inputs, or do you publish the state diffs? In what way? Do you enable Volition, or is it the same
Starting point is 00:21:59 single data availability model, where you can only be a ZK roll-up or only be a Validium? This is going to be a big, important difference. So those two things, I think, matter much more than the degree of compatibility, because the compatibility is going to be solved first. Let's talk about the first complex of questions first, and then kind of the Validium spectrum later. So we had Eli on recently, and he was very adamant
Starting point is 00:22:33 And basically they don't offer, basically in terms of kind of how much computation you can do, he says that Cairo is doing much, much better. But it sounds like you're saying that in principle, you can kind of mesh these two approaches. Did I get that right? Or did I misunderstand? This is absolutely correct.
Starting point is 00:22:52 Yes. So I agree with Eli that the EVM is not the most performant platform, especially for sequential operations. So you could construct an EVM that verifies transactions in parallel. And then even though the performance of these transactions is not at the limit of what the current compute enables, you kind of don't care, because what you care about is the cost per transaction. And if the cost is much, much less than what the user is willing to pay, think of like payment applications,
Starting point is 00:23:29 like, you know, some trading where your margin is like a few dollars per transaction, but you are only paying 0.001 cents, you probably don't care. Like, what you care about is security, you care about interoperability, and you care about time to market.
Starting point is 00:23:47 And if time to market is important and you're building smart contracts, you probably want to tap into a rich existing ecosystem. You want to be building on something like Solidity that has a lot of libraries, a lot of frameworks, a lot of tooling, a lot of people who know how to build with it, because it's already hard to hire Solidity developers. How hard would it be to hire people who have to learn some specialized language? So you want this rich, broad ecosystem to be building your stuff on. However, for other
Starting point is 00:24:21 applications, like, say, this 100k TPS exchange, you absolutely need the ultimate compute. Maybe you don't even want to run it inside a virtual machine. Like, no matter if it's Cairo or the EVM or whatever, maybe you want, like, really a bare-metal implementation of your specific roll-up that can settle transactions really, really fast, because all of them are sequential, because they are trading on the same trading pair. You want to run them as fast as possible,
Starting point is 00:24:59 what the processor enables. So you get to this ultra-high-frequency trading with tens or hundreds of thousands of TPS. So yes, I believe the future is with this differentiated spectrum of approaches. Cool. That's very helpful. I want to understand this a little bit more. So let's say you have the EVM today on Ethereum,
Starting point is 00:25:23 right? With the EVM basically, you know, there are some performance limitations. I mean, for example, one is because of the consensus: you have to, like, propagate the blocks. Like, you know, it requires a certain amount of block time. You can't make them too massive. Another thing is maybe the kind of computation of the state in the EVM. Now, we have the ZKVM here. What becomes the bottleneck here, right?
Starting point is 00:25:54 Because you have a single sequencer that you send a transaction to? Is the bottleneck basically the speed at which you're able to create the proofs for all the transactions? Or, I mean, first of all, how does the throughput and scalability of, you know, one single ZKVM compare to Ethereum mainnet? And what are the bottlenecks there? That's a really great question. So we will have a couple of bottlenecks. The first, intermediate bottleneck is going to be the speed of your sequencer.
Starting point is 00:26:27 The pace at which you can accept and execute transactions and compute the block, the intermediate block results and block hashes. This does not depend on, you know, whatever zero knowledge proofs or fraud proofs you're using. It does not depend on data availability. It's just basic computation and networking. And it depends on the architecture of your chain. Some chains will be decentralized,
Starting point is 00:26:53 and so you will have to tap into consensus, and you have to also make sure that your sequencing layer is fast enough, and your latency is good enough for your application. Some other chains might even be completely centralized, with a single server that can respond in like 20 milliseconds' time. And for some use cases, like for this super-high-frequency trading, this would be the case; they will likely prefer this.
Starting point is 00:27:20 But they might still want the full validity and security derived from Ethereum and open, you know, being not an isolated chain, but a part of the bigger ecosystem with all this rich liquidity. So that's your first bottleneck. And as you can see, like the various tradeoffs
Starting point is 00:27:37 lead to various different choices. Your second tradeoff, or bottleneck, is going to be the data availability: how you store data availability, how you manage data availability. If you are a ZK roll-up, then you are competing with all the other roll-ups on Ethereum, ZK and optimistic,
Starting point is 00:27:56 for the block space, because the block space of Ethereum is limited. It's kind of a zero-sum game. If some roll-ups get more data, there will be less data availability bandwidth available to other roll-ups, which will then just lead to higher prices for this data bandwidth.
Starting point is 00:28:16 So this is a big problem, and the only way to solve it in the short term, to make the throughput really unlimited, is to use external data availability, like off-chain data availability. So you will still have ZK roll-ups, and maybe every chain should have a part of its accounts on the ZK roll-up, where all the data is published on Ethereum and they enjoy high security. But for the accounts that do not need high security, or are willing to take some risks, you want to publish data availability externally. With zero knowledge proofs, it makes a lot of sense.
Starting point is 00:28:56 It makes a lot less sense with optimistic approaches, and we can discuss why. But for validity proofs it works. So if you have this kind of elastic extension of data availability bandwidth, then this bottleneck is going to be solved for you. And the third bottleneck we have is the actual ability to generate zero knowledge proofs. And this is the least of our problems, because proof generation is relatively cheap. And we can do it on very broadly available hardware.
Starting point is 00:29:32 So last week, we announced an upgrade called Boojum, a new proof system that is going to be embraced by zkSync Era. This is a proof system we've been working on, on and off, for a couple of years. It's based on a construction called RedShift, which is just a combination of PLONK, with PLONKish arithmetization, and the FRI polynomial commitment scheme. And the implementation we have today
Starting point is 00:30:01 only requires six to 16 gigabytes of RAM, depending on the configuration; you can choose parameters. Like, let's say 16. 16 gigabytes of RAM on a GPU is basically consumer-grade hardware. You can do it on gaming machines, you can do it on any cloud provider. They have plenty of GPUs available
Starting point is 00:30:21 for machine learning and other purposes that you can utilize. So we will be able to prove all of the world's Web3 transactions with ZK using something like this. So that is not really a bottleneck. So data availability and the sequential throughput,
Starting point is 00:30:39 or the sequencer throughput, are really the two bottlenecks, and we can talk about the solutions. The solution to the sequencer throughput is to have many chains. There is no alternative to this. We will not be able to handle all of the world's value transactions
Starting point is 00:30:56 on a single monolithic chain, because it's just physically infeasible. Like, you cannot imagine all of the world's internet services running on a single server or even in a single data center. That doesn't scale. You need to be able to add more and more on demand.
Starting point is 00:31:13 It also won't work because, no matter what configuration you take, you're making some trade-offs. Latency versus decentralization. Like, if you are a decentralized consensus, then you will naturally be able to handle less throughput and less, like... When you say
Starting point is 00:31:34 decentralization in this context, what are you talking about? Decentralization of the sequencer. Okay. If you want to decentralize the sequencer, as opposed to having one. Okay, that makes sense. Yeah. So one validator will be able to handle a really high load with very low latency. If you want to have hundreds of validators, then you naturally have to make the data available to all of them. You have to reach consensus, which requires at least two rounds of communication between all of them. You probably want them to be, like, broadly geographically spread,
Starting point is 00:32:08 not in the same country, not on the same continent. So, like, the communication loop becomes larger. Like, you're not getting anywhere. Like, you will be in the region of, like, one second, order of magnitude of one second, maybe half a second. Because then it's basically, you run something like Tendermint or something like that. Correct. Tendermint, or HotStuff. Like, even with the newest modification of HotStuff,
Starting point is 00:32:30 you need two rounds of consensus, two rounds of messaging for the consensus, which means you have to send the message twice between different continents. And you're just limited by the speed of light and the speed of communication of those networks, which will determine the latency of your consensus,
Starting point is 00:32:49 which you can eliminate if you're using a localized data server for the centralized sequencer, right? So those trade-offs are impossible to make once and for all, one-size-fits-all. So you will need multiple chains. And so the question is, how do we incorporate multiple chains in an architecture where they can still seamlessly, trustlessly, and capital-efficiently communicate with each other? And this is where the hyperchain architecture comes into play. This is why we've been working so hard on making this architecture available in the zkSync network,
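The latency argument above can be put as a back-of-the-envelope calculation, under stated assumptions: validators roughly 10,000 km apart, signals at about two-thirds of the speed of light (typical for optical fiber), and two rounds of consensus messaging, as in Tendermint or HotStuff variants. The physics alone puts a geographically decentralized sequencer in the hundreds-of-milliseconds range before any computation happens.

```python
# Physics-only lower bound on decentralized consensus latency.
C_FIBER_KM_S = 200_000        # ~2/3 of the speed of light, typical in fiber
DISTANCE_KM = 10_000          # rough intercontinental distance
ROUNDS = 2                    # two rounds of consensus messaging

one_way_s = DISTANCE_KM / C_FIBER_KM_S          # 0.05 s one way
round_trip_s = 2 * one_way_s                    # 0.1 s per messaging round
consensus_floor_s = ROUNDS * round_trip_s       # lower bound before compute

print(f"{consensus_floor_s:.2f} s")             # ~0.2 s from physics alone
# Routing, queuing and validation come on top of this floor, which is how
# real deployments land in the half-second-to-one-second range mentioned
# above, while a single nearby sequencer can answer in tens of milliseconds.
```

This is why the conversation frames latency versus decentralization as a genuine trade-off rather than an engineering gap: the floor cannot be optimized away without shrinking the validator set's geography.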
Starting point is 00:33:27 And we would love to make it available to other roll-ups as well, but unfortunately, the way Ethereum is architected, it's not going to be as seamless between different roll-up ecosystems. You will be able to pass messages between, say, one of the zkSync hyperchains and Starknet itself or one of its L3s. So there will be some degree of trustlessness, but it's not going to be as seamless in terms of asset movement.
Starting point is 00:34:02 To move an asset from one roll-up to another, you will either need to use external liquidity, or you will have to go through layer one and actually pass this asset from the bridge contract on Ethereum which belongs to zkSync to the bridge contract on Ethereum which belongs to some other roll-up, which will make Ethereum layer one itself the bottleneck of this bridging,
Starting point is 00:34:27 which will not really scale when we're talking about millions or billions of users. But within the ecosystem, inside the zkSync hyperchain network, it will be possible to an arbitrary degree. So the way you can imagine hyperchains is just like domains for email. You have Brian at Epicenter or Gmail or whatever.
Starting point is 00:34:52 I have, like, Alex at Matterlabs.com. You can send an email from any address on any domain to any other address on any other domain, and you don't have to care about it. The communication is the same. You just do one click, and in a few seconds or minutes the message arrives on the other side. And it's end-to-end encrypted, you know, it's trustless.
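Coming back to the consensus-latency point made a few minutes earlier, a rough back-of-the-envelope sketch shows why a geographically distributed sequencer lands around the half-second-to-one-second range. All figures here are illustrative assumptions, not measurements of any real network:

```python
# Back-of-the-envelope latency for geo-distributed BFT consensus.
# Distance, fiber speed, and round counts are rough illustrative assumptions.

# One-way fiber latency between continents is dominated by the speed of
# light in glass (~200,000 km/s), plus routing overhead on top.
distance_km = 10_000            # e.g. Europe to US West, along the fiber path
speed_km_per_s = 200_000        # light in fiber, roughly 2/3 of c
one_way_s = distance_km / speed_km_per_s      # 0.05 s per hop

# HotStuff-style consensus needs at least two rounds of messaging,
# i.e. two round trips between geographically spread validators.
rounds = 2
consensus_floor_s = rounds * 2 * one_way_s    # 0.2 s theoretical floor

print(f"one-way hop:     {one_way_s * 1000:.0f} ms")
print(f"consensus floor: {consensus_floor_s * 1000:.0f} ms")
# With routing, validation, and signature-aggregation overhead on top,
# the half-second to one-second range quoted above is plausible.
```

A single co-located sequencer skips the intercontinental round trips entirely, which is the trade-off being described.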
Starting point is 00:35:18 How does this work technically? Because basically, one zkSync chain would have to know the state of the other chain to actually make this happen, right? So basically it's kind of like the old problem of having shards and how they communicate, right? Yes. If you want to learn the real deep technical details of how it works, I highly recommend the Slush paper, where the Slush team has done extensive research and
Starting point is 00:35:49 documented really well how this will work. But at a high level, yes, the two chains that transact between each other need to have access to each other's state. This is already possible with all roll-ups on Ethereum. The moment you finalize a state on one roll-up, you can use storage proofs to access the state of any other roll-up, because you can imagine that Ethereum kind of unites all of the states of all the roll-ups in one huge Merkle tree, where the Ethereum state tree is
Starting point is 00:36:25 the top of this huge Merkle tree. So what you do is, on the one chain, you commit a message with a destination of some other chain,
Starting point is 00:36:38 and in this message you say: here, I am burning, let's say, 100 ETH on this chain. The smart contract takes care of this. And I promise
Starting point is 00:36:53 that I burn it in favor of this destination on some other hyperchain, within some period of time. And you make this commitment, you store it in storage. The other chain can then read the storage and say, oh, I see that this amount was burned. And here is a very, very important part. Because both chains run the exact same subset of circuits,
Starting point is 00:37:18 the exact same subset of cryptographic enforcement of the rules, I can trust this other chain's message that this commitment, this action, has actually been performed, that they actually burned this hundred ETH. I can trust it blindly, because that chain's
Starting point is 00:37:39 validity is enforced by exactly the same circuits as my own chain, as I myself as a chain. So I can trust that chain as much as I can trust myself. So I can easily mint this hundred ETH on this chain through some system contract
Starting point is 00:37:59 that has access to minting. And it's probably going to be a part of your bridging; your tokens will have to be aware of it. Either you use system contracts for tokens, or you use specialized tokens that know about this functionality and can mint these assets, and then you can use these assets natively.
Starting point is 00:38:18 So you need to be part of the same state, you need to have the same circuits, so they can trust each other. And then the third key component of this architecture is that all of these chains have to share a single bridge on Ethereum that holds assets for them. Because if you don't share the same bridge that holds the assets, then you can still trust the other chain, you can be sure that they burned some 100 ETH, and you can mint it yourself inside your chain, but you will never be able to withdraw this hundred, because they don't belong to you.
Starting point is 00:38:57 They're not locked in your bridge on Ethereum. Is that last one the main reason why you wouldn't be able to get the same kind of convenience when it comes to bridging to Starknet or some other chain? Precisely, yes.
Starting point is 00:39:13 There would be a question of whether you can trust the Starknet contracts, because it's a separate ecosystem that is managed by entirely different governance. You might say, you know what, I will have to write a specialized contract, a custom user contract that says: I trust Starknet.
Starting point is 00:39:29 Maybe other contracts don't, but I do trust them. So I can believe that this message is real. But that's actually the first problem. Even if we could manage the ownership of this 100 ETH, this contract would not be able to persuade the system contract that we should mint 100 ETH, because the system contract cannot know that Starknet is not compromised by its governance.
Starting point is 00:39:57 Because the system contract basically is able to verify computation that's done using specific circuits, and Starknet has different circuits, so it cannot directly verify that. It's just a different system, not governed by the same upgradability pattern, by the same code. So the system contract can only trust other system contracts of trusted origin. It cannot trust user code. Because if I can deploy an arbitrary smart contract which says, mint me $1 million,
Starting point is 00:40:29 trust me, bro, it's honest because it's kind of from the other chain, the system contract will say, you know, maybe, maybe not. So it has to be a message that comes from the other system contract that actually burned this 100 ETH.
Starting point is 00:40:45 But yes, then the second part is, you're right: even if we solve that on the system level, if we make an agreement and say zkSync trusts Starknet, Starknet trusts zkSync, we still cannot pass the assets, because they are locked in different underlying bridge contracts. How does the hyperchain know which other chains are in the hyperchain architecture and to trust them?
Starting point is 00:41:11 Right. I mean, they kind of need a hyperchain ID that says, I'm hyperchain 72, you can trust me. But is there any way of faking that, or do you need a centralized registry? How do you go about that? You have a smart contract on Ethereum that will hold the Merkle tree
Starting point is 00:41:30 of the hyperchains. Let's call it the shared prover or the shared verifier. In this Merkle tree, all the state changes will only be authorized if they are coming from a trusted
Starting point is 00:41:46 hyperchain. And by trusted, I mean from a hyperchain with the circuits that the prover contract knows. Because the prover contract always verifies the proofs against some verification keys.
Starting point is 00:42:02 Like, when you verify a proof, you need to know what you verify. You need to have this commitment to the circuits, which we call the verification key. It's a small key, but it unambiguously identifies a certain cryptographic program.
Starting point is 00:42:19 One circuit always produces one specific verification key. So this bridge contract will have one single verification key for all the hyperchains. Any hyperchain that wants to make a state transition must satisfy this verification key, which means
Starting point is 00:42:37 that they are enforcing all the rules the way we all agreed they will be enforced. And this is also how you make sure that the tokens on each hyperchain are actually commensurate, right? Because basically I could be an evil hyperchain and could say, look, I have 100 ETH, I will burn it, I will reissue it on hyperchain 19, and then basically I burn some sort of fake ether and get real ether on hyperchain
Starting point is 00:43:03 19. Absolutely correct. So they have to share the circuits, maybe not all the circuits. The hyperchains are actually highly customizable. They are fully sovereign. You can choose your sequencer, you can choose your data availability model, you can choose your extensions, whatever you want, but there must be some really small subset of circuits that enforces this integrity, that enforces the asset treasury, so that really no one can mess with anyone else's assets. So this part must be common, and this is going to be the critical part to verify in this state transition. I think I get that part now.
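The shared-verifier gating just described can be sketched as follows. This is a hypothetical illustration, not zkSync's actual bridge contract: `verify(...)` stands in for a real SNARK pairing check, and all names are invented for the example:

```python
# Toy sketch of a shared verifier: one verification key commits to the
# agreed circuits, and a hyperchain's state transition is accepted only
# if its proof verifies against that single key. Purely illustrative.

SHARED_VERIFICATION_KEY = "vk-of-agreed-circuits-v1"

def verify(proof: dict, verification_key: str) -> bool:
    # Stand-in for real SNARK verification against a verification key.
    return proof.get("vk") == verification_key and proof.get("valid", False)

def accept_state_transition(chain_id: str, new_state_root: str,
                            proof: dict, roots: dict) -> bool:
    """Shared bridge/verifier: update a hyperchain's state root only if
    the proof verifies under the single shared verification key."""
    if not verify(proof, SHARED_VERIFICATION_KEY):
        return False
    roots[chain_id] = new_state_root
    return True

roots: dict = {}
good = {"vk": SHARED_VERIFICATION_KEY, "valid": True}
bad = {"vk": "vk-of-some-other-circuits", "valid": True}
print(accept_state_transition("hyperchain-72", "0xabc", good, roots))  # True
print(accept_state_transition("evil-chain", "0xdef", bad, roots))      # False
```

This is why a chain proven under different circuits (the Starknet example above) cannot be admitted: its proofs simply do not verify against the shared key.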
Starting point is 00:43:44 Let's maybe back up a little, kind of to the four requirements of a ZK roll-up. So as we talked about, you need regular checkpoints that can't revert, you need a proof from checkpoint to checkpoint, and then you need data availability. And the reason why you need data availability is that if there is no data availability, there is some sort of secret transaction that I can include in the block, and other people can no longer calculate the current state of the chain and can no longer build on it. And thereby I can effectively brick it, right? I can't steal funds,
Starting point is 00:44:22 Yeah, but I can make sure that no one else can do anything, because I have the secret transaction. This is why you need the data availability. There are validiums, which are basically ZK roll-ups without data availability. And this is kind of the
Starting point is 00:44:39 direction that Celo is going, as well as Polygon and so on. How do you think about this spectrum of solutions? And do you think maybe there's even a way of doing data availability optimistically? Because if you look at how much you spend on the checkpoints versus on the data availability. Maybe you can give us the exact numbers for zkSync?
Starting point is 00:45:07 But almost everything actually is for data availability rather than the checkpoints, right? Absolutely correct. So I'll start with this last question. Indeed, the absolute majority of the costs is going towards data availability. We currently have 30 to 50 cents per transaction, depending on the fluctuating gas price on Ethereum, across all roll-ups. The bulk of this cost is going to data availability, and it's not going to change even after EIP-4844, which will hopefully bring the costs down by a factor of 10x, maybe 20x.
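A quick back-of-the-envelope using the figures just quoted. The 90% data-availability share is an illustrative assumption; the discussion only says DA is the bulk of the cost:

```python
# Rough per-transaction cost split, using the figures quoted above.
# The 90% DA share is assumed for illustration, not stated in the episode.

tx_cost_cents = 40          # midpoint of the quoted 30-50 cents
da_share = 0.90             # assumption: data availability dominates
da_cost = tx_cost_cents * da_share           # 36 cents
proof_and_rest = tx_cost_cents - da_cost     # 4 cents for everything else

# EIP-4844 (blob transactions) is hoped to cut DA costs roughly 10-20x.
for factor in (10, 20):
    new_da = da_cost / factor
    print(f"{factor}x cheaper DA -> ~{new_da:.1f} cents of DA per tx")
# Even after 4844, DA can still dominate, since the zero-knowledge
# proving share per transaction is tiny (sub-cent), as noted below.
```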
Starting point is 00:45:45 But even the remaining one cent is going to be mostly data availability cost, because the zero-knowledge proof computation part is tiny. It's like 0.0-something. It's hard to calculate exactly because there are many components in the system. You can benchmark something in a vacuum, but in isolation from the real system,
Starting point is 00:46:07 it's not going to be indicative. But we know for sure that the data availability is the larger part. So if you want to lower the costs, you have to seek these external data availability solutions, building something like validiums or volitions. We'll talk about that in a second. Let's reason about L2s.
Starting point is 00:46:30 I think we need a good definition of what an L2 actually is. And I've not seen a strict definition adopted by the community. We kind of know it intuitively, but not precisely. I think a good definition would be: an L2 is a chain that derives its security from the underlying layer one. Security in various senses: it could be liveness, it could be security of funds,
Starting point is 00:46:56 the eventual retrievability of the funds, and so on. But you need to derive some security, some significant security, from L1. So a side chain, for example, is not an L2, because your security entirely depends
Starting point is 00:47:15 on your set of validators. They can do whatever they want with your money: they can loot, they can steal, they can freeze, whatever. With a ZK roll-up, the security is 100% derived from layer one. You can always guarantee that the users will eventually be able to withdraw all the funds no matter what the validators do, if the contracts are immutable, if they don't have the right to arbitrarily change contracts, and of course if there are no bugs in the implementation,
Starting point is 00:47:43 which is a significant risk, which is not negligible, but let's set those two factors aside for now. In this case, a ZK roll-up has ironclad security, exactly the same as Ethereum in terms of its properties. The validium is in between. All the transactions in a validium are enforced by zero-knowledge proofs on Ethereum. So it's not possible to make anything invalid. It's not possible to execute something not on behalf of the user.
Starting point is 00:48:15 So in this sense, it derives a very significant part of its security from Ethereum. And for some use cases, you don't care that much about data availability. You only care about the correctness of the code. An example of this would be, let's say, online voting. Let's ignore censorship resistance for now; let's imagine that we have some perfectly censorship-resistant input. Because maybe each party collects their own votes, and then each party submits them.
Starting point is 00:48:54 Say we have the blue party and the red party. Both of them get as many votes as they can from their respective proponents, and then each of them submits a transaction on this chain. And all you care about is to count the votes correctly, and then you want to publish the result on chain. But you want to make sure that it's actually correctly available on chain to other contracts, not just demonstrated to humans,
Starting point is 00:49:22 which you could do off-chain by generating zero-knowledge proofs. You want to make it available to contracts. So for this, data availability is really not important. All you care about in the end is that the result is correct. The same applies to oracles. When you do oracle updates, you don't care about the state of the oracle update,
Starting point is 00:49:42 because you can discard that. If the chain is frozen, who cares? You will be able to make new oracle ticks on the new chain when users migrate, right? But what you care about is that the oracle updates have been correctly verified, that they're coming from trusted sources, that the weighted averages are computed correctly, and so on. So for those use cases,
Starting point is 00:50:05 Validium does not constitute any reduction in security, because you only use security for computation, which is perfectly secure. So from this perspective, I think that validium actually deserves to be called an L2 to some degree. When we talk about the actual real-life security, real-world security for the user assets,
Starting point is 00:50:29 it becomes a lot trickier. It's really hard to reason. Like now we're back to this game theoretical plane where we can imagine that the operator, the validators of this validium chain that locks a lot of, like, some billions of dollars of user assets, they could say, oh, you know what,
Starting point is 00:50:51 let's try to blackmail our users, let's freeze the state and then, like, demand ransom. Or maybe they don't freeze the state. Maybe they are hacked, because the servers that operate the validium have to be online. The keys have to be on the hot machines. So maybe there is some,
Starting point is 00:51:07 like ingenious way to hack the systems, and they compromise the majority of the validators, and then the hackers can demand ransom. It could potentially become tricky. So I would of course treat validium accounts always as significantly
Starting point is 00:51:24 less secure than Ethereum accounts. But then, I think the ideal solution is the combination of both, when you have something like a volition system, where you as a user can decide for each of your accounts whether you want it to be stored on Ethereum,
Starting point is 00:51:43 like on a ZK roll-up with full Ethereum security guarantees but maybe a higher cost of transactions on this account, while for some other account you decide that it should be a validium or ZK Porter account, where the data availability is secured off-chain by some committee with some economic security,
Starting point is 00:52:04 but you use it as you would use your savings and your current account. You would keep most of your assets, most of your savings, on the savings account. Maybe you put them in some DeFi protocols from your roll-up account, and you don't touch it every day. You only transact rarely on it. So most of your value is there. And then you move some of this value to your current account on the validium side of the volition.
Starting point is 00:52:31 On zkSync, it would be called ZK Porter. So you put it in ZK Porter: maybe you have your millions of dollars on the roll-up side and just a few thousand per month to pay for your daily needs on the validium side. And so you explicitly manage the risks of this exposure. And if everyone else is doing the same, then you have a lot less value locked on the validium side of the volition,
Starting point is 00:53:05 but you will have a lot more transactions, which actually makes perfect sense from this balanced perspective. You will have a lot of cheap transactions, a lot of trading going on there, a lot of computational activity, like oracle updates and arbitrage transactions and so on, happening on the volition side. Sorry, on the validium side.
Starting point is 00:53:23 Can I quickly question this to a certain extent? If you look at how, say, Celo is transitioning to being an L2, they're not foregoing their validator set, right? So while zkSync, and to be fair, all other L2s, have a centralized sequencer, which is definitely also an attack vector, right? The validiums that we see emerging already have a decentralized validator set. So basically there are many entities that in principle are allowed to build the blocks. Where do you see the trade-off here?
Starting point is 00:54:01 So, I mean, obviously, if you had a decentralized sequencer on zkSync, this point would kind of fall away, but currently you don't, right? You can't really compare the two. So first of all, we are committed to decentralizing the sequencer. We are actively working on this. We have a prototype, which we will unveil very soon. It's running on the HotStuff consensus. It's going to be a fully decentralized sequencer.
Starting point is 00:54:26 And what you want to get from the sequencer decentralization primarily is the resilience and the liveness of your network. You want to be sure that the network will remain operational for everyday transactions, even if one or multiple parties are compromised or going down or unavailable for whatever reasons. But the sequencer does not affect the security of your funds. So you could argue that liveness is part of security, and I will admit that. But if you put a lot of value in some DeFi applications, yes, you will be annoyed if it's not available,
Starting point is 00:54:59 and you might have some opportunity costs for not being able to use it for a couple of days, but it's nothing compared to the ability to lose all of your funds locked in, like, Uniswap, for example, right? But the validators can't steal from you, right? I mean, basically, if you have the ability to run your own full node, you can't be duped about the state of the chain.
Starting point is 00:55:23 You can never steal funds. The validators can't even do a double spend; all they could try is some short-term faking on chain that will never finalize on Ethereum. And
Starting point is 00:55:38 if you're making high-value transactions, you always want to wait for the finality on Ethereum, for the checkpoint that actually guarantees that this transaction is final, before accepting it. If you're expecting from someone
Starting point is 00:55:54 a payment of $1 million, you have to wait for Ethereum finality. You cannot trust the sequencer to just say, oh, I have a confirmation. Absolutely, but that's the same on a validium, right? So on a validium, the validators, or your guardians of data, the data availability providers, can actually freeze your state, can freeze the entire chain,
Starting point is 00:56:16 and so your value will become unavailable to you. But you need only a single honest validator, right? Because you only need one person who makes the data available. Great question. This is a big misconception about data availability systems. If you have a state transition which has been authorized by a decentralized set of validators, and at least one of them is honest, that will suffice: this one honest validator will share the data with you.
Starting point is 00:56:47 However, if the majority of your validator set is compromised, let's say two-thirds of the validator set is malicious, then they will do a state transition without sharing it with any of the honest validators. So even though you have one-third honest validators on the chain, they will never get a single bit of this data from the malicious parties. So what matters is who controls the state transition, not who controls the data
Starting point is 00:57:19 for the honest blocks. Because if you control the state transition ability, then you can always make a state transition into this unavailable state, and it doesn't matter that you have a lot of honest people on the chain. Okay, so you need more than a one-third honest minority.
Starting point is 00:57:38 Some quorum. The chain can decide whether it's 50%, two-thirds, or whatever, but some quorum that decides that, yes, we can accept the state transition. Because the smart contract on Ethereum has to make a decision whether this block has data available or not, and it's impossible for this contract to tell that subjectively.
Starting point is 00:58:09 It cannot talk to all the validators. It cannot do data availability sampling, because it's a smart contract. It can only work with a limited amount of data that's been passed to it. So what the contract can verify is: okay, do I have enough signatures? Do I have a quorum of signatures, let's say two-thirds? It's a limitation of this verification code.
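The quorum check just described is all an L1 contract can do; it cannot sample the data itself. A minimal sketch, with illustrative names and a trivial stand-in for real signature verification:

```python
# Sketch of an L1 contract's data-availability attestation check: it can
# only count quorum signatures, not verify that data actually exists.
# Names are illustrative; a real contract checks signatures cryptographically.

from fractions import Fraction

QUORUM = Fraction(2, 3)  # chain-chosen threshold, e.g. two-thirds

def has_da_quorum(signers: set[str], validator_set: set[str]) -> bool:
    """Accept a block's DA attestation iff at least QUORUM of the
    validator set signed off on it."""
    valid_signers = signers & validator_set   # ignore unknown signers
    return Fraction(len(valid_signers), len(validator_set)) >= QUORUM

validators = {"v1", "v2", "v3"}
print(has_da_quorum({"v1", "v2"}, validators))  # True  (2/3 signed)
print(has_da_quorum({"v1"}, validators))        # False (1/3 signed)
# Failure mode from the discussion: if 2/3 of validators are malicious,
# they can attest to a block whose data they never shared with the
# honest third, so one honest validator is not enough.
```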
Starting point is 00:58:31 Okay, yeah, that totally clears up my question. So if you look at how zkSync and all other L2s are implemented at the moment, they're all upgradable, usually from a single multisig, right? I totally understand why this is done: you need to be able to mitigate potential vulnerabilities, and on top of that, L1 hasn't really ossified enough, so there might be upgrades on L1 that might actively break things on L2. So how do you then mitigate the risk of compromised-key attacks, right? Because say there are 20 people on your committee and you need, you know, 10 signatures for upgrading the contract,
Starting point is 00:59:16 allowing you to steal funds and so on, like actually stealing funds, not just freezing them. How do you mitigate that risk? This is a great question. And if you would ask me how I feel about this upgradability, I would say I feel terrible. At zkSync, the very first version of the protocol we deployed, zkSync 1, which is now called zkSync Lite, was completely immutable.
Starting point is 00:59:44 It was immutable, with some upgrade notice period, because we could not tolerate the idea that some multisig controls upgradability, or that the team can control upgradability and just propose arbitrary changes; that completely defeats the idea of trustless scaling. Right? That was something we found disgusting. And then we had to learn the hard way
Starting point is 01:00:09 that we're way too early, and we have to react to vulnerabilities, because bugs will be found in the short term while the system is in development. You have to take a very paranoid, defensive mindset, with mechanisms of defense in depth, where you have multiple layers of protection,
Starting point is 01:00:29 and you should be able to react in a timely manner to threats. So we came up with the idea of the security council. The second chain that embraced the security council was Arbitrum earlier this year, and now it's a broadly popular idea that most other chains are embracing, and Vitalik and L2Beat are making a great effort to push for it, to set a high bar of standards
Starting point is 01:00:58 of what constitutes a true security council. So let's look at how a security council works. A team, like the core team of any protocol, should only be able to propose a change for the upgrade, which will execute with some time lock, so that the users have time to withdraw from the protocol if they don't like the change, or if the protocol looks outright malicious.
Starting point is 01:01:23 Now, if you deal with an emergency, if there is an open bug in your system, on live smart contracts, you want to be able to accelerate this, and this is where the security council would kick in. You would get some external people, maybe appointed by the governance of your protocol: hopefully, or ideally, a broad set of highly reputable people from the community, from a lot of different jurisdictions, who use off-chain cold wallets to control these keys.
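The propose-with-timelock plus council-acceleration flow just described can be sketched as follows. This is hypothetical code, not zkSync's or anyone's actual governance contracts; the two-week delay and the quorum size are illustrative assumptions:

```python
# Minimal sketch of the upgrade pattern described above: the team can
# only *propose* an upgrade, which executes after a time lock (an exit
# window for users); a security council quorum can accelerate it in an
# emergency. All names and parameters are illustrative.

TIMELOCK_SECONDS = 14 * 24 * 3600   # assumed two-week delay

class UpgradeGovernor:
    def __init__(self, council: set[str], council_quorum: int):
        self.council = council
        self.council_quorum = council_quorum
        self.pending = None  # (new_impl, eta)

    def propose(self, new_impl: str, now: float):
        """Team proposal: executable only after the time lock elapses,
        so users can withdraw if they dislike the change."""
        self.pending = (new_impl, now + TIMELOCK_SECONDS)

    def execute(self, now: float, council_approvals: frozenset = frozenset()):
        """Execute if the delay has passed, or if a council quorum
        approves the emergency fast path. Returns the new implementation
        on success, None otherwise."""
        if self.pending is None:
            return None
        new_impl, eta = self.pending
        emergency = len(set(council_approvals) & self.council) >= self.council_quorum
        if now >= eta or emergency:
            self.pending = None
            return new_impl
        return None

gov = UpgradeGovernor(council={"a", "b", "c"}, council_quorum=2)
gov.propose("impl-v2", now=0.0)
print(gov.execute(now=0.0))                                        # None: time-locked
print(gov.execute(now=0.0, council_approvals=frozenset({"a", "b"})))  # impl-v2
```

The regulatory-capture and cold-wallet arguments that follow are about how hard it is to compromise the `council` set itself.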
Starting point is 01:01:56 So what it gives you is protection from regulatory capture. It's a lot harder to compromise people in different jurisdictions, because now it has to be something universal. If one government goes rogue and tries to hijack the protocol, they will not be able to do it. If a single company controls the protocol, then the government of the country where the company is incorporated could come to them and say, make this upgrade or you go to jail, something like this, right? But if we have a broad set of participants, then it's much less likely, because it's harder. The second protection
Starting point is 01:02:35 it gives you is that compromising a lot of cold wallets, maybe hardware wallets or air-gapped machines or something similarly highly secured, is a lot harder than compromising online servers with hot keys in memory that are probably running at some cloud providers, that have access to the internet,
Starting point is 01:02:58 and that are vulnerable to zero-days. To compromise the cold wallets that are not connected to the internet, someone would have to go and break into all of the multisig participants, which is a lot, a lot harder. But it's still not a perfect solution. And the perfect solution would be to withdraw the rights of upgrades from both the team and the security council altogether. So I can imagine a world
Starting point is 01:03:24 where the security council, appointed by the governance, broadly decentralized, highly secured with cold keys, can only freeze the protocol for a short period of time if they see an emergency. If there is a bug that needs treatment,
Starting point is 01:03:49 they should go and do this, and then they should bring this question to the protocol governance, and the protocol governance should decide and then make appropriate changes and make the emergency upgrade on the protocol. And then even in this case, the question is: what is governance? Is it just the majority of the token holders? Is it some broader set of participants?
Starting point is 01:04:11 You know, I really admire, and want to give a shout-out to, the Optimism team for doing a lot of work on the process of governance, elaborating different schemes. They have the idea of the House of Citizens and the House of Tokenholders, and you want a consensus between both of them. And even then, we might
Starting point is 01:04:44 have some better mechanisms, such as eventually bringing up this question to the consensus of layer one, you know, maybe initiating an upgrade in a way that can only be accomplished with a soft fork of layer one itself, if there is a disagreement between different parties of these governing entities. So this is a big question which requires a lot more research, but I hope I could shed some light on it. Cool, that was very helpful. So one question I think is very connected to this: you mentioned the risk of bugs. I mean, I've also heard other people being quite concerned about this with regards to ZK roll-ups. Can you talk a little bit about where the risks of bugs are, and what kind of consequences could we have if there are maybe bugs in different places?
Starting point is 01:05:28 Well, all of the roll-ups are relatively new systems that need to be battle-tested for a long period of time, with a lot of money, until we have enough confidence that they are secure. The bugs can happen in various places for ZK roll-ups. If you look at the complexity of the systems, the complexity is bound to different isolated components. So you could have a bug in cryptography, potentially, theoretically, that could allow you to fake proofs. Or you could have a bug in circuits that would allow you to, if you are the validator,
Starting point is 01:06:11 you can submit any proposal for the new block. You could submit a malicious block and then fake the proof that this block is actually valid, right? If there are bugs, if you forgot some constraints, if some things are missing in the circuit. Mitigating these things is actually a lot easier to do in ZK roll-ups than, say, in optimistic roll-ups, because you can create redundant systems. The way it works in all mission-critical systems today,
Starting point is 01:06:40 in life-critical domains like aviation, nuclear energy, and so on, is that you never rely on a single component for security, or for safety, for that matter. In aviation, you always have multiple engines and multiple sensors. The same thing can apply here. You don't want to rely just on the validity proofs. You want to do things like 2FA,
Starting point is 01:07:03 where you need two separate mechanisms. In the case of a ZK roll-up, for example, it would be: first, a consensus of the validators has to agree that the block is valid, and only then do they produce the zero-knowledge proofs, and the Ethereum contract verifies both the signatures of the majority of the validators and the zero-knowledge proof, and only if both match do you make a state transition. So now if there is a bug in cryptography, in addition to breaking cryptography, you have to compromise the majority of the validators, which is hard,
Starting point is 01:07:39 because most likely they were honest to begin with, and you would have to buy a lot of stake to be able to reach that critical quorum. But then, in addition to that, if you introduce things like withdrawal delays, as we have in ZK Sync Era, for example, where withdrawals are delayed by a few hours on top of just verifying the proof, you could say that
Starting point is 01:08:08 you would also need to compromise the Security Council, who would be able to spot the problem and freeze the contract before the malicious state transition is finalized, which brings you to 3FA. And then you can add one more layer of protection. You can say that in addition to the validators, the zero-knowledge proofs, and this kind of fraud monitoring by the Security
Starting point is 01:08:28 Council, we would require a trusted execution environment, like SGX, for example, to prove that the state transition is valid. That would give you four layers of protection that you have to bypass. And you don't have to rely on Intel or Nvidia or other providers for this. If the SGX proofs become unavailable, then the governance can always intervene and just switch them off.
Starting point is 01:08:56 So you would always have a kind of multisig of several different components, with a growing degree of complexity required to break them all, for security.
Okay, cool. Thanks for explaining that.
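The defense-in-depth scheme Alex describes (a validator quorum, a validity proof, and an optional TEE attestation, each an independent layer that must pass) can be sketched roughly as follows. This is a toy model in Python; all names, types, and thresholds are illustrative assumptions, not zkSync's actual contract logic:

```python
# Toy "multisig of mechanisms": a state transition is accepted only if
# EVERY enabled security layer approves it independently.
# Illustrative sketch only; not zkSync's real verification contract.

from dataclasses import dataclass

@dataclass
class StateTransition:
    new_root: str
    validator_signatures: set      # addresses that signed the block
    zk_proof_valid: bool           # result of verifying the validity proof
    tee_attestation_valid: bool    # e.g. an SGX quote, if that layer is on

@dataclass
class RollupVerifier:
    validator_set: set
    quorum: int                    # e.g. two-thirds of the validators
    require_tee: bool = True

    def layers(self, t: StateTransition):
        """Each layer is an independent check; all must pass."""
        yield len(t.validator_signatures & self.validator_set) >= self.quorum
        yield t.zk_proof_valid
        if self.require_tee:
            yield t.tee_attestation_valid

    def accept(self, t: StateTransition) -> bool:
        # An attacker must break every mechanism at once to push a bad root.
        return all(self.layers(t))
```

In this sketch a transition is rejected if any single enabled layer fails, which is the point made above: a bug in one component (say, the proof system) is not enough on its own.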
Starting point is 01:09:12 So maybe a final topic we wanted to touch on is one thing that's being discussed a lot, which is the sequencer and the decentralization of sequencers. It sounds like, if I understand the ZK Sync path correctly, you basically want to have the technology to allow lots of different people to run lots of ZK rollups and to choose whether they want a centralized sequencer or a decentralized sequencer, and maybe different types of parameters. Is that correct?
Starting point is 01:09:53 And then for the path of, okay, I want a decentralized sequencing solution: are you guys building a decentralized sequencer? And what does that look like?
This is a correct statement. We are focused now on the ZK Stack, which is based on the source code of ZK Sync Era, and this is a modular framework. This is a framework where you can replace different components,
Starting point is 01:10:21 from the sequencer to the data availability layer to the way you handle MEV. All of these things are going to be customizable, and you have the full freedom to choose which components you use in what way. But we have to provide the basic components, some of which we are working on at Matter Labs, some will be provided by the community, and some will be provided by the grants program
Starting point is 01:10:48 that we want to initiate. But the sequencer is really fundamental. It's one of the most important parts of the stack, and this is something we definitely want for Era. We don't want Era to remain a chain with a centralized sequencer. We want to decentralize it in all aspects as soon as we possibly can. So we are actively working on a decentralized consensus.
Starting point is 01:11:15 As I said before, we're using HotStuff, or a modification of HotStuff called HotShot, for it, which gives you a very decent throughput at a very decent latency. For ZK Sync, we always take the user needs as the north star of where we want the architecture to go. We always architect back from the ideal user experience, the vision. And the ideal user experience is that every transaction confirms almost instantly, within under one second, and costs under one cent. So that's the utopia, the crypto utopia, that we want to build.
Starting point is 01:12:09 And from this perspective, the consensus has to be decentralized, but also fast enough and low-latency enough. HotShot was meeting these criteria, so it became the choice.
So just to clarify the consensus here: how does that work? Does it basically mean you have, kind of like in proof of stake,
Starting point is 01:12:38 that let's say there is someone who is the leader at this point who does the sequencing, and then a bunch of data gets passed around and the others, using HotStuff, attest to the validity or the correctness of it? And then that's confirmed and then you go to the next leader. Or is HotStuff just used to choose the sequencer,
Starting point is 01:13:09 and then from some point on you just have this one party that's the sequencer? How does that work?
So the blocks are being produced roughly every second. And every second, the leader proposes and all of the validators
Starting point is 01:13:24 reach consensus on what's the next block. And it's fault-tolerant, so if the leader or some of the participants go down or become unresponsive, it will self-repair quickly. But every second we have a decision.
Yeah, yeah. Okay.
Starting point is 01:13:42 Yeah, yeah. Okay. And then do you feel like all of these things that are like huge topics at the moment on Ethereum, you know, like around the MEP pipeline and, you know, the proposed builder separation, those kind of things? To what extent will us carry over to like this and the kind of decentralized sequencing Sika sync roll-ups? this is a really, really good question. I don't have a definitive answer to it. We are in the research phase. We're looking at all the different developments there. We're in touch with the Ethereum Foundation, with guys building.
Starting point is 01:14:21 There are really amazing teams out there being shared sequencers who think that shared sequencing is the paradigm for Ethereum. I don't have a firm conclusion yet, like how it will work. I have some concerns which I shared with all of them about the centralization of power if we come to a world where proposal builder separation leads to just a few entities
Starting point is 01:14:46 essentially building all of the blocks on all the chains, I think that's a bit dystopian, because it will give them a lot of soft power. You don't want, eventually, some Google or Facebook, some big corporation like this,
Starting point is 01:15:05 to be powering all of the blockchains with merely nominal decentralization. So we want to design systems that actually encourage decentralization in various regards. But there is a popular opinion that this might not be possible, and that eventually, because of the complexity
Starting point is 01:15:27 of MEV extraction and the value that you can produce from it, this inherent specialization of builders will inevitably lead to this outcome. I don't want to believe this. I want to believe that we will be able to come up with designs for broadly decentralized systems
Starting point is 01:15:50 with a lot of individual participants, with ways to actually mitigate MEV and prevent at least toxic MEV from happening, like targeted MEV attacks. I think there are designs out there that can largely accomplish this, but I've also heard concerns about schemes like encrypted mempools: that they could lead to other problems, like less efficient arbitrage on such systems, and so on. I don't have a definitive answer, but we have a research team focused on this,
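The encrypted-mempool idea mentioned here can be illustrated with a simple commit-reveal sketch: the sequencer fixes an order over opaque commitments before transaction contents are revealed, so it cannot single out specific transactions for reordering or front-running. Real proposals typically use threshold encryption; the hash-based commitment below is a simplifying stand-in, and all function names are illustrative:

```python
# Commit-reveal sketch of an encrypted mempool: order first, reveal later.
# Because the sequencer only ever sees hashes at ordering time, targeted
# (toxic) MEV against specific transactions is not possible; ordering here
# is first-come-first-serve over the commitments.

import hashlib

def commit(tx: str, salt: str) -> str:
    """Commitment to a transaction: hash of salt plus contents."""
    return hashlib.sha256((salt + tx).encode()).hexdigest()

def sequence(commitments):
    """Fix the block order while contents are still hidden (arrival order)."""
    return list(commitments)

def reveal(ordered_commitments, openings):
    """Openings map commitment -> (salt, tx); verify each against its
    commitment and assemble the block in the previously fixed order."""
    block = []
    for c in ordered_commitments:
        salt, tx = openings[c]
        assert commit(tx, salt) == c, "opening does not match commitment"
        block.append(tx)
    return block
```

The key property is that the order is bound before the contents are visible, so any attempt to swap in different contents at reveal time fails the commitment check.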
Starting point is 01:16:31 and we will be just trying things out. I can promise you that ZK Sync will stay true to our core values. We have not discussed the ZK Credo yet, but I encourage everyone to read and contribute to the ZK Credo, which is our mission, philosophy, and values statement. We will do everything we can, and we want our community to enforce it,
Starting point is 01:16:52 to go in the direction of maximum decentralization and trustlessness. So we will pick the designs for Era, and I think the community will support it, that maximize decentralization and trustlessness and minimize MEV. But for hyperchains, I'm sure that we will see a lot of experimentation, and people will be trying all the different models,
Starting point is 01:17:13 from extracting maximum MEV and distributing it to the community, to trying to minimize MEV with first-come-first-serve principles, or encrypted mempools, or some other novel approaches which I'm not aware of. And eventually the market will decide what works best.
Starting point is 01:17:34 I wish we could go on, because lots of these topics sound super interesting, but we are already way over time, so we'll just have to have you back at some point. Thank you so much for joining us, Alex. It was super insightful.
Thank you, it was my pleasure. It was a very interesting conversation. Thank you for the great questions.
Thank you so much.
Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Starting point is 01:18:12 Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released. If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
