Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Jinglan Wang & Karl Floersch: Optimism – The Optimistic Approach to Ethereum Scaling

Episode Date: April 21, 2020

The Plasma Group was a not-for-profit research group focusing on layer-2 scaling on Ethereum. Optimism is a new Public Benefit Corporation that builds on the lessons of the Plasma research and is implementing Optimistic Rollups. This solution scales Eth 1.x and offers near-instant transaction finality on Ethereum, while providing over 100x transaction throughput. Jinglan Wang and Karl Floersch, co-founders of Optimism, explain the transition to this new entity and its goals moving forward.

Topics covered in this episode:
- Jinglan and Karl's backgrounds and how they came to work together
- How the Plasma Group and all of the Plasma research evolved into Optimism
- Funding of Optimism through Gitcoin, and the lessons learned from donation funding
- What Optimistic Rollups are and how they achieve scaling
- How to make an Optimistic Rollup chain
- Optimistic Rollups compared to other scaling solutions like Truebit, sharding and zkRollups
- How Optimistic Rollups fit within the Eth 2.0 roadmap
- The role of an aggregator and how to become one
- Submitting transactions, the fees involved, and different finality levels
- Problems that could arise from mining transactions on incorrect states
- Who is using the OVM alpha and what's next for the project

Episode links:
- Optimism Website
- Plasma Group Website
- Introducing the OVM - Plasma Group Blog
- Ethereum Smart Contracts in L2: Optimistic Rollup - Plasma Group Blog
- A New Way to Scale - Optimized Optimistic Rollup — IDEX Blog
- Optimism Twitter
- Jinglan Wang Twitter
- Karl Floersch Twitter

Sponsors:
- Least Authority: Register for Security Sessions on April 30th to learn about security audits for your blockchain project - https://leastauthority.com/meetup

This episode is hosted by Friederike Ernst & Sunny Aggarwal. Show notes and listening options: epicenter.tv/336

Transcript
Starting point is 00:00:00 This is Epicenter, Episode 336, with guests Jinglan Wang and Karl Floersch. Hi, welcome to Epicenter. My name is Sebastien Couture. Today, our guests are Jinglan Wang and Karl Floersch, the co-founders of Optimism. Optimism is a new company. It was founded just a few months ago, and it's essentially a spinoff of the Plasma Group. So the Plasma research was led by this nonprofit group of researchers, and they were building layer-two scaling solutions for Ethereum. And that has now morphed into a public benefit company, which is continuing on that work. Optimism is building optimistic roll-ups as a scaling solution for ETH1.X, and this could be used
Starting point is 00:00:55 in ETH2.0. And as you'll hear in Sunny and Friederike's interview, Optimism is benefiting from the many, many lessons that were learned during the plasma era. I'm going to say that up front: this is a pretty technical episode, and I'm not the most knowledgeable person about the details of plasma and optimistic roll-up implementations, but I'm going to try to sum up the advantages of optimistic roll-ups as I understand them. So with optimistic roll-ups, the data is stored on the main Ethereum chain. So the data availability problem that existed in plasma essentially goes away. Optimistic roll-ups also offer upwards of 100x in scaling, which is pretty impressive. Now, in terms of limitations, because the data is stored on-chain, optimistic roll-ups could have higher fees than
Starting point is 00:01:43 with plasma since one has to pay for that storage. And as with plasma, there is a waiting period of about a week in order to exit onto the main chain to cash out effectively. And this is due to the dispute mechanism needing time to detect fraudulent transactions, just as with plasma. Here's what you'll learn in this interview. Gin Lang and Carl's background and how they came to work together, how the plasma group and all the plasma research evolved into optimism, how Gitcoin funded most of optimism and the lessons learned from donation funding, what are optimistic roll-ups and how they achieve scaling, how to make an optimistic roll-up chain, optimistic roll-ups compare to other scaling solutions like Truebit, sharding, and ZK roll-ups,
Starting point is 00:02:29 how optimistic roll-ups fit in the Ethereum 2.0 roadmap, the role of aggregators, and how to become one submitting transactions, the fees involved, and the different finality levels, problems that could arise from mining transactions on incorrect states, and who is using the optimistic virtual machine, OVM, and what's next for the project? As you know, I'm organizing reset everything. It's happening on April 29th, and this is where academics, entrepreneurs, and thought leaders will come together to discuss the lasting effects of the COVID-19 crisis. Let me tell you about some of the amazing. We have lined up for this conference. We have Jeff Jarvis, Professor at CUNY and author of
Starting point is 00:03:11 of What Will Google Do, Brian Bellendorf, Corey Doctoro, Robin Hanson, who's been on podcast before, Associate Professor of Economics at Georgetown University, Riva Tess, Senior Director of Strategic Technology and Infocatives at Intel, Ryan Selkis, who was on the podcast recently, Dr. Yelena Kikmanovic, who is a clinical psychologist at Georgetown University and an expert on anxiety. And the list goes on and on. So to check out the full list of speakers, go to Reset Everything. Everything. Events, and you can also get your free tickets here. Once again, it's happening on April 29th. We've all heard of different projects talking about how they recently completed a security audit. And sure, security audits are important, but security should be incorporated into every aspect
Starting point is 00:03:55 of your project and your organization. So the ETH2.0 spec was just audited by Least Authority, and Least Authority approaches project reviews from a whole. holistic perspective, and they engage with teams on different levels. They do one-time code audits. They do security consulting, and they also do ongoing security support. And they cover requirements for each project by involving different team members to best utilize their various specialized skill sets. This is the security by design approach. So to help you get into the security by design mindset, lease authority is hosting their first security sessions on April 30th. This is a free online meetup where you will learn about the low-hanging fruit common vulnerabilities that you can
Starting point is 00:04:38 fix today, what a security audit actually looks like from start to finish, and exciting developments in blockchain security research. You can ask questions to their team of expert security researchers and get advice about the things that you're most concerned about. And it's free. So to sign up, go to least authority.com slash meetup. Once again, it's on April 30th, and there will be several sessions to accommodate for all time zones. And with that, here's our interview with Jin Lang Wang and Carl Flourish. So we have with us on the show, Jing Lin Wang and Carl Flurish, who are the co-founders of optimism and previously the plasma group. Carl, you've been on the show once before, and Jing, you're new to the show. So let's start with you. Could you tell us a little bit about what your
Starting point is 00:05:24 background is and how you got involved with Ethereum and in the blockchain space in general? I guess I used to be a bitcoiner, but I quit NASDAQ to work on Handshake with Joseph Poon, and he introduced me to a bunch of Ethereum people, and I was like, wow, these people are really nice. I don't know, there wasn't like a strong reason. It just happened. And Carl, last time you came on, you were working with the Ethereum Foundation. And can you tell us a little bit about what you've done since then and also how you guys started working together? We actually started working together on Crypto-economics. Study even. And then Crypto-economics that study was an educational resource
Starting point is 00:06:03 and kind of went over all of the blockchain's fun stuff. Heath II seemed to get a lot of good attention. There were a lot of great researchers that were joining, Justin Drake, and it was like looking really good. And so the thing that needed the most attention, you know, I felt, was plasma. And so then we started working on Plasma Group. We founded that together as well with Ben Jones, another co-founder and Kelvin. And then we worked on plasma for a really long time, kind of came up with this whole plasma
Starting point is 00:06:32 predicate's design, and then, you know, was noodling around and figured, you know, plasma gives us this infinite scalability, but do we really need that infinite scalability off the bat? And so one thing led to another. We released this whole optimistic roll-up post and then Unipig. And that kind of led to the formation of what is now called optimism. So optimism did emerge out of the Plasma Group. And it was very much a non-profit venture at first. And now it somehow has turned into this for-profit thing.
Starting point is 00:07:11 So tell us about the evolution of the project. Plasma Group was a research group. So we didn't want, you know, profit-moder. or business models to get in the way of doing the research. If you don't know yet what the thing you're building looks like, then how can you commit to a timeline and a business model and all that? However, there are difficulties with being open-source, non-profit, public good in crypto, as a lot of people talk about this on Twitter,
Starting point is 00:07:42 a lot of people have experienced this themselves. How do you find a sustainable source of funding? And so even though we set out to solve the scaling problem, we ended up thinking a lot and spending a lot of time trying to figure out the funding problem as well. There is overlap between the leadership of Plasma Group and optimism, but they are very much separate projects, separate entities, separate goals. But emotionally it feels very similar because of the human overlap. We stopped working on Plasma Group because we did. did some user testing with IDEO, which is an awesome VC firm and a design consulting firm in San Francisco. We got a bunch of Ethereum power users together in one place, and we asked them,
Starting point is 00:08:31 look at this really awesome thing we built. Don't you want to use it? And nobody wanted to use it. You know, it's the journey to finding product market fit. And you recently raised 3.5 million from paradigm and IDEO, right? So basically, what's the plan for operating this? as business. We're still in the process of trying to find product market fit, building something that people will actually use with a better developer experience than plasma, actually talking to more users on the way to building the final thing this time. In terms of how we turn this into a business, I think it's really important to everyone on the team that this remains always open source
Starting point is 00:09:13 and always a public good, which excludes a lot of business model types like SaaS or whatever. And it's also important to us that this becomes more decentralized over time, even if it doesn't start out that way. I think MIVA, minor extractable value auctions, is, you know, one of the key ingredients that we've been noodling on. but I'm sure we'll come up with other mechanisms to sort of support MIV or maybe even different mechanisms if we don't end up using it. But the business model part is not the main thing that we're thinking about at this stage. If you're talking about Miva, just for our listeners, do you mean something like giving miners the right or selling the right to reorganize transactions in a single block so as to basically extract the things that otherwise, you know, like front runners and so on, would have been able
Starting point is 00:10:11 to extract? Is that what you're referring to? That's correct. Miners already do this on layer one and we're just imposing a Harburger tax-like mechanism on layer two to just return some of that value back to the users because the users are the ones getting shafted by it. How big do you think that value is that actually currently goes to the miners in addition to the transaction fees? So can you kind of put a cap on that? One of our great friends is Phil Dyan. We have heard from our friends, the money that is extractable from things that are not just plain old transaction fees is actually extremely high. Some estimate, like as much as the transaction fees themselves, some estimate even more. In fact, there are instances where in one day there's
Starting point is 00:11:00 $300,000 that was, you know, potentially extractable by miners when we had that whole flash loan fiasco just a few weeks ago. So it can be very, very high. This money should not be just going straight into, you know, the pockets of minors who we don't necessarily know, don't necessarily support the community. I'm sure that many of them do. And, you know, my heart goes out. to them. However, I think that the miners would agree that giving back and kind of supporting the ecosystem by, you know, we spend transaction fees, the money from our transaction fees goes back to be reinvested in the protocols that we use in the not just protocols, but in the just general social services that we could create on Ethereum. So that's like a real, just such a
Starting point is 00:11:47 potent vision that I cannot wait to see it unfold. And I really want this community. I want to live in an ecosystem that is self-sustaining. And so that, you know, never again will we have to raise money? You know, we'll just be living off get coin grants and, you know, happy, happy la-la land. Majority of my time at Plasma Group was spent finding funding. And I think because, you know, we have Carl's great track record in the community and rapport with organizations, as well as Ben Jones and Calvin, it was relatively easy for us. to find grant funding, but it still wasn't easy. And the problem with donations is that it's just
Starting point is 00:12:30 not reliable. I think Gitcoin is a really amazing distribution mechanism for funding. And we're thinking, okay, if we can rely on Gitcoin as a distribution mechanism, can we create like a money out of not completely thin air, but a little bit mechanism? A sustainable revenue source. There we go. As of the last episode, when Carl was on, you know, you were talking about plasma cash. And I was kind of following along with plasma. And, you know, I understood plasma. I understood MVP. I understood cash.
Starting point is 00:13:02 I understood debit. And then came all this stuff, plasma prime, plasma leap, I just lost track. And could you give us a bit of catching up on like, what was the saga there? What ended up happening to the plasma era? Plasma is dead. All the research has been thrown out. I'm just kidding. That is absolutely false. All of the Plaza research, all of these projects and protocols are alive and well, and in fact, inform all of the new flashy terms that we use to describe our protocols. And like little known fact, for instance, Starkware is building a plasma. Their whole data off chain and using the snark and then they have a data availability mechanism. So plasma is well and alive. And in fact, you said Leap, Plasma Leap is super. And in fact, you said Leap, Plasma Leap is super. super cool and almost feels like a, you know, first stab at the kind of goals that we were trying
Starting point is 00:13:57 to tackle when we started talking about this whole optimistic roll-up thing. Because one of the key insights was that, okay, we really need general purpose computation. We need smart contracts. We can't do, you know, UTXOs. We can't, like, impose a new programming paradigm on developers. It's way too much. And so we kind of went back and forth, look at it. for designs that got us further and further along that spectrum to like Ethereum normalcy until we realized one little bit sobering fact about plasma. And that is that because plasma has data availability challenges, there can be liveness failures that are imposed by the, you know, the quote, operator, the main guy in charge, or even the consensus protocol, if there's a, you know,
Starting point is 00:14:47 tendermint consensus in charge. There can be a liveliness failure. It's not a safety failure, so it's not horrible, but liveliness failures break a lot of kinds of smart contracts. And so that's why we kind of move towards the roll-up paradigm just for the kind of constant liveliness properties. But all of that research has been used and serves as the foundation for what we're building. Could you tell us then a little bit about what is optimistic roll-up and sort of what the history is, is it sort of something that came out of plasma research, or is it something that kind of happened in parallel and you guys decided to kind of like jump rails to this one? A couple things happened at the same time. In the wake of our sobering realization that nobody
Starting point is 00:15:36 wanted to use our extremely complex, esoteric developer tooling to build predicates custom on plasma, like some flavor of plasma that they didn't understand anyways. You know, know, we started talking to a lot of people in this space like Dan Robinson, Vitalik Buterin, and most notably Barry White Hat, at a conference that we hosted called Scaling Ethereum, where we realized that roll up was super dope and that we should think more about it. Similar concepts have been dreamed up by others in the past, notably Shadow Chains from Vitalik in 2014. Arbitrum was also super early, Ed Felton also in 2014, I think. a lot of the concepts that are swimming around in layer two are all very similar. It's tempting to
Starting point is 00:16:22 add a new name onto it because then you feel like you discovered something new. But we really are just renaming prior work by calling it optimistic roll-up and building the Unipig demo really catapulted it into hype, Twitter hype. And we named it optimistic roll-up, our implementation, you know, optimistic execution, a roll-up with fraud proofs, all that fun stuff. and importantly, smart contracts. Like, it scales smart contracts. But since then, the community has kind of taken the term and adapted it to mean just generally roll-up with fraud proofs.
Starting point is 00:16:57 So not everyone who calls themselves an optimistic roll-up scales smart contracts or has the optimistic virtual machine. It's a crazy universe out there where, like, there's a million different articles with, you know, random little different tidbits about optimistic roll-up. And one of the little known tidbits is I was explaining optimist to grow up to someone, and they were like, huh, that sounds like a lot like Pocod. And I was like, oh, yeah, that's right.
Starting point is 00:17:25 And hilariously, you know, so like, you know, Cryptokitties is doing a kind of similar kind of roll-up scheme. But it turns out that the idea of coming to consensus on data availability first and then building things on top of that, this whole like roll-up paradigm, that's really where the kind of, ideas, I guess, really were started just exploding, right? There was so many different ways to use this data availability. And, you know, the concept of roll-ups generally, at least the term, come from Barry Whitehouse. I don't quote me on that, but I know that Vitalik was talking about, you know, his whole ZK Snark design with 500 transactions per second. And so then taking that, but there's, you know, ZK Snarks, oh, why don't we use fraud proofs instead of ZK Snarks? And so that's
Starting point is 00:18:14 the kind of the distinction between optimistic roll-up and ZK roll-up. And it kind of was this natural evolution that fell out and, you know, merge minimum viable consensus, like all of these different protocols kind of built on the the same theory. We'll deep dive here in a second, but can you give us an overview in a nutshell of how optimistic roll-up actually achieves scaling? The way that optimistic roll-up scales is it scales computation, not data availability. And so the base layer, the kind of a layer one blockchain is used to post all of the data.
Starting point is 00:18:50 And that allows us to recreate the head state at any time because we can download the Genesis state. We can download all of the different transactions. And then we can play them and recreate the head state. And that's what gives us that liveliness property, because we always will have the full record of transactions. And then on top of that, there is, you know, the whole fraud-proof mechanism, which is essentially a kind of interactive game, which serves as, an Oracle to make sure that everyone is downloading and starting from the same correct state. And that's what allows Ethereum to know, oh, this coin was burned on this optimistic roll-up, and now I should mince something on Ethereum.
Starting point is 00:19:28 So could you walk us through how does optimistic roll-up work? What are the steps involved? Like, how do we make an optimistic roll-up chain? The first step is deploying a smart contract, which has the kind of optimistic roll-up Genesis state built into it. It says this is the first state. And then the second step is someone posts transactions to a kind of a smart contract that serves as just a log of what transactions have happened. They don't store information, the actual transaction data. They store kind of an accumulator of what transactions were added. So you start building up this
Starting point is 00:20:07 list of transactions. And then once you have a list of transactions, you play an interactive game to determine what the actual state, resulting state of applying those transactions is. And so you can generate this head state. And so someone submits the transaction. And then it could be the same person. It could be a different person submits a state route, which is a kind of claim that this is what the state is after applying that transaction. And then you wait a while. And this is optimistic roll-up, notably. So in ZK roll-up is even simpler. You literally just post the transactions, you post the head state and you prove that the head state is valid. Then in optimistic role, it's a little bit more complicated, but it actually has some great benefits.
Starting point is 00:20:49 You post the head state and you wait a timeout. So you wait a week, you wait a day, and you say, okay, well, if no one has proven this head state wrong in that time frame, then I'm just going to accept it as fact. And so after, you know, the timeout period, that state route becomes finalized and you can move on with your life. And you can use that head state to, for instance, withdraw your coin. And so what you do is you get the kind of availability properties, the liveliness properties of Ethereum,
Starting point is 00:21:20 and you also get the safety properties of Ethereum, assuming this interactive fraud-proof game plays out correctly. It's relatively simple. And the only difference between this and plasma is that in plasma, you don't post the transaction date on chain. That's it. Is this sort of similar to like true bit? I remember in true bit where like, you know, there people would post these like computation that was supposed to be done on chain.
Starting point is 00:21:46 And then you'd have someone would go ahead and post a solution. And then there was a challenge period. So is this sort of basically using true bit to build a chain? Essentially. And that's why I was saying Plasma Leap is like especially interesting. Because Plasma Leap was building a True Bit like. interactive fraud-proof game for arbitrary computation, and they're just using plasma, which had that, like, liveliness failure, you know, difficulty. But yes, it is like Trubit,
Starting point is 00:22:16 in that it is a kind of interactive game, which evaluates the validity of arbitrary computation. But in particular, our kind of what we call the OVM, which is the way that we run this computation in a way that is fraud-provable, right, that is compatible with fraud-proofs, this OVM actually doesn't share a lot of similarity with Truebit, because in Truebit, you have to implement a kind of machine virtualization scheme, and you have to have this interactive fraud-proof game. And it's extremely inefficient to play the state transitions on the main chain using a kind of machine virtualized virtualization layer. I won't go into the too much detail, but we instead, with the OVM, built something like Docker, which is much more efficient and allows us to
Starting point is 00:23:04 to kind of break, instead of segmenting up our computation into like computational steps, we're segmenting them up into transactions that are just normal Ethereum transactions. So it's much lower granularity. We'll come back to the OVM in a second, but let me unpack this first. So basically the way that I understand optimism now and the way that it sits on layer one is that basically our data is posted on Ethereum layer one, then computation is conducted off-chain or in layer two. And then basically the results are posted back to layer one.
Starting point is 00:23:38 So it seems to me that in plasma, putting all data on chain was always considered infeasible. So why has this changed? Like largely the reason was just plasma was promised billions of transactions for second, trillions of transactions for second. And the reality was, no one needs that much speed.
Starting point is 00:24:00 And you can actually do the calculations and you get like, you know, thousands of transactions per second using optimistic roll-up. And so at that point, it's like, who cares? If you have so many transactions, it's just not an issue. And like, we all, we need 100x scalability increase. We don't need a million X scalability increase, at least right now. There are some applications that do need that level of scalability. There are a lot of teams working on that, you know, like Crypto Economics Lab is Lume still
Starting point is 00:24:32 doing plasma, I don't know, Omisei Go, a lot of people are still working on it. If you have a custom chain for your specific decks, plasma's great. I think a lot of people classify Idex's optimistic roll-up as more of a plasma, and it seems to work really well for them. So no shade. It's just that no one wanted to use our plasma. How would you describe how this is different than sharding? Because this sounds to me to be very similar, almost, I mean, it seems to be what sharding was, from when I was really focused on sharding maybe like two years ago, this seems to have accomplished that. What separates this from what, you know, maybe ETH II call sharding or a polka dot call sharding and whatnot?
Starting point is 00:25:13 So I think I was the most annoying person at the most recent, like, ETH research, you know, jam session. There was this big little, little event because I went around to every researcher and I was like, hey, I just realized this roll-up stuff and this sharding stuff seems really similar. roll-up design that is so stupid, is it a shard? And the roll-up design is literally you put the data on chain and then you evaluate the state transition on chain, but you do them in separate steps. And I'm like, this has the exact same liveliness and validity properties as Ethereum. Is this a shard? I don't know. And so, you know, got into a bunch of debates. But I think there is at least some consensus that depending on how you use the term roll-up,
Starting point is 00:25:59 Roll-up is very close to synonymous with shard. It's a little different. You can come up with distinctions. All of these names are arbitrary. But if someone were like, oh, that's a roll-up or, oh, that's a shard, they have such similar properties in effect. There's asynchronous communication between different roll-up chains, right? There's asynchronous communication with different shards.
Starting point is 00:26:22 The distinction is not very clear. And that's a good thing because it means that the technologies that we're coming up with, are kind of coalescing around similar designs. But in the long term, what we'll probably see is we'll see, like, shards are the, you know, this one level of abstraction that people will use, oh, I'm on this shard, but then they're going to be inside of that shard, inside of a roll-up.
Starting point is 00:26:44 So it's all probably going to get a little messy. Why aren't we done with sharding then? Like, you know, I used Unipig, which is the demo you guys put together, and it seemed to work. So why is, like, roll-up not part of the current ETH2 roadmap? It is very much part of the current ETH2 roadmap. And in fact, the designs with roll-up have heavily influenced where, you know, the direction has been going. Like, people have said that sharding phase one is just like the roll-up chain, you know, phase essentially.
Starting point is 00:27:16 And people are just thinking, oh, Vital would be totally fine if we got to ETH2 phase one. And then we had a bunch of big roll-up chains and we called it a day. That's true. Don't speak for other people, but like no one, what I'm trying to communicate is like no one is particularly attached to the kind of evolution of this organic system. And I think that we will be building ETH II and we will be, you know, building these shards and building these roles all at the same time. And we're just going to have to watch it all play out and watch everything happen. Like I think it's, they're all fighting for the same cause. I think it all depends on timing.
Starting point is 00:28:00 If ETH2 Phase 1 comes out and that scales data availability and optimistic roll-up not only gives applications like the composability and interoperability they're used to, but also gives these shards native smart contract support and it's ready at the right time, then I'm not an ETH2 researcher or implementer. But I don't know if that scales Ethereum, then why go through all this extra legwork to build native smart contracts? contract support in a different way. Also, super bullish number, Vitalik's estimates of roll-up TPS on ETH2 phase one is 100,000 transactions per second. And right now, we're hitting, we'll say 200. By the way, there is one thing that's notable, and that is ETH2 provides two things. In phase one, it provides data availability kind of guarantees certificates, right? The other thing that it will provide in phase two is state transition validity certificates or, you know, guarantees. So it's a, you use the consensus protocol to know that data is available in phase one, and then you use the consensus protocol to know that data is valid in phase two.
Starting point is 00:29:11 And so this is not to say that ETH2 kind of data validity, you know, guarantees are not useful. In fact, they are extremely useful. and they may be the easiest way to get the highest level of, you know, immediate guarantees of data validity. But there's also an alternate universe where ZK. Snarks or whatever just become just incredibly efficient for arbitrary computation. And we just start proving everything up front and using the data availability layer, you know, just pretty heavily, but not the validity layer.
Starting point is 00:29:45 We'll get back to this later. But for now, I'd actually like to understand how the rest of the protocol works. So I understand that the data availability is solved because the data is available on the main chain. So how do the fraud proofs work? So basically how do I make sure that the computation that is done off chain is actually valid? The way that it works is we have something called the OVM. So the optimistic virtual machine. And in fact, you could call it the OVM, which is like optimistic Ethereum virtual machine.
Starting point is 00:30:16 Because it is basically an EVM. And so the way that it works is actually very simple. What you do is you initialize a pre-state. It's almost like a stateless client, if you happen to know what that is, but you initialize a pre-state, which is, you have a state root, and then you prove a few storage slots in that state root. And so you have the kind of current state of the chain. And then let's say someone committed a state root, the next state root,
Starting point is 00:30:44 the post-state route, and they committed it incorrectly. It was invalid. So the way that we prove that it was invalid is the stupidest simplest way possible. We literally prove that it's invalid on the main chain by doing the computation itself. So you just load up the pre-state, take the transaction, play the transaction on the main chain, do a bunch of computation, and then output a post-state. And then check to see if the post-state root equals the post-state route that was committed. And that's it. extremely simple. So how expensive is this to sort of rerun a transaction on chain?
Starting point is 00:31:25 Because my assumption was always that that was a relatively expensive operation. It is, in the normal way that one would do it. However, we do have a very nice scheme, which kind of borrows from Docker. So the way that Docker works is essentially it creates a, like, containerized set of storage and variables and, you know, directories. And so this is very similar. We get to, like, create this container of contracts. And then what we do is, once we have the container of contracts, we can literally just proxy all of the calls to make sure that they stay within that container.
Starting point is 00:32:08 So we have two kind of key components that enable this: we have a purity checker on-chain, so we make sure that the contracts that you're running go through this purity checker, and all of the opcodes are going through our kind of execution manager. And the execution manager is making sure that everything is routed to only contracts and only storage that is relevant to the fraud proof. Because you don't want to reach outside of your container and touch Ethereum state, because then you can kind of get a virus. You're, like, you know, polluting your environment. So we create this containerized thing. And so we actually only end up adding, with this and a transpiler that we have written
Starting point is 00:32:52 that turns your solidity code into solidity that kind of gets routed through this system. We add, let's say, it's something like a 20% overhead to S loads. It's pretty solid. And even in our unoptimized version, we have something like 150,000 gas added for the kind of initialization step of, you know, running these transactions. So it's actually not so bad.
Starting point is 00:33:20 And then you can use this as a foundation for even more optimized fraud proofs. But it's not worth going into those. Sorry, so just to make sure I understand this correctly: what's happening is, let's say I deployed a specific contract on the OVM, like on this optimistic chain. Are you saying that I basically also deploy the same contract at the EVM layer? And so the way you do the fraud proof is, when you need to verify this contract call, you instead run it on the EVM, like on the base chain.
Starting point is 00:33:58 Rather than kind of doing it step by step like Truebit did, you just have the same contract in two different spots. Am I getting the right idea here? Exactly. And you have to initialize the state so it matches perfectly. And this is the most naive version. There are optimizations you can make to cut it down even further. But in a sense, yes: code Merkleization, etc.
Starting point is 00:34:21 So does this mean that this would only really work with the EVM? Because let's say I wanted to do an eWASM-based OVM, right? The problem is, how would I run eWASM on the base layer EVM? Exactly. And this is why I kind of called it the OEVM, because in its current instantiation, it is an EVM-optimized system. Of course, the same techniques could be used, you know, if we had an eWASM base layer, right? But right now, this is just EVM focused.
Starting point is 00:35:01 I've tried following the ERC20 demo that you guys had. And, you know, it worked pretty smoothly. To be honest, I'm not quite sure what I did, but, you know, I followed the directions and all the tests passed. Understanding is, you know, a bit of a harder problem. But it was pretty cool. Like, you know, all I had to do was just change, okay, look, instead of compiling using the normal Solidity compiler, I just pointed to your compiler and poof,
Starting point is 00:35:25 I suddenly have magic scaling. Does this mean I can do this with literally any contract? Or are there limitations, like certain types of contracts that are not going to be portable or might need changes? Or is literally any Solidity contract portable in this way? It's pretty much one-to-one compatibility. There are some weird things, like blockhash, block number, and timestamp, that have slightly different behavior for a very strange technical reason.
Starting point is 00:35:54 Staticcall has slightly different behavior, but it's almost one-to-one. And that's because we're doing this really crazy Dockerized hack. Cool. But eventually, like, you know, once this thing is out of alpha or whatnot, it should be every contract. Could I copy the entire state of, like, the current Ethereum chain onto the OVM chain if I wanted to? We were thinking about, like, customer acquisition or adoption or whatever you call it. We were like, oh, how do we get all these companies to migrate?
Starting point is 00:36:24 And someone was like, oh, you just, you know, put up a bunch of hackathon prizes for people to clone popular Ethereum projects onto optimistic roll-up. So yeah, you could. So the computation happens off chain. It's done by so-called aggregators, right? So what exactly is it that an aggregator does? And how do I become one? That is a great question. In the most platonic form, aggregators submit state roots.
Starting point is 00:36:54 That's all they do. And they have a bond. And if they submit a state root that's invalid, their bond gets slashed. Note that anyone can submit a transaction; it's just the people who are attesting to the state, those are the aggregators, and they can get slashed. However, in the way that we are doing it, we are using MEVA, a MEV auction. And so in MEVA, we actually have what is called a sequencer.
Starting point is 00:37:16 And a sequencer is a kind of privileged aggregator. And what they have is the right to reorder transactions for a short period of time, so, like, 10 minutes or so. And they're also probably going to submit a bunch of state roots. So for the aggregators, all you need is a bond and a computer that can run it. And in, you know, what we're building, you have a sequencer, and for that, you need to win the MEV auction. So basically, if I'm an aggregator and I behave maliciously, I'm slashed. So I have to put up a bond. How large is this bond, and how is this determined?
Starting point is 00:37:49 Because in principle, if I'm an aggregator and I act maliciously and I want to censor certain transactions or I want to prevent certain things from happening, it's very difficult to actually put a price tag on that, right? Absolutely. So in the, wow, I'm juggling. two different approaches. But one approach is the kind of the simplest, which is you have these, you know, Vitalygoes into like 10th bonds or something like that, something really, really small on a per state root basis. So, you know, each state route you submit is, you know, you have a 10th bond.
Starting point is 00:38:23 The kind of designs that we're moving towards these days is we're kind of interested in having a much larger bond and a kind of single sequencer that has a really large bond that is doing most of this state route proposal. And the reason why is because we end up seeing in practice that there's really only one aggregator that like wins the race to the main chain over and over and over again and people kind of coalesce around it. And so instead of building a solution that kind of like tries to ignore that fact, we're trying to build the incentives around that fact to then, you know, make sure that everything, you know, the incentives are still a lot. aligned in that context.
Starting point is 00:39:02 And that's why we have this whole MIVA. So is the aggregator then in a sense an operator, like an operator you would have on a plasma chain? They are similar, except they cannot cause liveliness failures of, you know, at least in the sequencer, you know, model of more than 10 minutes.
Starting point is 00:39:19 And it's not even a liveness failure, in some sense. It's a liveness failure in that you don't know the transaction ordering, but you do know your transaction will be included. Now I want to look at this from the user perspective. So basically, I want a transaction included. Who exactly do I submit it to? Do I submit it to that single aggregator, or is there a pool?
Starting point is 00:39:42 What's the limit? So is it more of a bandwidth limit or a computation limit? Yes. So you submit it, at first, to the sequencer. And you probably will want to submit this using a commit-reveal scheme. So you send them the contents of the transaction
Starting point is 00:40:05 once you get a guarantee that it will be included. Not how it is currently done in our implementation. That is correct, because that's hard. But this is definitely the form that, you know, it needs to be. And then if that does not go through, you can submit it to a kind of backup aggregator if you want, but the most reasonable is just submit a transaction to the main chain. And the transaction can be anything arbitrary,
Starting point is 00:40:28 it can be, you know, get me out of here. It can be, you know, any number of arbitrary state transitions. The point of the priority aggregator is to get the instant transactions: you don't have to come to consensus on the ordering of the transactions, which is super fast. But we have this, like, backup free-for-all mechanism as well, for censorship resistance. So, I don't know, I don't think we actually expect anyone to be using it unless the aggregator, the sequencer, is misbehaving.
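The commit-reveal submission flow mentioned a moment ago (which they note is not yet how their implementation works) would roughly be: send an opaque commitment, get an ordering guarantee, then reveal. A toy sketch, with a salted hash standing in for the encrypted version:

```python
import hashlib
import secrets

def commit(tx_bytes: bytes):
    # Phase 1: the user sends only this digest, so the sequencer has to
    # commit to an ordering before it can see (and front-run) the contents.
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + tx_bytes).hexdigest(), salt

def reveal_ok(commitment: str, salt: bytes, tx_bytes: bytes) -> bool:
    # Phase 2: once inclusion is guaranteed, reveal the salt and contents;
    # anyone can check that they match the earlier commitment.
    return hashlib.sha256(salt + tx_bytes).hexdigest() == commitment

digest, salt = commit(b"swap 10 ETH for DAI")
assert reveal_ok(digest, salt, b"swap 10 ETH for DAI")
assert not reveal_ok(digest, salt, b"some other transaction")
```

The user keeps the salt private until the sequencer has promised an ordering, which is the property the encrypted-then-reveal-key flow is after.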
Starting point is 00:41:01 Okay, and basically the aggregator gets some sort of cut of the transaction fees or similar? Yeah. Also, they, you know, have the very profitable ability to actually order the transactions, which is, you know, the dark money that miners make on Ethereum that people don't talk about, and we're being very explicit about it in layer two. We know it's going to happen. You can't prevent it. I mean, you could, but it's hard. And so you might as well, you know, allow them to make money from it and pay a fee for the privilege. And how does this relate to the layer one fees? You pay layer one fees as well. So the sequencer is
Starting point is 00:41:45 receiving these transactions. They have a transaction fee. The sequencer is then using Ethereum; the sequencer is paying gas on Ethereum. But that gas is a very small fraction of what the gas would be if the transaction were executed on Ethereum directly. Okay, I get that. So now I submitted this transaction. It's been included. Do I get finality? Ah, so you get instant economic finality or, you know, crypto-economic finality.
Starting point is 00:42:09 That is what it has been called. But that is not the strongest form of finality. So that is good enough for, you know, relatively small transactions, medium-sized transactions. If you're, you know, spending a few million dollars buying a house, then you'd want the sequencer bond to be absolutely massive. And so you get a kind of instant, you know, medium-level guarantee. And then you get, you know, full Ethereum finality after, you know, not just one block confirmation, but, you know, eight, or, you know, 40 if you want.
Starting point is 00:42:43 So how does this work? Basically, why do I get different finality for small, medium, and large transactions? So what determines that? This is kind of the nature of, you know, consensus protocols in this decentralized setting, where even in, you know, ETH2 and sharding, you're going to be getting these quick guarantees that are, you know, pretty strong. But then as time goes on, it gets stronger and stronger and stronger. And so the reason why you get this instant guarantee in this case is because the sequencer will send you a signed promise. And the signed promise will say, hey, I will include this transaction, and I will not just include it, I will include it in this order. And I will, you know,
Starting point is 00:43:26 I guarantee that this is the state root after you included this transaction. And then I will see that and I will say, okay, well, if they lie to me, I can send a transaction to main chain Ethereum that will get that bond slashed and, you know, a portion returned to the prover, but a small portion. So how do you deal with reorgs, then? I mean, I assume there are two kinds of reorgs here: there are reorgs on the layer two, and then there are reorgs on the first layer. So how do these interplay, and do they kind of mess with my finality? Great. And this would mess with your finality if you did not have a kind of priority queue. And so the priority queue is saying, okay, we think that the sequencer will be able to get their transaction in before all of these different aggregators within some
Starting point is 00:44:18 time frame, and we'll give them a kind of buffer. So not only will they compete on, you know, the gas price auctions, and they can win those gas price auctions, but they can also just get a kind of head start versus all of the other aggregators, which may or may not even exist. And then you can parameterize it to give different levels of cushion for the sequencer. But you don't want to give them too much cushion, and you don't want to give them too little. It's kind of a game of parameterization. What are, like, the sort of cusp points of finality here? Well, there's one when you get a signed response from the sequencer,
Starting point is 00:44:55 then there's, like, another one when the transaction first makes it onto the chain, and then there seems to be another one when the challenge period, you know, finishes. But between that second and third one, does it sort of increase in finality as more blocks are built on top of it, or is it sort of flat and then a sudden cliff? That is a great question and a very astute, accurate kind of description of it. So, yes, those are essentially the cliffs,
Starting point is 00:45:25 and it's when it hits Ethereum and then gets mined a little bit. There are different ways to parameterize the optimistic roll-up finality window stuff. Some of them are more gradual in their increase. Ours is more sudden, in that you get this really, really strong guarantee, from a very large bond, immediately once it's been finalized on Ethereum. And then the most important is that, you know, one week later, one day later, depending on how you parameterize it, it could be three hours later: that is when you can withdraw from the chain.
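The sequencer's signed promise from a few exchanges back, sign (transaction, position, resulting state root) and get slashed if the chain contradicts it, can be sketched with an HMAC standing in for a real ECDSA signature. All the names and structures here are illustrative assumptions:

```python
import hashlib
import hmac

SEQUENCER_KEY = b"sequencer-signing-key"  # stand-in for the sequencer's private key

def sign_receipt(tx_hash: str, position: int, post_root: str) -> dict:
    # The promise: "I will include this tx, at this position, with this result."
    msg = f"{tx_hash}|{position}|{post_root}".encode()
    sig = hmac.new(SEQUENCER_KEY, msg, hashlib.sha256).hexdigest()
    return {"tx_hash": tx_hash, "position": position, "post_root": post_root, "sig": sig}

def promise_broken(receipt: dict, actual_position: int, actual_root: str) -> bool:
    # If the chain's actual outcome contradicts the signed receipt, the
    # receipt itself is the evidence used to slash the sequencer's bond.
    return (receipt["position"], receipt["post_root"]) != (actual_position, actual_root)

r = sign_receipt("0xabc", 7, "root-after-tx")
assert not promise_broken(r, 7, "root-after-tx")  # promise kept
assert promise_broken(r, 8, "root-after-tx")      # reordered: slashable
```

The instant guarantee is only as strong as the bond backing the signature, which is why a house-sized transfer wants a massive bond or full Ethereum finality instead.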
Starting point is 00:45:59 So this is what you use as your finality window. And so that, you know, gives us the withdrawal time limit. If I submitted a transaction, and the transaction is in a block that then gets slashed for whatever reason, so basically there was fraud in it or whatever, I assume that my transaction gets reverted, as does any other transaction in that block and any block that follows on that chain. So what happens then? Ah, this is actually what you would intuitively think, but it is not the case.
Starting point is 00:46:35 So the way that it works is, instead, there is a single list of transactions, and that ordering of transactions is final, right? It is deterministic. You can always compute it. And it is recorded in Ethereum, so it's as final as Ethereum is. And the state root is what gets reverted. The claim about what the transaction did is what gets reverted; it's not the transaction itself. So you just make a new claim about what the transaction did. And by you, I mean the sequencer, the aggregator, a random Etherscan. Is your system smart enough to tell the difference between, let's say, you know, I sent a transaction to Alice and then you sent one to Bob, and it turns out my transaction is fraudulent and yours came after mine?
Starting point is 00:47:22 Does your system, when you do the state root, automatically invalidate all the transactions that came after it? Or, because, you know, it's sort of an EVM and it's really hard to do, like, a deep analysis to figure out dependency graphs, is it actually smart enough to only invalidate the
Starting point is 00:47:55 So the intuition is that in Ethereum, if there is a, you know, transaction, right, and there's something that, you know, something bad happens, it just reverts, right? And so that is the same way that we calculate all of these, you know, the state transactions. So just like there is kind of only one true state route because Ethereum, because the EVM is deterministic, and only, you know, this transaction will always yield this result, you know, based on the previous state route. There is only one possible future that it could be. What I mean more is like, let's say, so there's this list of transactions. And I claimed it had some state route. But it turns out the state route I claimed is.
Starting point is 00:48:40 incorrect, because I improperly executed one of the transactions in that list. Are we going to now assume that all the transactions that came after that improper transaction are invalid? Are we going to still execute those? And how are we going to do that, exactly? Great. So we will not only execute all the previous transactions, we will also execute the transaction that the person claimed the invalid state root about. We will always keep the same list of transactions.
Starting point is 00:49:13 So you have pre-state, apply transaction, post-state, apply transaction. Couldn't this be problematic? Let's imagine, like, sort of a Uniswap-like thing, where the node I'm relying on, the sequencer, told me the state is one thing, and then I run a transaction because I, you know, think it's a good trade to be making. But it turns out they were lying to me, and actually the state was something else, and they tricked me into running my transaction on a state that I didn't want to run it on. Yes. And so the way that you get around this is, you have multiple people who are running the transactions, that transaction log, independently off chain. And you can ask them what happened. So you don't necessarily just listen to the sequencer.
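Because the canonical transaction ordering is recorded on Ethereum and the (O)EVM is deterministic, any of those independent nodes can re-derive the one true sequence of state roots from the log and check a claimed root against it. A toy version, reusing a hash of the state dict as a stand-in Merkle root:

```python
import hashlib

def state_root(state: dict) -> str:
    # Toy stand-in for a Merkle state root.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def apply_tx(state: dict, tx: dict) -> dict:
    # Toy transfer transaction.
    new = dict(state)
    new[tx["sender"]] -= tx["amount"]
    new[tx["recipient"]] = new.get(tx["recipient"], 0) + tx["amount"]
    return new

def derive_roots(genesis: dict, tx_log: list) -> list:
    # The ordering is final; state roots are merely claims about it.
    # Replaying the log yields the only valid sequence of roots.
    state, roots = dict(genesis), []
    for tx in tx_log:
        state = apply_tx(state, tx)
        roots.append(state_root(state))
    return roots

log = [{"sender": "a", "recipient": "b", "amount": 1},
       {"sender": "b", "recipient": "c", "amount": 1}]
roots = derive_roots({"a": 2, "b": 0, "c": 0}, log)
assert roots == derive_roots({"a": 2, "b": 0, "c": 0}, log)  # deterministic
assert "made-up-claimed-root" not in roots                   # a wrong claim is detectable
```

This is why reverting a bad state root never reverts the transactions themselves: the log stays fixed, and anyone can recompute what the roots should have been.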
Starting point is 00:50:03 You get the sequencer's word, and you're like, okay, that's a pretty solid guarantee, but maybe they've lied in the past, or maybe their bond isn't high enough for your transaction. And so you decide, okay, I want more confirmation. So you wait a little bit, and you wait for the transaction to be posted into the canonical transaction log. Once it's in the canonical transaction log, you go to Etherscan, you go to, you know, Coinbase, you go to any number of kind of crypto-economic client providers. And you say, hey, what is the state at this moment in time? And they'll all give you a return value.
Starting point is 00:50:40 And then you'll say, oh, are all of these states equal? Oh, no, this one's not; the one that I received from the sequencer is not the right state root. Let me go punish him. And that goes for anyone who gives you a state root. You can post on Reddit and say, hey, you know, I'm going to slash your reputation by saying you just gave me the wrong state. And this is the good thing about having a canonical log of transactions that's publicly viewable on Ethereum. So going back, you know, when you mentioned that Vitalik said that a roll-up chain can get maybe 100,000 transactions per second: what I'm trying to understand is why a single roll-up chain would be particularly more scalable than a single root chain, right?
Starting point is 00:51:27 You know, you're still trying to shove all the data onto the chain. I mean, it's in calldata. That's actually one thing I think we may not have covered specifically: the transactions we're putting on are in calldata rather than in state. So that's where the space savings come from. But yeah, we're still putting all the data on chain, and we still need many people running these transactions. So, unlike plasma, we can't just assume that there's, like, one person running all the transactions. And so I understand we get scalability from having many roll-up chains. But how does one roll-up chain give us particular scalability?
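As a back-of-envelope for why calldata is the cheap place to put transaction data, here is the arithmetic, with every figure an assumption (post-EIP-2028 calldata pricing and a heavily compressed transfer, circa 2020; the 100,000 TPS figure quoted earlier assumes ETH2 data shards rather than these ETH1 numbers):

```python
# All constants are assumed, circa-2020 figures for illustration.
GAS_PER_CALLDATA_BYTE = 16          # non-zero calldata byte after EIP-2028
BYTES_PER_ROLLUP_TX = 12            # heavily compressed transfer
BLOCK_GAS_LIMIT = 12_500_000        # mainnet block gas limit at the time
BLOCK_TIME_SECONDS = 13

txs_per_block = BLOCK_GAS_LIMIT // (GAS_PER_CALLDATA_BYTE * BYTES_PER_ROLLUP_TX)
tps = txs_per_block / BLOCK_TIME_SECONDS
# Storing the same bytes with SSTORE (~20,000 gas per 32-byte slot) would
# cost orders of magnitude more, which is the "space savings" being described.
assert txs_per_block == 65_104
assert 4_000 < tps < 6_000
```

Under these assumptions a single optimistic roll-up on ETH1 lands in the thousands of TPS, versus tens on plain layer one.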
Starting point is 00:51:47 we can't just assume that there's like one person running all the transactions. And so I understand we get scalability from having many roll-up chains. But how does one roll-up chain give us particular scalability? This is a great question. So the easiest answer is in ZK roll-up, right? ZK roll-up, you have one big chain and you prove all the state transitions up front. And so that just means that you know all of the state routes are valid immediately. And what you do need, though, is you need someone, you need to hope that someone is running the chain and has access to the
Starting point is 00:52:23 current state, right? Because if there are no full nodes, if there's only enough compute for one full node, then you're kind of out of luck. And that is actually the same in optimistic roll-up. So you assume that there exists an honest full node. It's this single-honest-full-node assumption. And the honest full node is defined as a full node that holds all the state, runs all the transactions, and will submit fraud proofs. And so, you know, you can skip the submitting of fraud proofs in ZK roll-up, but, you know, that has its other downsides. So why is this feasible? Why is it, you know... we could, you know, start going... I don't want to name other blockchains,
Starting point is 00:53:04 but many other blockchains solve scalability by just beefing up their nodes and saying, oh, we have more transactions per second. Like, little-known fact: Geth runs at about 4% of its capacity. So we could scale a lot just today. However, the important thing to note is that a one-honest-full-node assumption is very different from an honest-majority assumption, like an honest majority of hash rate, right? If we expected the honest majority of hash rate to also have the massive computing power of one of these full nodes, that would be infeasible, because you'd need so many computers
Starting point is 00:53:44 all over the place. But if you have a hundred, you know, a hundred full nodes and they're all pretty beefy, then you can safely say, okay, I think at least one of those is going to be honest. A hundred is just not, you know, not enough for a big proof of work chain like Ethereum. And, you know, even when it moves to proof of stake, it, you know, is going to be thousands and thousands of of validators. So you already started comparing optimistic roll-up with ZK roll-up, that the computation that you are actually bringing back on-chain, the result that you're bringing back on-chain,
Starting point is 00:54:18 is actually the correct result. So is that inherently superior to optimistic roll-up where you still have to deal with fraud-proofs and where you have this latency period where you can't be sure that things actually get executed the way that you assume? I think superiority is in the eye of the beholder. It's secure from a, I mean, it's superior from a security perspective.
Starting point is 00:54:42 It's superior from a lot of perspectives, but optimistic roll-up is easier and it's farther along. Maybe you could even say that optimistic roll-up is an MVP for ZK roll-up, proof of concept. I don't know. You could go that far. I think it's superior. So if it's an MVP for you guys, are you guys also working? on ZK roll-ups? We are not right now because ZK roll-up fundamental research is not capable of building
Starting point is 00:55:12 EVM-style smart contracts. And it's going to take a long time for that to be the case. But once it is the case, absolutely. You know, we'd love to use that research to scale Ethereum. But we need to scale Ethereum ASAP. And the security properties of optimistic roll-up are very, very good. And, like, they're essentially comparable, almost equivalent to ZK roll-up. And so because they're in the same kind of ballpark,
Starting point is 00:55:37 but optimistic roll-up is way further along from the smart contract side. That's what we're starting out with. I think there's also a question of like superiority in usability. Like, yes, it is a headache that, you know, there's this one-week waiting period to withdraw out at the chain. You can have, you know, hacks like economic fast withdrawals where you buy the time value of ETH, pay a small fee. But I think that it's also important,
Starting point is 00:56:04 that with optimistic roll-up, you keep the Solidity, the Vyper development experience. You don't change that much, like the tutorial you did, Sunny. You get to use Waffle, Truffle, you know, OpenZeppelin, The Graph, all of the things you know and love. As I understand it, I may be wrong, with ZK roll-up, you have a different programming language. You have to sort of onboard; there's a learning curve. But yeah, you don't have to understand
Starting point is 00:56:34 how ORU works to use it. So the OVM is sort of like the killer feature of the optimistic roll-up. Absolutely. So is anyone else working on an optimistic roll-up? Yeah, there are some really brilliant researchers at Fuel Labs, Nick Dodson and John Adler. Maybe Makira is also working on it. She's an amazing researcher who dropped out of university recently to go full-time into crypto.
Starting point is 00:57:02 Arbitrum is working on three scaling solutions. One of them is an optimistic roll-up-esque thing. That team is incredibly legit; it is the most stacked team in crypto: Steven Goldfeder, Harry Kalodner, Ed Felten, and recently they just hired Daniel Goldman. There are a lot of really smart people working on optimistic roll-up and its various flavors. Am I missing anyone? Oh, Interstate Network, from the former CTO of IDEX. And I haven't met them in person, so I don't know them as well. And Andy and Dylan, I think. I don't know their last names,
Starting point is 00:57:39 but you should check them out. Cool. So now that you guys have sort of done the Unipig demo, we had something running, the OVM Alpha is out: what are sort of the next steps for you guys? What's next to see optimistic roll-up out in the wild? We are testing popular smart contracts,
Starting point is 00:58:01 running them through our transpiler, just, you know, hardening the OVM, debugging, building demos. The Unipig demo was great, but there was a lot of custom code written for it. It wasn't a general-purpose solution. And so this is really the first general-purpose solution that we've created. Yeah, just, like, next steps: hardening. I think that the development cycles for infrastructure are much more similar to hardware, for example, where an MVP might take six months or a year to create,
Starting point is 00:58:36 as opposed to, like, consumer applications or financial applications, where the mockups and the POCs are much shorter. Oh yeah, we are working on a demo with the Synthetix exchange. The community there is nuts. Like, they have such an awesome community. The team is also incredible. And we're grateful that they're risk-tolerant enough to want to do a demo with the OVM Alpha.
Starting point is 00:59:06 But yeah, that should be coming out soon, TM. Cool. Awesome. Well, thank you guys both for coming on this show and educating us on the state of Optimistic Roll-up and OVM and everything. And super exciting. And I'm sure we'll have you guys on again at some point when Optimistic Roll-Up is even further along. So thank you guys so much. Thank you. Thank you. I love Epicenter. Thanks for coming on. Thank you for joining us on this week's episode. We release new episodes every week.
Starting point is 00:59:38 You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast. Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter so you get new episodes in your inbox as they're released. If you want to interact with us, guests, or other podcast listeners, you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
