Bankless - How Native Rollups Scale Ethereum | Uma Roy & Justin Drake

Episode Date: March 17, 2025

Native Rollups are the next big step toward scaling Ethereum securely. In this episode, Uma Roy (CEO of Succinct) and Justin Drake (Ethereum Foundation researcher) break down what Native Rollups are, how they leverage Ethereum's core infrastructure for execution and validation, and why they're crucial for stronger security, better composability, and sustainable Ethereum growth. Tune in to understand the evolution of Ethereum's rollup-centric roadmap and how Native Rollups could become a foundational upgrade for Ethereum's future.

------
📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium

------
BANKLESS SPONSOR TOOLS:
🪙 FRAX | SELF SUFFICIENT DeFi
https://bankless.cc/Frax
🦄 UNISWAP | SWAP ON UNICHAIN
https://bankless.cc/unichain
⚖️ ARBITRUM | SCALING ETHEREUM
https://bankless.cc/Arbitrum
🛞 MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle
🌐 CELO | BUILD TOGETHER AND PROSPER
https://bankless.cc/Celo
🏦 INFINEX | THE CRYPTO-EVERYTHING APP
https://bankless.cc/Infinex

------
✨ Mint the episode on Zora ✨
https://zora.co/coin/base:0x9ad876e913102d5097df82db0617816a17b07414?referrer=0x077Fe9e96Aa9b20Bd36F1C6290f54F8717C5674E

------
TIMESTAMPS
0:00 Intro
6:41 Why Native Rollups?
16:32 Security & Composability Benefits
38:13 ZK Proofs
42:52 Making L2s Native
50:48 Network Effects
52:35 Customizability Trade-Offs
58:59 Timelines
1:02:39 ETH Economic Impacts
1:05:46 Other Advantages
1:08:17 Succinct
1:08:59 ETHProofs.org
1:11:32 Real-Time Proving
1:12:33 Ethereum's Bottleneck
1:15:43 Closing & Disclaimers

------
RESOURCES
Uma Roy
https://x.com/pumatheuma
Justin Drake
https://x.com/drakefjustin
Native Rollups
https://ethresear.ch/t/native-rollups-superpowers-from-l1-execution/21517
Succinct
https://www.succinct.xyz/
L2Beat
https://l2beat.com/scaling/summary
ETHProofs
https://ethproofs.org/

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 I think the simple, like, one-liner is become native, get more TVL, get more users. And everyone wants that. And that's good for the roll-up. That's good for users. That's also good for Ethereum. Welcome to Bankless, where we explore the frontier of internet money and internet finance. And today on the show, we're exploring the frontier of native roll-ups on Ethereum. The big problem in the Ethereum roll-up landscape is the problem of non-homogeneity.
Starting point is 00:00:30 When Ethereum decided it was going to scale with roll-ups, it exported its scaling strategy to independent layer two teams, which all rolled their own code for a roll-up design. Now, Ethereum has scaled with many different roll-ups, but all of those roll-ups are themselves not very close to Ethereum. They all have different code, they all have different security assumptions, and they're all creating their own competing standard. This heterogeneity in Ethereum roll-ups has given rise to the meme that Ethereum Layer 2s are not Ethereum. Ethereum Layer 2s are in fact distinct from Ethereum, and a lot of problems in Ethereum are downstream of this reality. Enter native roll-ups, which is a design construction of an Ethereum roll-up
Starting point is 00:01:13 that tackles the issue of non-homogeneous roll-ups head on. It does this by baking in the security of Ethereum roll-ups directly into the layer one itself. Instead of optimism, arbitrum, base, all having fraud proofs and a security council, the idea of native roll-ups is that this is actually a service that the Ethereum Layer 1 can and maybe should provide by baking in an EVM pre-compile directly into the code of the Ethereum Layer 1. The Ethereum Layer 1 in this world would verify the validity of its native roll-ups. And roll-ups like Arbitrum Optimism Base, they can all just throw away their fraud-proof code, they can throw away their security councils, and simply use the Layer 1 pre-compile to access Ethereum-level security.
Starting point is 00:01:57 Today on the show, I have Justin Drake and Uma Roy to help me understand this conversation. It's a very technical episode, and I am a non-technical guy. So that means I help you break down the technical topics as best as I can. So I think you'll be able to keep up with Uma and Justin just fine. It was challenging, but I think I did an okay job. Let me know if I didn't. Some topics we discussed. What's a native roll-up?
Starting point is 00:02:19 How do they work? What do they fix? Why EVM chains like base Arbitrum ZK Sync all benefit much more from native roll-ups and non-EVM chains all get left behind. What does it take for an existing roll-up to become native? How are based roll-ups relevant? How does this impact the economics of ETH? What's the whole timeline on this whole thing anyways? And why, like all of Ethereum's problems, does this ultimately collapse back down to human coordination at the end of the day? So let's go ahead and get right into it. But first, a message from some of these fantastic sponsors that make this show possible.
Starting point is 00:02:49 Bankless Nation, super excited to reintroduce you to Uma Roy. She is the CEO of Succinct. Succinct is a developer platform to help make ZK proofs easy to work with. No gigabrain needed. Just enables more of the internet to become ZK proven more easily. Uma, welcome back to Bankless. Thanks for helping me. And once again, we also have Justin Drake. He is a researcher at the Ethereum Foundation. And he has been researching and advocating for newer constructions of rollups that I'll say are all kind of closer to the core of Ethereum. Maybe first with based rollups that repurpose the layer one validators to replace sequencing for roll-ups and now with native roll-ups, which repurposes the layer one EVM for execution.
Starting point is 00:03:30 I'm still a little foggy on what that means, which is why we are doing this episode. Justin, welcome back to Bankless. Hey, David. Thanks for having me. Okay, so I want to start with actually showing this tweet on screen because this tweet is exactly why we are doing this episode. This is a tweet from Uma. Uma, you tweeted out, I finally understand Justin Drake's native roll-up vision, and I'm so fucking
Starting point is 00:03:49 bullish Ethereum. native rollups solve eth value accrual and extend ethereum layer one security into the entire rollup centric roadmap and then you follow up saying more details below with a pretty long tweet which is basically turning into the agenda for for this podcast so maybe we can just like start there you got native roll up pilled maybe you can kind of set the landscape for us what's the current problems in the current roll up landscape that we have with the optimistic rollups like arbitram optimism base with a single sequencer and a governance token. Maybe we can kind of like set the context for what the problem is and what you're really excited
Starting point is 00:04:27 about with native roll-ups. Yeah, I mean, I think the roll-up-centric roadmap is definitely terminally correct. So let me start off there. I think the Solana kind of architecture of we're going to fit everything on one computer is very dumb and is not very practical. If you look at any system that's ever scaled before in the history of the internet, you have to have many different servers and you have to be horizontally scalable, which is the roll-up-centric roadmap. So I think it's good that we're all aligned that the roll-up-centric roadmap is good, but at the same time
Starting point is 00:05:00 I acknowledge that today the roll-up-centric roadmap has, you know, a bunch of problems. Certainly one of the biggest problems is interoperability between different roll-ups. So for example, you have, like, Optimism, Arbitrum, Base, the top three roll-ups by TVL, I believe, and all of them have these seven-day withdrawal windows due to their fault-proof systems, which is the optimistic fault-proof game that they have to play to prove their validity to L1. And what that means is basically, like,
Starting point is 00:05:28 if I'm a user and I have assets on one chain, it's really hard to get assets on another chain. And we've all talked about that problem a million times. That's one problem, and I think ZK will help solve that by basically, instead of having to wait seven days for this optimistic fault-proof game, you can just have a ZK-proof of validity
Starting point is 00:05:44 and then settle the roll-up in one hour, if not, you know, like five minutes. So ZK helps solve interoperability, but then there's still some other problems, which I think Justin's excited to solve with his based and native roll-up proposals. So to dive into two of these other problems, one of the problems is that today's sequencers for roll-ups are centralized. So, like, for example, they're just running a single sequencer, and that actually contributes to some of the interoperability problems,
Starting point is 00:06:14 is kind of a centralization vector, and has other challenges that I'm sure Justin's excited to talk about. And based roll-ups basically are a way of leveraging the L1 for sequencing, which can help solve some of these single-sequencer problems.
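The settlement-latency difference Uma describes can be sketched in a toy way. This is not any real bridge contract: the function names and window constant below are purely illustrative, assuming the seven-day challenge window she mentions.

```python
# Toy sketch (not real bridge code) contrasting the two settlement paths:
# an optimistic batch must wait out a challenge window, while a ZK batch
# is final as soon as its validity proof verifies.

CHALLENGE_WINDOW_SECONDS = 7 * 24 * 3600  # the seven-day fault-proof window


def can_finalize_optimistic(batch_posted_at: int, now: int) -> bool:
    """Optimistic batches finalize only after the challenge window elapses."""
    return now - batch_posted_at >= CHALLENGE_WINDOW_SECONDS


def can_finalize_zk(proof_is_valid: bool) -> bool:
    """ZK batches finalize the moment the validity proof checks out."""
    return proof_is_valid


print(can_finalize_optimistic(0, 3600))                      # False: an hour in, still locked
print(can_finalize_optimistic(0, CHALLENGE_WINDOW_SECONDS))  # True: a week later
print(can_finalize_zk(True))                                 # True: final immediately
```

The asymmetry is the whole point: the optimistic path's delay is a fixed protocol parameter, while the ZK path's delay is just proving plus verification time.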
Starting point is 00:06:28 The other problem is that right now, every single roll-up has to kind of roll their own state transition function and proof system to prove its validity back to the L-1. And that can be problematic for many reasons
Starting point is 00:06:41 from a security perspective, and then also it can mean that, you know, you have all these different standards. There's no like shared standard for all these roll-up teams to, you know, talk to each other and interoperate. And then also the most problematic part is that today all these roll-ups have security councils to basically govern the upgrade of all their proof systems. And that's not ideal because basically you have this single point of failure. And the security councils are multi-sig, so it's not, you know, just one person.
Starting point is 00:07:09 has a lot of responsibility over, you know, what is a valid roll-up, when does the roll-up upgrade, et cetera. And native roll-ups solve a lot of those problems by extending Ethereum social consensus and Ethereum-level governance and security over that very important part of a roll-up stack. It's kind of like, I guess there's a lot more we can talk about here, which is, I guess, why we have this podcast episode. But yeah, maybe that's like to try to summarize in, you know, a few sentences. Yeah, I do want to hang on and really
Starting point is 00:07:43 try and set the context for really understanding the landscape here. I think people are not, everyone's familiar with Ethereum's interoperability problems, but unpacking that even deeper than that can take some time. And I really want to set the context here. And so, Justin, I want to give you your turn to kind of maybe also kind of do the same. But I want to also bring up something that you said one of the times we had you on previously where Ethereum, this is something that you said, is like Ethereum did this genius maneuver where it outsourced scalability to the roll-ups. And what that means is, like, we outsourced research and development of scale to VC-backed teams. And so Ethereum as an ecosystem is leveraging VC involvement,
Starting point is 00:08:25 VC investment into things like Arbitrum, into Optimism, in the example of Base with Coinbase. And now we understand how to scale. And we got that knowledge by, like, just incentivizing VCs to kind of scale Ethereum, which is cool. That's, like, some free work, some free research by
Starting point is 00:09:01 the layer two teams, you know, VC-backed. But one problem that Uma highlighted was that, well, each different org made their own solution. And now there's like 17 different solutions and there's not really one standard. And since, you know, every single roll-up wants to become the best version of themselves, if you're at Base, you want everything to settle on Base. If you're Arbitrum, you want everything to settle on Arbitrum. But we lose the credible neutrality of the Ethereum layer one. And so now we have like 17 competing standards, and we're losing some of the properties that the Ethereum Layer 1 had. Even though we have the research and the knowledge, we lost some of the things that make Ethereum so magical by outsourcing this to privately backed entities.
Starting point is 00:09:26 And so that's part of the problem here. That's maybe something to add to the context of where the problem of Ethereum's roll-up-centric roadmap has run into. So maybe that's my contribution to this section. Maybe you can also add like your commentary as to like really helping to find the context of the the topic here. Yeah, absolutely. So a few points.
Starting point is 00:09:49 One, going back to the initial tweet, I do want to clarify something around value accrual. I think it's easy for people to think of me as ultrasound money man. And therefore, like all of the research downstream of this as being to increase the eF burn for native role up specifically, the proposal is to provide this new precompile called the execute pre-compile essentially for free. And the reason is that, yes, there would be a burn similar to EAP-1559, but the bottleneck, I expected to be the data availability, not the
Starting point is 00:10:26 execution. And so most of the value act rule comes from the blobs. Now, in terms of the problems that the relative roll-ups are trying, the native roll-ups are trying to solve, are around, number one, bugs. So, right, as Uma said, the roll-ups today have to emulate the EVM, and this process of emulation introduces bugs.
Starting point is 00:10:54 And we don't even expect today's L1 clients to not have bugs. We assume by default that Geth, Reth, Besu, Nethermind, and all of the other execution clients have bugs. And our strategy to mitigate the bugs is to lean on client diversity. But you can't really do that within the L2s. And so instead of emulation, what the
Starting point is 00:11:17 execute precompile provides is introspection. So it allows the L2 to peek under the hood into the L1 and have access effectively to this client diversity that the L1 is providing. And then there's a second problem, which is even more fundamental than the bugs, which is this idea of being forward compatible with EVM upgrades. So right now, you know, roughly speaking, once a year, we do a hard fork and we change the EVM state transition function. And in order to remain compliant, compatible with the EVM, you need to have some form of governance. And really what we're trying to build here is trustless roll-ups, where you don't have to trust the Security Council, you don't have to trust a governance mechanism or governance token.
Starting point is 00:12:06 And native roll-ups are all about providing a pathway to solve this. Now, David, going back to your comment around outsourcing, in 2020, we went with the roll-up-centric roadmap almost because we were forced to, right? We didn't really have the technology and the maturity to do everything at L1. And as you said, we've greatly benefited from all of the investments that came from VCs. But ultimately, I think, you know, the end game, as Uma said, is directionally pointing towards native roll-ups. And I think now it's a good time to introduce this idea and start walking towards this new direction. And as you said, ultimately the goal is for roll-ups to become one with the L1, to kind of unify with Ethereum. And they can do so at the sequencing layer with based sequencing,
Starting point is 00:13:13 but they can also do it at the execution layer with native execution. And if you're both based and native, then arguably you're kind of a shard of Ethereum. And one way to think of native roll-ups is as programmable shards, as opposed to the very rigid and non-programmable execution sharding that was suggested in 2020. Maybe I'll characterize the early stages of the roll-up-centric roadmap. The Arbitrum, Optimism, Base phase of the roll-up-centric roadmap is that the roll-up-centric roadmap allows for teams to build a layer two that has a high degree of tolerance from the Ethereum layer one.
Starting point is 00:13:56 There's not really too much rigidity that the Ethereum layer one extends to the layer two. They're very flexible. They are very customizable. they're very free from being like rigidly integrated into the Ethereum layer one. And I think that has allowed, it has incentivized teams to be able to go attract VC funding because of, well, like, Ethereum can't scale, Ethereum needs to scale. We are going to build a layer two on Ethereum and we are going to help scale Ethereum. That's been some of the incentive for VC investment into the layer two space.
Starting point is 00:14:27 And I think we also see why when we look at the sequencer fees or the chain fees that Arbitrum is getting, that Base is getting, that Optimism is getting. It's a pretty profitable business model, and that comes from the large degree of tolerance and flexibility that layer twos are allowed to have versus the Ethereum layer one. And I think as we are learning about the benefits and costs of that model, we are learning that there's not a lot of fee value capture back to the Ethereum layer one that comes from this very flexible and free roll-up model. And I think that's when we started to learn about based roll-ups. It's like, hey, if we have based roll-ups, we can get rid of the centralized sequencer, and we can get layer twos to consume more layer one resources,
Starting point is 00:15:11 including mainly sequencing from Ethereum layer one validators. And using more layer one resources is a little bit less flexible for the roll-ups, and it consumes more layer-one resources, so that's good for ETH Layer 1 value accrual. And then now we also have native roll-ups. It's like, okay, now we can also repurpose execution. And so my arc of understanding for the Ethereum roll-up-centric roadmap is we started with these very flexible, free, high-tolerance constructions of roll-ups that allowed them to do whatever they want and also keep a lot of the value that they capture, that they create. And now we're learning, okay, there's an incentive alignment issue with that between the Ethereum Layer 1 and what could be, excuse me, Ethereum Layer 2s versus
Starting point is 00:15:52 newer constructions of Layer 2s, based and native layer 2s. And the theme that I've seen out of you, Justin, lately, over your last two years, is like, okay, there are newer types of layer two constructions that are closer to the heart of Ethereum, that are closer to the economics of Ethereum, and are more, like, technically aligned with Ethereum. Do you agree with that long arc of construction of Ethereum layer twos, and what would you add to that? So maybe to your surprise, I actually disagree with this take. Oh, interesting. Okay. So I think what native roll-ups and based roll-ups are all about is increasing network effects for Ethereum. And if Ethereum is going to be extractive in the process, then it's going to be very, very difficult for the L2s to opt in to based sequencing and native sequencing. Now, if we can take each independently, for based sequencing, what you stand to lose is MEV. But I have this thesis that MEV more and more is going to be extracted upstream by applications, wallets and users. And so it won't be a permanent source of revenue for L2s.
Starting point is 00:17:09 And so it's not much of a loss for them to become based, but it's a great gain because they gain the network effects of composability. Now, if we look at the native roll-ups, what they gain is the network effects of security, also known as shared security. And that means that you can build money Legos that are trustless and that other people want to compose with. And so it kind of ties in with the network effect of composability. And going back to one of my previous comments, I expect the ultimate bottleneck to be data availability, not execution.
Starting point is 00:17:48 And so the fees that roll-ups will have to pay to access this native execution will be minuscule compared to the fees that they will have to pay to consume plain data availability. Now, you could argue that it is indeed very, very bullish for Ethereum because, you know, we're going to have a working system with high network effects. And indirectly, this is going to be very, very bullish for ether the asset, partly because it's going to lead to more demand for the fee burn, but it's also going to create more demand for ether as money and as a pristine collateral asset across the whole ecosystem. Uma, you were nodding your head yes to basically everything that I heard Justin say,
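Justin's claim that most value accrual comes from blobs can be made concrete with the blob fee mechanism itself. As a rough sketch, here is the `fake_exponential` helper from the EIP-4844 specification, which derives the blob base fee from the running excess of blob gas; the update fraction below is the original EIP-4844 constant, and later forks adjust it.

```python
# Sketch of how the blob base fee (the burn Justin attributes most value
# accrual to) is computed, following EIP-4844's spec pseudocode.
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei, the fee floor
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # original EIP-4844 value


def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator


def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee rises exponentially with sustained excess blob demand."""
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION
    )


print(blob_base_fee(0))  # 1: with no excess demand the fee sits at the floor
```

The exponential response is what makes sustained DA demand translate into burn: each additional unit of excess blob gas multiplies the fee rather than adding to it.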
Starting point is 00:18:27 anything you want to add to Justin's commentary. Yeah, I think I really agree with Justin's point of view that before we talk about Ethereum value accrual or extraction, first we need to think about Ethereum value creation, right? Like the roll-ups are customers of Ethereum the protocol. They are creating businesses that benefit by using Ethereum. So, for example, I talk about this a lot with, you know, looking at staking rewards. If you're trying to just build a chain and you want verifiability and security, one way you can do that is you can build an Alt-L-1. And if you look at Solana, they have billions of dollars of emissions given out.
Starting point is 00:19:08 Or you can have a chain that leverages Ethereum for DA and then uses ZK for verifiability. And then you also have verifiability and security in this decentralized block space, and it's actually more cost-effective for you to build your chain in that way than having to pay a bunch of validators a bunch of staking rewards. So I think, especially as crypto enters its, like, product era and, like, more pragmatic era, I think builders are going to start thinking in this more kind of like product mindset of just evaluating like, what is Ethereum giving to me, what's the value it's giving to me, and is it reasonable to pay Ethereum for the services it's providing?
Starting point is 00:19:45 So I think it's really important that, like, we as the Ethereum ecosystem think of things in that way, and really view Ethereum more as a product. And I view native roll-ups as basically giving Ethereum, the product, another very important feature that gives its customers more value, and therefore, like, customers are more willing to pay Ethereum. And eventually all of that obviously flows back to Ethereum the asset in some form and Ethereum the ecosystem and is very good.
Starting point is 00:20:14 But it's actually a very simple value prop. So to make it more concrete, like today, I think Jon Charbonneau had this really interesting statistic that only 2% of all of Ethereum is bridged to L2s, which is actually super low if you think about it. And I think the reason for that is like if you put your money on L1 and then disappeared for 10 years, probably you actually feel pretty good about the money still being there, Ethereum still being alive, it being maintained, etc. Whereas if you put your money into an L2 and you disappear for 10 years, right now there's a lot of kind of assumptions that you'd have to be comfortable with to feel good about that decision. Like you have to trust the governance council, you have to trust the security council,
Starting point is 00:20:57 you have to trust there's no bugs, you have to have a lot of trust in all these actors. And I think the goal of native roll-ups is kind of to provide this new feature, the execute pre-compile, that if roll-up teams are able to use, then they're able to build systems where their end users feel much more comfortable sticking money in a roll-up for 10 years and having the kind of same security properties as Ethereum in the roll-up form factor, which is what I think is really needed to get more and more money bridged over to these roll-ups and people transacting on these roll-ups. So, yeah, that's like how I would kind of summarize how I view native roll-ups. It's just a new product feature that Ethereum's adding that gives
Starting point is 00:21:40 roll-ups superpowers and awesome security and makes their users feel more comfortable, which enables them to have a better product. And therefore, they should be willing to pay Ethereum a little more money in return, which is, it's just a very positive-sum relationship, which is awesome.
Starting point is 00:21:56 Uma, would you say that when we have the current roll-up landscape on one side, with things like Arbitrum, Optimism, Base, and then we have native roll-ups on the other, is this a trade-off space? Or do you think that this is a strict improvement in, like, roll-up designs?
Starting point is 00:22:13 Like, are we losing something by becoming native? Or do you think that this is just fundamentally a better type of roll-up construction and eventually all roll-ups will look native? I think if you're building a type 1 EVM-compatible roll-up, like your goal is basically to build a roll-up where block space is fully Ethereum compatible and then also follows Ethereum's like hard forks and upgrades, which is the position that Arbitrum and Optimism have taken today, then becoming a native roll-up, I think, is just strictly better. You get better security properties.
Starting point is 00:22:47 You don't have to roll your own fault-proof system. And you can really have a credible claim to having L1 security, which is really awesome. But I think the beauty of the roll-up-centric roadmap is that it allows for infinite customization. So, for example, like, say you're building an SVM roll-up on Ethereum, or you're building something like MegaETH where it's not EVM-compatible. I think, like, their database and state root computation is a bit different. Then those people can't be building a native roll-up because, like, it's just not compatible with the Ethereum spec. But that's really fine. Like those people can exist and like they're just
Starting point is 00:23:23 providing a different point on the trade-off space. So I think, going forward, all options will exist. Like there'll be native roll-ups. There'll be like EVM-compatible, non-native roll-ups. There's going to be like SVM roll-ups. There's going to be really fast roll-ups. Like there'll be all types of roll-ups. Okay, I want to try and understand native roll-ups because I think that's the purpose of this episode. And I think if my understanding is correct, that really the crux of this conversation starts at the fraud-proof mechanism in a layer two, or the proof mechanism of a layer two. And in your intro, you said all these layer two teams have had to like roll their own proofs. We need to make one: each team, Arbitrum, Optimism, they're all going to make their own proofs. And Base, because
Starting point is 00:24:08 as part of the Superchain, will use the same proof as Optimism, but it's going to use that same proof, and it's on a different chain. And I think what you're proposing for native roll-ups, for how native roll-ups work, is that actually we do this thing called an EVM pre-compile, and that
Starting point is 00:24:24 is basically internalizing that role for all roll-ups on top of Ethereum that want to use that pre-compile. I don't think I'd be able to teach anyone what a pre-compile is, so maybe we can start with that.
Starting point is 00:24:43 Maybe Justin, I'll throw this one to you. Can you define a pre-compile? What does it mean to have an EVM pre-compile? And how does it work? How does that change the game for generating proofs for roll-ups? Absolutely. So just zooming out to the problem that we're trying to solve here, as you said, we have L-2s and roll-ups that have so-called state transition functions, STF.
Starting point is 00:25:03 And what that is is basically taking some sort of input, which is characterized by a pre-state root, and advancing the state to get a so-called post-state root. And really where the proof comes in is to prove that the state transition from the pre-state root to the post-state root is correct. Now, traditionally, as Uma and you said, you have to roll your own proof system. You have to do emulation that introduces bugs and all sorts of complexities. What if instead we have the L1 validators just do the checking of the state transitions
Starting point is 00:25:42 on behalf of the L2s and the roll-ups? Now, what is a pre-compile? It's basically an instruction for the EVM. So in any instruction set or in any CPU, you have instructions like add these two numbers, multiply these two numbers, push something to memory, read from memory, whatever it is. We usually call those opcodes. But then there are some opcodes that are a little special,
Starting point is 00:26:06 that are kind of more complicated, and we give them a different name: pre-compile. And it basically is just a complicated instruction. And this complicated instruction is kind of a special one because it provides introspection. Introspection is kind of this computer science term whereby a system has visibility over itself. Intro means inside.
Starting point is 00:26:31 Inspection means, like, seeing. So you can see inside yourself. And basically what we're doing is we're allowing the EVM to see within itself. And actually in 2017, Vitalik had essentially this proposal of an execute pre-compile, and it was called the EVM inside EVM pre-compile.
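Justin's description of the execute pre-compile can be sketched as follows. This is a toy model, not the real proposal (the actual design operates on EVM state roots and transaction traces; see the ethresear.ch post in the resources): the idea is simply that the L1 re-runs the state transition function itself and checks the claimed post-state root, instead of the L2 emulating the EVM and proving the emulation.

```python
import hashlib

# Toy model of the proposed EXECUTE pre-compile: the L1 checks an L2 state
# transition itself. The "state" and "STF" here are stand-ins, not real
# EVM semantics; all names are illustrative.


def toy_stf(pre_state_root: bytes, batch: bytes) -> bytes:
    """Stand-in state transition function: advance the state by one batch."""
    return hashlib.sha256(pre_state_root + batch).digest()


def execute_precompile(pre_state_root: bytes, batch: bytes,
                       claimed_post_state_root: bytes) -> bool:
    """True iff the claimed transition matches what the L1's own STF yields."""
    return toy_stf(pre_state_root, batch) == claimed_post_state_root


pre = b"\x00" * 32
batch = b"l2 transactions"

print(execute_precompile(pre, batch, toy_stf(pre, batch)))  # True: valid transition
print(execute_precompile(pre, batch, b"\x01" * 32))         # False: invalid claim
```

The "introspection" point maps onto this sketch directly: because the checking runs inside the L1's own execution clients, the L2 inherits the L1's client diversity rather than re-implementing the EVM.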
Starting point is 00:26:51 And so the EVM, okay, so that was like an early concept of what we are now calling native roll-ups. Exactly right. And just to give you a little bit of a history lesson, the context back then in 2017 was plasma. And so this pre-compile was meant to simplify the designs of plasmas.
Starting point is 00:27:08 And then kind of it got ignored and then eventually the proposal was closed. But then, yeah, we had the roll-up-centric roadmap in 2020. So it predates the roll-up-centric roadmap. But now, with the benefit of hindsight,
Starting point is 00:27:24 it makes sense to revisit this research topic and apply it not to plasmas, but to roll-ups. I want to bring up L2Beat here because L2Beat has, you know, its classic five-slice risk parameters. The five slices that go into risks around a layer two.
Starting point is 00:27:44 And I'm just looking at Arbitrum One here. And I think the slice that we're talking about here is state validation. And when we say layer twos have had to roll their own code, they've had to roll their own proofs, we are looking at that state validation part of the L2Beat, like, orange slice. And Arbitrum, they have a green slice because they have fraud proofs. And they work, and the fraud-proof system that they have built is up to a certain standard that L2Beat calls, okay, this is green, this is good, we like it, it does its job, it works, it functions,
Starting point is 00:28:15 there's not too much risk here, so we're giving it a green slice. I think what I'm hearing from you guys is that with native roll-ups, we just eliminate the slice entirely. In fact, we allow Ethereum, the Ethereum layer one, to have like this state-sponsored infrastructure that allows layer 2s to just use the EVM pre-compile to do the state validation, and now layer 2s don't need to roll their own code. And so this risk vector for layer 2s, this whole entire slice, is just eliminated.
Starting point is 00:28:42 And for any native roll-up, the state validation, like, L2Beat slice is just not relevant because the Ethereum Layer 1 provides that service. Is that correct? Yep, that's correct. And you can think of native roll-ups effectively as unlocking a new level of security for this slice, kind of a bright green fluorescent
Starting point is 00:29:00 color. And you can potentially imagine at some point L2B is introducing a new roll-up stage, maybe stage three, where you know, you have this premium level of security. Since we're on the subject, there's also a sequencer. There's a sequencer layer two slice and there's like sequencer failure and what happens if a sequencer goes down with a layer two. And if we're talking about base roll-ups, base roll-ups also eliminate the sequencer failure risk to a layer two. And so if we in theory have a native and based roll-up, does that mean that state validation and sequencer failure, those two of the five slices in the layer two-beat risk orange slice, those are just not relevant. Those are just provided by the Ethereum layer one.
Starting point is 00:29:46 Yes, that's exactly right. And actually, today, with the L2Beat criteria, you can have a green slice even with a centralized sequencer. So it's looking at different things. And so I think there really is an opportunity to have these two pizza slices be bright green and to have a stage three. And going back to the state validation, yes, there is some software and some code involved, but there's also ancillary infrastructure. There's prover networks, there's watchtowers, and in the case of fraud-proof games, there's security councils. All of that ancillary infrastructure just collapses to one line of code where you're just calling the pre-compile. And so not only is it a massive security improvement, it's a massive simplicity improvement as well.
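To make the "one line of code" point concrete, here is a toy Python sketch. It is purely illustrative: `execute_precompile` is a hash-based stand-in rather than the real EXECUTE precompile, and `settle_native_rollup` is a hypothetical name, not an actual contract API.

```python
import hashlib

def execute_precompile(pre_state_root: bytes, tx_data: bytes) -> bytes:
    # Toy stand-in: the real precompile would run the EVM state transition
    # function over the batch and return the resulting post-state root.
    return hashlib.sha256(pre_state_root + tx_data).digest()

def settle_native_rollup(pre_state_root: bytes, tx_data: bytes,
                         claimed_post_root: bytes) -> bool:
    # The whole state-validation slice (fraud-proof games, prover networks,
    # watchtowers, security councils) collapses to this single call.
    return execute_precompile(pre_state_root, tx_data) == claimed_post_root
```

A bridge contract would accept the claimed root only if this check passes; the ancillary infrastructure that L2Beat's state-validation slice audits today simply has no counterpart here.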
Starting point is 00:30:33 Can we talk about what happens when roll-ups become native? I think we touched on it a little bit, but I want to go over and trace that conversation. When all of these roll-ups are all using this enshrined EVM pre-compile, they're all using the same proof system. They're all, like, kind of on the same circuit. This is when we unlock some composability benefits. Maybe, Uma, you can take us back to that part of the conversation. And let's like dive into that one. Talk about all these native roll-ups being on the same EVM precompile. How and why does that unlock composability across layer twos?
Starting point is 00:31:08 I think the composability is actually unlocked more by the based sequencing aspect than the native roll-up aspect. If all roll-ups use this native pre-compile, you just get better security properties. It's kind of this stage three thing that Justin's talking about. And then because they're all using the same proof system, they definitionally have to have the same state transition function. So it's very like homogenous uniform block space. But then I think Justin's a better person to talk about this. Once those native rollups start also using based sequencing,
Starting point is 00:31:43 that's when they can all do interoperability. And also these two things are orthogonal. Like you can be native and based. You can be native and non-based. You can be any combination in, like, the quadrant. Yeah. Another benefit of native roll-ups around composability is the idea of making real-time proving so much easier. And the reason is that the L1 has the ability to cheat a little bit with a proposal called delayed execution, where the state roots can be delayed by one full slot.
Starting point is 00:32:19 So this is a proposed future upgrade, and what that means in the context of native roll-ups is that the validators that do all the checking of these state transition functions, they have a whole slot to receive the proofs. And that means that those that are generating the proofs have a whole slot to produce them. And as it turns out, the technology for snark proving can be relatively easily brought down to reduce the latency to under one slot, 12 seconds, but it's much, much, much harder to do the more aggressive thing, which is to produce the proofs in the exact same slot, where the latency would have to be on the order of, say, 100 milliseconds. And so this is where there's a little bit of a synergy between based and native roll-ups, where we can get this synchronous composability,
Starting point is 00:33:15 which requires real-time proving. But behind the scenes, we're cheating a little bit. It's next-slot real-time proving, not same-slot real-time proving. I think I need a little bit more help understanding that, because, so as a user, I'm guessing I'm just on a based native roll-up and things are happening very instantaneously. I'm not waiting for another slot on the Ethereum layer one, which comes in 12 seconds. I'm not waiting 12 seconds. Something is happening where it's still instantaneous to me as a user, but there's some tricks going on in the background. I could use a little bit more help kind of understanding the mechanisms here. Yeah, absolutely.
Starting point is 00:33:52 Today, if you put a transaction on chain, it has to be valid to go in, because the validation happens in real time in the same slot. With delayed execution, we actually relax the rules a little bit. So we say, block proposer, you can propose whatever block you want, and it could be complete garbage. And what will happen is that only in the next slot will the block be validated and processed, and the invalid transactions will be pruned away and will count as no-ops, so they don't change the state.
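A minimal sketch of that pruning rule, assuming a toy balance-transfer model rather than real EVM blocks:

```python
def apply_delayed_block(state: dict, block: list) -> dict:
    # Slot N: the proposer publishes the block unvalidated (it "could be
    # complete garbage"). Slot N+1: validators execute it, and any invalid
    # transaction becomes a no-op that leaves the state untouched.
    new_state = dict(state)
    for sender, recipient, amount in block:
        if new_state.get(sender, 0) >= amount:   # valid: apply the transfer
            new_state[sender] -= amount
            new_state[recipient] = new_state.get(recipient, 0) + amount
        # invalid: pruned away as a no-op
    return new_state
```

The point is that inclusion (slot N) and execution (slot N+1) are separated, which is exactly what buys provers a full slot of extra time.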
Starting point is 00:34:28 From a user's perspective, what you care about is that your valid transactions have made it on chain. They basically have been included in the DA layer of Ethereum. But from the perspective of everyone else, knowing exactly how this transaction executed, you have to wait one slot, especially if you are a light client. The way that light clients work is that they basically ask the Ethereum network, the L1 validators, what is the canonical state root? And so light clients have to wait an extra 12 seconds. But if you are a user on the wallet, nothing really changes because the wallet knows that
Starting point is 00:35:10 it's a valid transaction that went on chain. And so it therefore knows how the transaction will execute. Can we talk about how ZK proofs are involved here? I know they are because Uma's here and she's excited about it. I just don't exactly know where ZK is actually incorporated into a native roll-up as part of the tech stack. Maybe, Uma, you can take this part of the conversation. Yeah, definitely. So with the native roll-up, you're adding this execute pre-compile on L1.
Starting point is 00:35:39 So why can't we do this today? You could do this today, but the problem is, like, Ethereum has this block gas limit, right? I think it's a 15 million target, 30 million limit. And if you add this execute pre-compile, you can't actually use arbitrarily more gas, because there are reasons for the Ethereum block gas limit, which is to limit state growth, which is to make sure that, you know, at-home validators
Starting point is 00:36:05 with like a certain hardware can actually process everything, et cetera, et cetera. So without ZK, you can add this pre-compile, but basically the amount it can be used for is very minimal. And so it's not actually scaling Ethereum. With ZK, though, and the delayed state root stuff that Justin is saying, maybe to not go super into the weeds, what you can do is you can have this execute pre-compile. And then instead of having the validators,
Starting point is 00:36:33 all of the L1 validators, needing to re-execute all the transactions that go in that pre-compile, they can instead just verify a ZK proof of the correct execution of that pre-compile. Then you can have calls to this execute pre-compile that use basically arbitrarily much gas, and you just have the validators verifying a ZK proof. And so now you're actually really scaling the L1 in a meaningful way. Let me see if I understand that really quick. So we have this EVM execute precompile.
Starting point is 00:37:03 And if we just use it in like a vanilla fashion, we're actually not scaling Ethereum. We're just doing the same layer one execution, but with this extra step of being inside of the precompile, and we're not actually scaling anything. So I think what you're saying is that in order to actually leverage the benefits of this EVM pre-compile and actually scale Ethereum, we have to use ZK tech, ZK proofs, to scale, and then use the pre-compile to get the benefit and scalability of ZK and then insert it into the layer one EVM via the EVM pre-compile. Is that how that works? Yeah, exactly. Justin, what would you add to this?
Starting point is 00:37:41 Yeah, one thing I want to stress is that it's possible to have two different flavors of the execute pre-compile. The simple, naive way of doing it, as Uma said, is just to use naive re-execution. And then the advanced way of doing it is with snarks. And one of the big benefits of snarks is that you can dramatically increase the gas limit, 10x, 100x, maybe even 1,000x. Now, the reason why I want to highlight the fact that we can have the simple version is that this may allow us to do a hard fork in the relatively near future. You know, possibly, you know, if there's a lot of demand from the community, you know, end of 2026.
Starting point is 00:38:20 And the reason is that it wouldn't require any fancy technology. 2026, is that in the relatively near future? In EF time, yes. And part of the reason is that we already know what we want to do in the next fork, which will happen in a few weeks. We also know what we want to do in the fork after that. We definitely want to do PeerDAS, and then there's all sorts of things competing for the fork after that. And so, yes, Ethereum does move slow.
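The two flavors of the precompile can be put in rough gas terms. A hedged sketch: the ~200,000-gas figure for on-chain proof verification is the ballpark Justin cites later in the conversation, and the function names are illustrative, not a real spec.

```python
PROOF_VERIFY_GAS = 200_000   # rough fixed cost to verify one snark on-chain

def naive_execute_l1_cost(l2_gas_used: int) -> int:
    # Naive flavor: every L1 validator re-executes the call, so it consumes
    # the full L2 gas out of the L1 block gas limit.
    return l2_gas_used

def snarked_execute_l1_cost(l2_gas_used: int) -> int:
    # Snarked flavor: validators only check a proof, so the L1 cost is
    # roughly constant no matter how much L2 gas the call covered.
    return PROOF_VERIFY_GAS
```

That flat cost is what makes a 10x, 100x, or even 1,000x effective gas-limit increase plausible: a huge native-rollup block costs the L1 about the same to check as a tiny one.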
Starting point is 00:38:50 One of the benefits of shipping the naive execute pre-compile sooner is that it can actually benefit the optimistic roll-ups immediately. And the reason is that optimistic roll-ups only have to call the execute pre-compile in the fraud-proof game, which by default never happens. And so even if the execute precompile has a very small gas limit, say a 1 million gas limit or a 10 million gas limit, you can still have optimistic roll-ups
Starting point is 00:39:20 with unbounded gas limits. The alternative to optimistic roll-ups is pessimistic roll-ups. The reason why I introduce this new term, pessimistic roll-ups, is that, you know, ultimately the enforcement can either happen through snark proofs or through re-execution, both of which are types of pessimism
Starting point is 00:39:43 where you want to know up front that this is valid. The pessimistic roll-ups will have to wait until we dramatically increase the L1 gas limit, and that will only happen when, behind the scenes, we're using snarks. Justin, you brought up the idea of layer twos leveraging this pre-compile. Is that all it takes for them to become native? Maybe we can talk about the topic of existing layer twos: Base, Arbitrum, Optimism, the big ones.
Starting point is 00:40:11 How do we want them to become native? Uma says it's just a strict improvement. How do we actually do that? What are the incentives to become native? Like, what's the carrot? And then what's the difficulty as well? Is this like a technical challenge, or is this relatively trivial? Maybe, Justin, I'll start this one with you.
Starting point is 00:40:31 Yeah. So my definition of a native roll-up is one that uses the pre-compile to execute user transactions. Now, in order to do such a thing, you basically have to decouple the execution of user transactions, which is known as the EVM state transition function, from all the rest that it takes to launch an L2, which includes parsing the blobs, handling the sequencer, handling the governance, all sorts of system transactions that are peripheral to the user transaction processing. Now, unfortunately, today, the way that roll-ups have been designed is that they've intertwined the user state transition function with the system-level code.
Starting point is 00:41:18 And so what needs to happen is basically for these L2s to refactor that code so that they have a very clean decoupling between the layers. So there is a decent technical hurdle to overcome for the current existing roll-ups. That's what I'm hearing. Exactly right. And I also do want to stress that for some roll-ups, it's just plain impossible for them to become native. So if you have a completely different virtual machine, whether that's the SVM. Like a Solana virtual machine. Exactly. Move, Cairo, you know, Wasm, you name it. It's just impossible. And so the way that I see it is that if it is possible for you to become based, sorry, native, meaning that you are trying to be EVM equivalent, then it's kind of a no-brainer. But if you're not, then it's just impossible. And so what I see happening is kind of this bimodal distribution, where we see a very, very clean separation between the EVM-equivalent roll-ups in one bucket and then the completely
Starting point is 00:42:24 different roll-ups in another bucket. It won't really make sense to be kind of halfway in between, where you're EVM equivalent, but you have a few bells and whistles that prevent you from becoming native. Okay, so if you are an EVM layer two, the incentives, and maybe it's still like there's a decent hurdle like we just said, but the incentives to become a native roll-up are very large. And it's also possible for you to do so. And then if you're a Move-based chain, any other language, anything that's not the EVM, you just don't get the benefits of being interoperable, part of the shared composability, the shared network effects that the EVM native roll-ups do. That's exactly right. You don't get the shared security. But, you know, maybe that's okay, because
Starting point is 00:43:10 there's some virtual machines out there, for example, Cairo or Wasm, that are either extremely simple, and, you know, they can be amenable to formal verification, or are standards that are completely frozen. The core of Wasm or the core of RISC-V doesn't change. On the other hand, the EVM is a living thing, you know, that every six months or 12 months gets updated. Maybe we could just define the incentives as to why a roll-up would enjoy becoming native, especially if you're an existing ecosystem like Base and you already have huge network effects.
Starting point is 00:43:43 What's the motivation for why they would become a native roll-up? I mean, as Uma said, it's a dramatic improvement to security, and that has downstream effects in terms of TVL. So today, we have less than 10% of all assets that are on these roll-ups. Really, we should have 90%, close to 100%, of assets on these roll-ups. And I think it's correct to blame, to a very large extent, the possibility for bugs,
Starting point is 00:44:11 as well as the security councils that can just, from one day to another, completely wipe out and drain a roll-up. By the way, you know, I have experience with security councils, and they use very, very similar processes and mechanisms to the recent hack that we've seen. The Bybit hack, yes, that used the Safe infrastructure. And so I'm actually a little bit surprised that North Korea didn't target, you know, a roll-up which has $10 billion of TVL versus just Bybit, which only has $1.5 billion. That makes it very real for me. I see it as a defense mechanism. And I think part of the, like,
Starting point is 00:44:52 some of our just early ideas at Bankless is that, like, strong property rights are so much of what underpins what is successful in crypto. Like Bitcoin, and the current price of Bitcoin, and the value of Bitcoin, I think, comes from its commitment to the 21 million cap and strong property rights. And I think that we can extend that to, like, why is there only 2% of all ETH on layer twos? It's because people, I think, don't believe in the strong property right assurances that layer twos can offer. So the claim here is that the carrot for something like Base or Optimism or any,
Starting point is 00:45:23 any layer two to become native is that you have stronger property right assurances, equivalent property right assurances to what the Ethereum layer one offers. And you can take that to the Ethereum layer two. You guys are both nodding your heads, yes. Anyone else want to add any color to that? Uma, what do you want to add to that? Yeah, I think the simple, like, one-liner is: become native, get more TVL, get more users.
Starting point is 00:45:48 And everyone wants that. And that's good for the roll-up. That's good for users. That's also good for Ethereum, right? Is there a network-effects conversation here? I remember when we were having our conversation, Justin, about based roll-ups, is that when there is just one based roll-up, there's not that much incentive for one chain,
Starting point is 00:46:08 one layer two to become a based roll-up. But as soon as there is already a chain that is based, the incentive to, you know, join in the based roll-up ecosystem becomes larger, and it becomes stronger and stronger the more based roll-ups there are, because there's a composability network effect around based roll-ups.
Starting point is 00:46:27 The benefits of being a based roll-up increase as a function of based roll-up market share. Is there a similar property like that for native roll-ups, or is it mostly just about the security benefits that we just talked about? I'd say at first-order approximation, it's mostly about the security, but there is a second-order benefit, which is that as applications become more and more complicated, and they use more and more money legos,
Starting point is 00:46:51 they're going to prefer money legos that are native as opposed to non-native. Now, one good, concrete example of a money lego is ENS. And what we're seeing is the emergence of app chains like Namechain. And from my discussions with Namechain, they want to become maximally secure, maximally neutral, so that they're the best money lego possible. And, you know, they have this roadmap to become based. And they've signaled recently, just a few weeks ago on sequencing call number 17, along with many other roll-up founders,
Starting point is 00:47:27 that they're very, very much interested in native sequencing, and for them it's kind of a no-brainer. And my thesis is that we're going to see more and more app chains, and I think it will make sense for them to become based and also native. One of the conversations that I want to bring up, so I'll bring it up now, is the idea, the trade-off space, of customizability when it comes to layer twos as native roll-ups.
Starting point is 00:47:48 So if a native roll-up just uses the EVM pre-compile, and it's just repurposing the Ethereum layer one EVM, that's great, but that also kind of ties them to the mast of the EVM. And upgrading the EVM, it's possible, and it happens, like you said, Justin, it's a living thing, but it also doesn't upgrade as fast or introduce the same level of features that I think current layer 2s are really hungry for. I think the big example here is account abstraction and the war to get account abstraction features into the Ethereum layer one, which I think started in like 2017, 2018, and then was just put on the shelf for whatever reason. And then we got it back off the shelf. We dusted it off.
Starting point is 00:48:35 Tried to put some account abstraction features into the layer one in 2023, or maybe even this year. We finally did it. But meanwhile, over that like six-year-long gap, layer twos have been like, well, I'm just not going to wait for Ethereum. I'm going to just do my own native paymaster. I'm going to roll my own code. I'm going to introduce my own account abstraction features into my layer two, because the Ethereum layer one EVM is just not upgrading fast enough. So I'm going to fork away from the EVM standard to build out my own features, because I can do it faster. Now we're asking, okay, but there's benefits to all using the same EVM standard.
Starting point is 00:49:08 So we should do that. But there's a tradeoff here, because now layer twos can't really access the freedom and flexibility of creating their own, you know, alternative flavors of the EVM to add the features that they want to add. Maybe we can talk about this tradeoff space. How can layer 2s still access customizability and get the benefits of security that we talked about with native roll-ups, but not necessarily, like, tie themselves to the mast and be beholden to the very long layer 1 EVM upgrade cycles? Sure. Happy to take this.
Starting point is 00:49:43 So some of the vectors for customizability that L2s want are around tokenomics, governance, for example, or treasury management, sequencing, if they're doing something special like Unichain. But they don't necessarily want to change the EVM. And if you zoom out and you look at the top L2s, the vast majority of them are at least trying to be EVM equivalent. And every divergence is more of a technical limitation that they haven't yet been able to overcome rather than something that they fundamentally want. I mean, we've seen historically that chains like Optimism kind of rewrote the whole code base to be maximally EVM equivalent. And we're seeing the same thing now with ZKsync. They used to be merely Solidity equivalent, meaning that if you wrote a program in Solidity, you could run it on ZKsync.
Starting point is 00:50:43 But they quickly realized that that was not enough. They needed to be bytecode equivalent. And now we're seeing teams like Scroll that are EVM equivalent, but they've changed the state tree to be snark-friendly, and now they're moving back to the actual real Ethereum state tree, so as to maximize compatibility and minimize the dev friction that departing from the EVM introduces. I think what your answer is
Starting point is 00:51:17 is that the EVM as a standard is actually not where layer twos are expressing their interest to have customizability. They're doing it in other venues, other vectors that are outside of the EVM. And what you're saying is, like, there's evidence to show that any time anyone has deviated away from the EVM, they've corrected course and went back to just trying to use the EVM
Starting point is 00:51:39 one-for-one as much as possible. And the areas of customizability are, like, on their token, on things that are outside the EVM. I think that's how I'm summarizing your answer. Yep, that's exactly right. And if you do want to fundamentally innovate at the virtual machine layer, because you want to have much more throughput or parallelization, just completely change the virtual machine,
Starting point is 00:51:58 move to something like the SVM or Move or whatever it is. But then you lose the benefit of becoming native. Correct. I think there's a lot of customizations that are outside of the EVM that are interesting that maybe are worth touching on. So, for example, what Base did with Flashblocks recently and what Unichain did with Flashblocks is an example of a sequencer customization
Starting point is 00:52:21 with more sophisticated pre-confs and 200-millisecond block times, but that doesn't deviate from the EVM spec. So you can have Flashblocks but still be a native roll-up. And that's still a better UX. And so there's a lot of aspects where I think you can still improve UX meaningfully while leveraging being native. So on that conversation, you're saying, okay, we can have Unichain, and Unichain could be native
Starting point is 00:52:46 and still leverage their Flashblocks innovation that allows them to have super fast blocks, but that invalidates their ability to become based, right? Because they're doing their own sequencing, and so they can't be based, because they want to kind of like roll their own code as it comes to sequencing. This is my understanding, is that right?
Starting point is 00:53:05 Yes and no. So it turns out that it is possible to have some sort of hybrid and convergence between the two. And in some sense, pre-confirmations are a prerequisite to becoming based, because based sequencing has these very long slot times. And so things like Flashblocks are precisely the technology that are going to unlock based sequencing.
Starting point is 00:53:28 Now, for Unichain specifically, not only do they want to have these fast pre-confirmations, but they also want to be really opinionated on the ordering of the transactions, where they want to order descending by priority fee. This is something that you can do if you want, but that's going to damage the composability benefits that you get from being based,
Starting point is 00:53:52 because it limits the opportunity for super transactions across L2s that are fully determined by the sequencer, as opposed to being constrained by this priority-fee rule. I'd like to talk about timelines for all of this. I know that there's not one single native roll-up project. Maybe the EVM pre-compile actually is the project. But maybe we can talk about, just, like, the rollout plan for this.
Starting point is 00:54:19 So like, what's the timeline conversation? What's the roadmap conversation like? What's the necessary tech required? Does this require social coordination? Whose job is it to get this done in the Ethereum ecosystem? And I know not all of these questions really make sense. I know if I ask an Ethereum developer, what's the timeline, they're going to give me a very loose, nebulous,
Starting point is 00:54:40 it's-a-rough-consensus-process answer. But can we just map this forward as to, like, where we are now, which is we're talking about it on a podcast, versus where we want it to be, which is a reality baked into the Ethereum layer 1 with adoption. What does the path between A and B look like? Justin, I'll throw this one to you. So I think the major bottleneck that we have right now is that we're lacking a coordinator, a champion,
Starting point is 00:55:02 to take responsibility for this, work full-time, just like Tim Beiko did to ship EIP-1559, because there's a lot of coordination to do with the L2s, with the execution layer clients, with the researchers, and all sorts of other entities. And so if you're listening to this podcast, and you'd like to make a massive contribution
Starting point is 00:55:27 to Ethereum layer 1, well, there is a big opportunity here to make an impact. Now, assuming that we do find a coordinator to champion this, what we're going to need is this hard fork, and, you know, as was mentioned, any hard fork is going to take time, because we need to write an EIP, we have to go through All Core Devs, we have to go through devnets and testnets. And so it's, at the very, very least, you know, a 12-to-18-month process. One of the things that Ethereum L1 is trying to do is accelerate the cadence of forks. So historically, we've done forks once a year. We're trying to shift gears and do two forks a year, which is very ambitious. But if we can do so, then we can project ourselves four forks in the future, by end of 2026, and envision potentially, if there is demand for it at least, an execute pre-compile which is backed by re-execution, not by anything fancy like snarks.
Starting point is 00:56:29 And then if you want to project yourselves three years into the future, then we can envision this pre-compile basically dramatically increasing its gas limit and being backed by snarks. And the reason is that in three years' time, we should have all of the necessary ingredients to do that. We should have the delayed state roots, which were mentioned previously. We should have a very strong diversity of mature and real-time ZKVMs. And we should also hopefully have something called attester-proposer separation, which allows us to put the burden of proof on sophisticated entities called
Starting point is 00:57:19 execution proposers. There are a lot of dependencies to get to the endgame, but I think it's important to start planting seeds with this initial version of the pre-compile backed by re-execution, which, as I mentioned previously, if you're an optimistic roll-up, as opposed to a pessimistic roll-up, doesn't have any impact on your business, because you can have an unbounded L2 gas limit, even though the execute pre-compile gas limit would be very small. I want to now finally turn to what I actually think is the most fun part of this conversation. Going back to Uma's tweet that really started this all, Uma, you said, native roll-ups solve ETH value accrual. How do native roll-ups impact the economics of ether? Uma, I'll throw this one to you.
Starting point is 00:57:54 Well, we're adding a new feature to Ethereum the product, which makes it more valuable. And so, okay, solve is a very dramatic statement, but I think significantly help is fair for sure. On a more practical level, today you see there's kind of these alternative DA solutions popping up, like EigenDA or Celestia or Avail. And right now, if you're a roll-up, you roll your own proof system, you can use alt-DA, and it's all kind of the same to you and to your users, at least at first glance. With native roll-ups, you have to use Ethereum DA to use
Starting point is 00:58:47 this execute pre-compile. And so there's just much stronger kind of lock-in to using Ethereum as DA. And as Justin has been saying, he thinks DA is going to be the biggest source of value accrual in the future for Ethereum. And so having the stronger lock-in, because you're providing this, you know, new feature, I think significantly helps ETH value accrual in this world, because roll-ups are much more likely to use ETH DA. Yeah, I would agree with Uma that L2s are more likely to use Ethereum DA and therefore become roll-ups. But I do want to stress that it is possible to be what's called an optimium, which is to post data with an alternative DA provider and at the same time have this optimistic fraud-proof game.
Starting point is 00:59:37 The one thing that is not possible is to be a validium, where you do not consume Ethereum DA and at the same time you have this pessimistic validation of the state. Now, the other thing that I want to highlight is that if you want to be a pessimistic roll-up, which I think most roll-ups will want to be in the future, they don't want to have this seven-day optimistic game, there is a little bit of an overhead to DA consumption. And the reason is that not only do you have to specify the L2 transactions and blocks, but you also have to specify the state accesses. So every time you read state, you have to put that data on chain. And it's kind of information that currently doesn't go on chain. And my ballpark estimate, even though I want to do a little bit more of an empirical study,
Starting point is 01:00:34 is that this is going to yield, roughly speaking, a 20% increase in demand for Ethereum DA. At first approximation, it doesn't really change the ETH value accrual economics, but here and there at the margin, there is maybe a 20% delta. Guys, I think we've covered, I think, everything I understand or know to cover about native roll-ups. But before we actually close this episode, I kind of want to just open this up. Is there any stone that I've left unturned, any topic that is worth elevating, or do you think actually we did a pretty good job here? I do want to highlight two advantages of native roll-ups that we haven't mentioned yet. The first one, kind of maybe surprisingly, is quantum security.
Starting point is 01:01:19 So today, most of the ZK roll-ups are not post-quantam secure. And the reason is that they take proofs and then they shrink them to elliptic curve-based proofs. so that they don't pay as much L1 gas when they verify the proofs. With native roll-ups, there are no proofs that go on chain. And so there's no risk that they are quantum insecure. And so what would happen is that automatically, when the L-1 validators upgrade their setup to be post-Quantum-secure, the roll-ups automatically also become.
Starting point is 01:02:01 post-quantum secure. And so in some sense, the L2s don't have to worry about post-quantum security at all. The other benefit that I wanted to highlight is this idea of proof aggregation. In theory, we have this amazing technology, which means that if it only has to verify a single proof for all of the roll-ups, but we have a human coordination problem. We have multiple L-2s with different teams that just can't, for some reason or another, come to consensus on the exact proof system to all use because of competitive dynamics and all sorts of other dynamics. With the native roll-ups, what we can have, basically,
Starting point is 01:02:46 is aggregation of all of the proofs across all of the native rollups happening behind the scenes by the L1 validators. And so ultimately, this is an optimization which could lead to much less gas being spent by the L2s to settle. Because today, if you want to verify a proof on chain, it's going to cost you, let's say, 200,000 gas. And if there's, you know, 100 rollups, that's a lot of gas just to verify the proofs. With native rollups, that cost is going to dramatically shrink. My question to you: is Succinct relevant here at all?
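As a rough illustration of the point above. The 200,000 gas per on-chain proof verification and the 100-rollup count are the figures used in the conversation; the "one aggregated proof" simplification is a sketch, not a protocol spec:

```python
# Sketch of the settlement-gas savings from proof aggregation.
# Today each rollup verifies its own proof on L1; with native rollups,
# aggregation happens behind the scenes by the L1 validators, so the
# on-chain verification cost is roughly amortized across all rollups.
rollups = 100
gas_per_proof = 200_000                     # Justin's illustrative figure

separate_total = rollups * gas_per_proof    # each rollup verifies its own proof
aggregated_total = gas_per_proof            # roughly one proof for everyone

print(separate_total)    # 20000000
print(aggregated_total)  # 200000
```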
Starting point is 01:03:28 Or are you just native roll-up pill? Then you thought it was really cool. Well, I think succinct and ZK and the ZKVM we're building will be used to actually make the native roll-up pre-compile scale and have a limited gas. So that's kind of like the tie-in to succinct. But beyond that, I just thought it was a very cool design. I think it makes a lot of sense. And I think basically like the more activity and transactions that get put on roll-ups, the best. better it is for us because like every transaction is going to be ZK proven and like I see
Starting point is 01:03:59 Ethereum and rollups winning as Succinct winning. So that's why I have a vested interest in making sure that happens. Oh, actually, I do have one thing I wanted to mention, which is we have this awesome website, ethproofs.org, which is basically the equivalent of L2Beat for ZKVMs. And what we're trying to do is encourage the ZKVMs to reach real-time proving and to lower the cost of generating the proofs. So what this website shows is the cost in cents to generate a proof and the latency in seconds. And where we're at right now is extremely promising, because the cost to generate a proof is on the order of cents already. It's something like five or six cents to prove that a mainnet L1 EVM block is valid.
Starting point is 01:04:58 And the latencies, as you can see here, the proving times are on the order of, call it, two minutes. And two minutes is 10 slots. And so we're roughly an order of magnitude away, from a latency standpoint, from one-slot real-time proving. And the amazing news here is that this website right now only supports single-machine proving. So these proving numbers are on a single GPU. And so you can imagine, once we have support for multi-machine proving
Starting point is 01:05:31 with a cluster of GPUs, we're going to be able to bring the proving time much, much lower. And several founders of ZKVM projects have publicly stated that they believe they can get next-slot real-time proving this year, and maybe Uma can talk about this more. Yeah, we're, I think, the top ZKVM on ETHProofs in terms of latency. And as Justin was saying, right now the latency for proving an Ethereum block on a single GPU is like a few minutes, and the costs are super cheap.
Starting point is 01:06:09 And already, if we want, with SP1 and what we've built, if we distribute the proving across many GPUs, the latency can be as fast as like 20 to 30 seconds. So we're actually already very close to this 12-second, next-slot real-time proving benchmark that Justin is describing. And we definitely think we'll be able to hit that this year with algorithmic improvements, engineering improvements, and a bunch of other stuff. And so, yeah, that's something to be excited about and be on the lookout for. Can we talk about what is unlocked with real-time proving when we hit that event horizon?
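The arithmetic behind those numbers, as a sketch: the 12-second slot, the ~2-minute single-GPU proving time, and the 20-to-30-second distributed latency are from the conversation; the GPU counts and the near-linear scaling assumption are illustrative, not measured:

```python
# Why multi-machine proving closes the real-time gap: a ~2-minute
# single-GPU proof lags the chain by ~10 slots (12 s each); spreading
# the work across a GPU cluster brings latency toward one slot.
# Linear scaling across GPUs is an optimistic assumption, not a measurement.
SLOT_SECONDS = 12
single_gpu_seconds = 120                   # ~2 minutes per mainnet block

print(single_gpu_seconds / SLOT_SECONDS)   # 10.0 slots behind the chain

for gpus in (1, 4, 10):
    latency = single_gpu_seconds / gpus    # assumed linear speedup
    print(gpus, latency)                   # 4 GPUs ~30 s, 10 GPUs ~12 s
```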
Starting point is 01:06:44 When we hit that threshold, what does that unlock for us? What doors are opened? With real-time proving actually working, and having basically proofs of Ethereum blocks within the next slot, you can add this native rollup precompile and just jack up the gas limit of that, you know, very, very high. So you can basically have the effective L1 gas of the L1 plus all the native rollups at, you know, 10x or 100x the throughput of Ethereum today, which is super awesome. And, yeah, and I know Justin's timeline for native rollups being added to the L1 is late 2026.
Starting point is 01:07:20 And I think, honestly, the ZK stuff will be ready a lot sooner than that. Because we're not bottlenecked to buy all Portevs and all the Ethereum shenanigans. So, yeah, I think we could just ship the native role of Precampal and ZK, you know, hopefully by 2026 for sure. Justin, this is a whole different arena for a question that I'm going to ask you. But I just want to get your perspective on it. It seems to be that all of Ethereum's problems all collapse back it down to there's some social coordination issues with the all core dev calls and shipping upgrades. And I think maybe that's the motivation behind the faster cadences for hard forks, which I think can encourage some faster coordination and allow room for faster coordination. But nonetheless, it's a human, like, why is Ethereum in the place that it is?
Starting point is 01:08:11 Well, the answer is, it's like, well, Ethereum is a decentralized ecosystem. them. We've done the research. That's in the rearview mirror. We've done, we can do the engineering. That's not the hard part. It's actually social coordination. And that's always seemingly the bottleneck for Ethereum making progress. Maybe you can just like kind of comment on that analysis and where do you think Ethereum goes with tackling that problem? Absolutely. I 100% agree with you that coordination and champions are the number one problem. And actually recently, a different foundation research team just hired two new coordinators, Ladis Laos and Will. And already, they've only joined for a few weeks, but
Starting point is 01:08:49 already we're starting to see a different kind of pace once we have that kind of support. Now, it's all well and good, you know, to propose these big ideas like, you know, base sequencing, native execution and the beam chain. But in order to actually deliver these things to main net, we're bottlenecked on coordination. And, you know, I can only do so much. And so what I've chosen to focus on is the beam chain and I'm really hoping to delegate both the base sequencing and the native execution to other champions. Now, going back to your questions around governance, what we're trying to do is a couple of things. The first one, as I mentioned, is to try and accelerate the pace of the forks so that if there's small things that we want to
Starting point is 01:09:43 to push, they can be pushed with lower latency. But at the same time, there's this other effort, the beam chain, which is also about tackling the governance problem, but with a completely different strategy. What we're trying to do with the beam chain is basically only focus on research and development and technology for a period of years, and then only at the very end enter the game of the very expensive social layer, where you have to go through the all core devs, the devnets, and the testnets.
Starting point is 01:10:17 And so you can think of it as being a batching optimization at the governance layer. And not only that, it allows us to make bold and ambitious upgrades as opposed to targeting the small and medium upgrades that are currently being shipped with oracle deaths. And so if you zoom out, we're kind of shifting gears and having these two parallel strategies. And so if you're willing to zoom out and look at the bigger multi-year picture, I think it's very easy to have high conviction on Ethereum. Justin, Uma, thank you so much for joining me today on the show
Starting point is 01:10:53 and helping me go through native rollups and everything that they can do for the Ethereum ecosystem. Really appreciate it, guys. Yeah, thanks for having us. Thank you. Bankless Nation, you guys know the deal. Crypto is risky. You can lose what you put in. But nonetheless, we are headed into the frontier.
Starting point is 01:11:06 It's not for everyone, but we are glad you are with us on the bankless journey. Thanks a lot.