Bankless - Dencun 101: Tim Beiko Explains Ethereum’s Upgrade and Beyond

Episode Date: March 13, 2024

Ethereum's next hard fork is here and we're joined by Tim Beiko from the Ethereum Foundation to walk us through and explain what it all means. He shares a comprehensive guide on each and every EIP along with what comes next, Dencun, Pectra, and beyond. If you want to be up to date with the state of all things Ethereum this is the episode for you. ------ 🏹 USE PODCAST24 FOR 10% OFF https://bankless.cc/Citizen2024  ------ 🎧 Listen On Your Favorite Podcast Player:  https://bankless.cc/Podcast  ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2   ⁠  🦄UNISWAP | SWAP SMARTER https://bankless.cc/uniswap  🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo     🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/toku     🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle     ⚖️ARBITRUM | SCALING ETHEREUM ⁠https://bankless.cc/Arbitrum    ------ TIMESTAMPS 00:00 Start 04:44 Naming "Dencun" 08:40 Overview on Blobs 11:31 EIP-7514 (Epoch Churn Limit) 14:42 EIP-6780 (Self Destruct) 19:27 EIP-7044 (Signed Voluntary Exits) 20:49 EIP-7045 (Max Attestation Inclusion) 21:49 EIP-4788 (Beacon Block Root) 25:31 Opcode Changes 29:58 What's Next 37:53 Future EIPs 41:37 Max Effective Balance 45:22 Inclusion Lists 50:40 Additional Noteworthy EIPs ------ RESOURCES Tim: https://twitter.com/TimBeiko  ETH Magicians: https://ethereum-magicians.org/u/timbeiko/summary  ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 The way I've been explaining this is like we are getting Stone Age blobs with Dencun 4844, but then the blobs can evolve into like sci-fi blobs independently of any sort of hard fork. And so the blobs are going to be able, is that true? Welcome to Bankless where we explore the frontiers of Ethereum. Today on the show I brought on Tim Beiko. It is hard fork day. It's either today or tomorrow, depending on when we release this episode. Dencun is here.
Starting point is 00:00:30 The blobs have arrived. We unpack what it all means. We go beyond the blobs in this episode with Tim Beiko, however, because there are other EIPs after EIP 4844 that are going into Dencun. And then there are also some EIPs that did not make it into Dencun that will come in the next hard fork, which is Pectra. And then after Pectra, there are going to be further EIPs. And so Tim Beiko on the episode today walks us through all the EIPs that are coming now, coming soon, and coming later. So if you want to really dive into the 300-level, 400-level, decently technical future of Ethereum, this episode is for you. Emphasis on this being a pretty technical, nerdy episode.
Starting point is 00:01:08 Really for the people who are like, you know, listeners of the Daily Gwei, the Ethereum core devs, you know, people who are like, I want to explore the bottom depths of the Ethereum rabbit hole. This episode is for you. Bankless Nation, we're going to get to our conversation with Tim Beiko right now. But first, I'm going to talk about some of these fantastic sponsors that make this show possible. If you want a crypto trading experience backed by world-class security and award-winning support teams, then head over to Kraken, one of the longest-standing and most secure crypto platforms in the world. Kraken is on a journey to build a more accessible, inclusive, and fair financial system, making it simple and secure for everyone, everywhere, to trade crypto.
Starting point is 00:01:43 Kraken's intuitive trading tools are designed to grow with you, empowering you to make your first or your hundredth trade in just a few clicks. And there's an award-winning client support team available 24/7 to help you along the way, along with a whole range of educational guides, articles, and videos. With products and features like Kraken Pro and Kraken NFT Marketplace and a seamless app to bring it all together, it's really the perfect place to get your complete crypto experience. So check out the simple, secure, and powerful way for everyone to trade crypto, whether you're a complete beginner or a seasoned pro. Go to kraken.com slash bankless to see what crypto can be.
Starting point is 00:02:17 Not investment advice, crypto trading involves risk of loss. Selo is the mobile-first EVM-compatible carbon-negative blockchain built for the real world. Driving real-world use cases like mobile payments and mobile defy and with Opera MiniPay as one of the fastest growing Web3 wallets, Sello is seeing a meteoric rise with over 300 million transactions and 1.5 million monthly active addresses. And now Sello is looking to come home to Ethereum as a layer two. Optimism, Polygon, Matter Labs, and Arbitrum have all thrown their hats in the ring for the cello layer two to build upon their stacks. Why the competition? The Sello Layer 2 will bring huge advantages like a decentralized sequencer, off-chain data availability,
Starting point is 00:02:53 secured by Ethereum validators and one block finality. What does that all mean for you? With SELO layer 2, gas fees will stay low and you can even pay for gas natively using ERC20 tokens, sending crypto to phone numbers across wallets using Social Connect. But SELO is a community governed protocol. This means that SELO needs you to weigh in and make your voice heard. Join the conversation in the SELO forums.
Starting point is 00:03:13 Follow SELO on Twitter and visit sello.org to shape the future of Ethereum. Are you launching a token? Is it already live? How are you managing the legal and tax obligations for providing token grants to your token? team. It's no secret that token management gets complicated. Between learning all the legal language and tax obligations in every country that your team is in, token grant management can feel like an obstacle course. But it doesn't have to. That's where Toku steps in. Toku provides practical
Starting point is 00:03:37 tools to handle token grants, allowing for effective oversight of token distributions and payroll tax compliance for employees, contractors, advisors, and investors. They also handle tax withholdings through their real-time tax calculations that can be done by Toku or integrated into any payroll E-O-R providers in any jurisdiction. Toku is a trusted provider of protocol labs, D-Y-D-D-X Foundation, Mina Protocol, and many more. Get started for free and make token compensation simple
Starting point is 00:04:02 at Toku.com slash bankless. Bankless Nation, I'm here with Tim Beko, the coordinator at the Ethereum Foundation leads the all-core dev calls. Tim, how you doing? Good. Thanks for having me. Yeah, exciting times in Ethereum. We are the week before the hard fork.
Starting point is 00:04:16 We'll actually air this either the day before or the day of the hard fork. So happy hard fork day for everyone listening. Tim, I want to go through Dencun and the high-level EIPs. We'll just run through all of them just to really unpack what's going on in Dencun. And then we'll talk about what didn't make it into Dencun, which means like, what are the EIPs still in the All Core Devs mempool that we know we want eventually included, that just didn't make it into this particular hard fork.
Starting point is 00:04:43 Sounds good? Yeah. Cool. But before we do that, can we actually talk about the whole naming convention thing? Dengun is a weird word. I don't think it's a word. The next one after this is Pectra or Electra maybe. Can we talk about how the naming convention works on these hard forks?
Starting point is 00:04:59 Like, what does Denkun mean? How did it come to be? Yeah, yeah, that's a good place to start. So on Ethereum, you know, before the merge, we had both the proof of stake chain and the proof of work chain sort of updating independently. And both of those sides had like a different naming scheme, you know, to the name upgrades. So on the proof of work side, we started. We had, you know, like frontier, homestead, metropolis.
Starting point is 00:05:24 And then for a couple of years, we just used variations of Istanbul's city names. So Byzantium, Constantinople, and Istanbul. And then we sort of ran out. So we started using DefConn City names. And so then we had the Berlin Hard Fork, or Berlin Hard Fork, London, just following the DevCon city orders. And so now we're up to Cancun just following DevCon city names. And we had a little, actually, exception there. We called the Merge Paris as a shout-out to ETCC, which was the first big community-hosted conference.
Starting point is 00:05:58 So we use those to specify the names for the execution layer upgrades. On the consensus layer, though, they liked using the stars instead. So they use the name of stars and they go alphabetically. So the first one was called Altair, then it was Bellatrix and then Capella and now Deneb. But in practice, if you're just like an Ethereum user, the execution and consensus layer upgrade happens at the same time. There's like different code on both sides. So we need like different name to refer to like the actual changes to the proof of
Starting point is 00:06:27 stake side of thing, the actual changes to the EVM. So we use like Cancun and the Neb to refer to those, you know, sub changes. But because the upgrade all activates at the same time, we decide to match those names together. So we have Denkoon, which is a mash of the Neb and Cancun. And then the previous four, Chappella was a match of Shanghai and a Pella. So yeah, we've been using these. And in the future, if we had a fork that was like just one side of the chain, then we
Starting point is 00:06:57 probably just use like a city name or a star name. But when they happen together, yeah, we couple the names. Okay. So it is actually possible to fork the consensus layer and not the execution layer? Yeah. And in practice, the fork basically happen independently. So we tell the consensus layer at this epoch, you know, fork, we tell the execution layer at this timestamp fork and the timestamp just happens to be the same as the epoch start time.
Starting point is 00:07:26 But there's no, there's no, you know, like reason for them to like have to fork and lockstep. And we want to preserve this because imagine there's just like a bug in the EVM that we find tomorrow. We can just ship a hard fork on the EVM and the consensus layer doesn't need to know about it. And vice versa. Yeah. Okay. But the people who are running the CL and. in the EL, the consensus layer and the execution layer are the same people for the most part.
Starting point is 00:07:53 in the EL, the consensus layer and the execution layer, are the same people for the most part. Yes. Yeah, yeah, correct. So like if you run a validator, you run a node, like you have to run both. And in this hard fork, you need to upgrade both. But there's nothing that forces both to have to be upgraded. So if say we had an emergency hard fork on the consensus layer, we could have an update just saying like, look, upgrade your consensus layer node, your execution layer node is fine. Beautiful. Okay. All right. So that gets the naming out of the way. Now, because we have the CL and the EL combined, we have earthly cities and astral stars being portmanteaued into the same word. And so that's how we came up with Dencun. And then Pectra is the next one. Yes. So Prague and Electra have been merged into Pectra. Yeah. Okay, beautiful. All right. So Dencun is known as the blob hard fork, the hard fork where we get the blobs: EIP 4844. It's the cool one that gives Ethereum its own enshrined data availability layer. We've talked about blobs on Bankless. We've had an entire episode dedicated to blobs. Listen to our episode with Domithi about blobs if you really want to dive into blobs specifically. I don't think we really need to dive into that one because we've covered it so extensively before. But Tim, like, since we're so close to the hard fork day, like what do you want to elevate about
Starting point is 00:09:10 Blobs. People are like speculating about like gas fee reductions on layer two's, but I know we don't really know. Is there anything before we move beyond blobs that you want to talk about? Yeah, we did talk about them. I think the one thing I'll say is what's neat about this upgrade is it sort of sets the stage for full dank charting after. So you can think of this upgrade as like almost the front end to blobs where we we have all the new sort of infrastructure and scaffolding that the network needs to use them. So we have like a new transaction types, which L2s can like migrate to to use it. And then in the future, when we have more scaling capabilities with stuff like full
Starting point is 00:09:51 danksharding, we can just do that in the background. And L2s will not have to like upgrade. There'll just be like more blob space that they can access on the network. So I think that's like a really cool thing where, you know, all the L2s have to like do work to support this now, and they've all like done it and are in like the final stages of testing, but hopefully they never have to do anything again to support blobs and there just ends up being more blob space coming online. So I think that's something that's not discussed enough where, yeah, we really set like the entire architecture now and we can just keep expanding in the background. Right.
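The new blob transaction type comes with its own EIP-1559-style fee market that runs alongside regular gas. A sketch of the pricing rule from EIP-4844 (constants are from the EIP; this is an illustration, not client code):

```python
# Sketch of EIP-4844's blob base fee rule.
MIN_BLOB_BASE_FEE = 1                      # wei: the price floor
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477    # tunes how fast the fee can move per block

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator),
    via a Taylor series, as defined in the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    # The fee rises exponentially with how far blob usage has run above target,
    # and decays back toward the 1 wei floor when blocks are under target.
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

Because the floor is 1 wei, blob space starts out effectively free until sustained demand pushes the exponential up, which is why the fee reductions for L2s right after the fork are expected to be dramatic.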
Starting point is 00:10:28 The way I've been explaining this is like we are getting Stone Age blobs with Dencun 4844, but then the blobs can evolve into like sci-fi blobs independently of any sort of hard fork. And so the blobs are going to be able, is that true? Depends how you approach it. You might need a hard fork, but it's more that even if there is a hard fork, the L2s won't have to do anything. So for now, they post their data in calldata, so they have to switch the type of transactions that they make.
Starting point is 00:11:00 But in the future, when we have more blob space, the blob transactions of like next week will still be valid and will like integrate with all that automatically. Right. Okay. So the future call data, which we call blobs, is just going to be able to fit more stuff, and the layer twos won't really care about, like,
Starting point is 00:11:18 how big they are. They will be able to leverage them all the same. Correct. Yeah. Okay. All right. So that's blobs. We'll actually talk about PeerDAS,
Starting point is 00:11:26 which didn't make it into Dencun. We'll talk about that towards the end of the show, but this is like one of these, like, blob evolutions. Add max epoch churn limit. That's EIP 7514. That's also coming into Dencun. What should we know about that? Yes. So right now on the beacon chain, when validators come in, the amount of new validators
Starting point is 00:11:51 that can start validating per epoch is a function of how many validators are already there. So as more and more validators come in, we accept more and more each epoch. And the reason, you know, for that change originally was just you don't want to like overwhelm the validator set with new validators in case of like an attack or something. But as you have a bigger and bigger validator set, you can sort of bring more people in quicker. Just to state that really simply, the rate of change of the validator set is a function of the total size, total supply of validators. Correct, yeah. And this effectively since the launch of the beacon chain has been like up only, because there's been more and more validators, and we've therefore accelerated the rate at which we add new validators.
Starting point is 00:12:45 Recently, I don't know, in the past year or so, there's been a lot of like discussion and debate around, you know, should there be a max amount of validators on the network? And even at the current rates of growth, is the amount of validators going to end up being too much in terms of like the amount of messages that they gossip on the network and sort of the requirements that that implies in terms of bandwidth? So the idea with this proposal is that instead of making the number of new validators each epoch a function of how many validators are there, we simply cap it at eight per epoch. So we have that as a ceiling. And this basically slows the rate of growth. And one, it lets client teams deal with like the incoming
Starting point is 00:13:33 increase in validator set size so that they can have a bunch of performance improvements and whatnot to deal with that. And two, it sort of lets the entire community have a discussion around like, okay, what's like the right share of validators that we'd want to have on the network, and ideally not end up having that discussion in a world where like, I don't know, say we decided we never want more than 50% of ETH to be staked for whatever reason, but then there's already two-thirds of the ETH staked.
Starting point is 00:14:07 That's kind of an awkward place to be in. So this just gives us a bit more breathing room to figure out the right path forward. Okay. So there is going to be a cap on the rate of change. And when is that cap hit? So we're already, I don't know if we're past it already, but we will be at, like, at the hard fork, the cap will be eight, basically.
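In code, the change is tiny. A sketch of the churn-limit math using the consensus-spec constants (note EIP-7514 caps only activations; exits still use the older, uncapped rule):

```python
# Sketch of the beacon chain churn limit with EIP-7514's activation cap.
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536
MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT = 8   # the new ceiling from EIP-7514

def churn_limit(active_validators: int) -> int:
    # Pre-7514 rule: the limit grows with the size of the validator set.
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validators // CHURN_LIMIT_QUOTIENT)

def activation_churn_limit(active_validators: int) -> int:
    # EIP-7514: same rule, but new activations are capped at 8 per epoch.
    return min(MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT,
               churn_limit(active_validators))
```

With roughly a million validators, the uncapped rule would admit 15 per epoch, so the cap of 8 is already binding at the fork, which matches "the cap will be eight, basically."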
Starting point is 00:14:29 Okay. Okay. So eight validators per epoch, eight times 32 ETH each per epoch, is the constraint. And the idea is that we just want to buy ourselves some time before Ethereum discovers its long-term equilibrium with ETH staked. Yeah, correct. Cool. Okay, awesome. Self-destruct only in the same transaction, EIP 6780. What does this do? Self-destruct has always been one of these like weird words that I see every now and then. People have like strong feelings about it. What's self-destruct, and what is this EIP doing?
Starting point is 00:14:56 what is this EIP doing? Yes, so self-destruct is pretty straightforward. It's how you can delete the contract from Ethereum. So you have to specify this in advance, but, like if you want your contract to destruct, there's this op code which will delete it. And the rationale for this op code was that if we want people to clean up after themselves, it's nice to give them a mechanism to do it. In practice, though, it hasn't been used much for that.
Starting point is 00:15:26 It's been used in a bunch of really weird funky ways, but it's not something that's like keeps Ethereum state size, you know, smaller than it should. be. So, so like, yeah, the value of the op code is, is not like what sort of we expected in terms of the protocol. And the biggest problem with self-destruct is it's the only op code where when you call it, you don't know how much computation you're going to run. So if you call self-destruct on like a tiny contract, self-destruct gets the address passed in as a parameter. And then it sort of finds this address and deletes everything that's there. So if it's a small contract, it's super quick.
Starting point is 00:16:07 But if it's like a huge contract, imagine you called it on like some massive NFT contract with like millions of balances, then it's like in that same amount of time, you now have to delete all of this. And maybe an analogy is, you know, on your computer, when you move a file to the trash, it's sort of instant. But if you delete like a thousand pictures, you know, it says like moving them to the trash and it's like loading. So self-destruct has like that behavior. And that's bad for Ethereum because we want like the gas that we
Starting point is 00:16:47 charge for opcodes to reflect the cost of computation. And for this one, we sort of can't. And now it's kind of bad, but not the end of the world, because of how Ethereum is structured. But we have this long-term goal to move towards stateless clients. And this requires changing how we store data on Ethereum. And by virtue of like how we change that, self-destruct becomes like insanely long to run every single time. We just can't like afford that. Yeah. So basically removing it is the first step towards that. And so what this EIP does is when you call self-destruct, it will not destroy the contract anymore, but it'll still send the money back to like the address you specify. So this is the other thing that self-destruct does. If you delete a contract, you get to say where to send the money.
Starting point is 00:17:26 Obviously, we don't want anybody's funds to be trapped. So when you call self-destruct after the fork, all it does is transfer the ETH in that contract, but it doesn't actually destroy the contract. That said, there are people who use self-destruct in a bunch of really weird ways. And one of those edge cases is sometimes people will, like, create a contract and destroy it in the same transaction. Sometimes they do that to burn some ETH. I believe that's how Optimism's withdrawal bridge works. It'll like create a contract with some ETH in it and then self-destruct it and send the ETH to address zero or something.
Starting point is 00:18:02 And that's like their weird trick to make it work. And those cases we can still support, because you don't actually have to write this contract to disk, because it's all in the same transaction that it's happening. So this is why the EIP has like a super long name. It's like, okay, we managed to get the functionality that we want to be like deactivated, but then we get to preserve this edge case, which was relied upon in a lot of like critical flows. So no contracts break. Yeah. Interesting.
Starting point is 00:18:29 Sometimes when I talk to people like you about this, I once again realize how deep some of these like EIP rabbit holes go. And we've been trying to do this for a year. So it's nice to actually see it ship and to know that's like, okay, we found like the sort of edge cases that we can handle and, right, not break things. And this is the final equilibrium of self-destruct.
Starting point is 00:18:50 This is the last EIP that self-destruct will be relevant or? Yeah, I think so. I mean, it's hard to predict the future, but I think it does everything we want at this point. Okay, cool. So we're confidently putting it on the shelf. Maybe we pull it off, hopefully not. Yeah, yeah, like, yeah, yeah.
Starting point is 00:19:06 So, but we always have to preserve this property of, like, sending people's money back. Because imagine you had a contract that you thought you were going to self-destruct in a year and there's tons of ETH in it; like, we can never stop you from getting that ETH back. So, like, that functionality will always stay there. Okay. So this is minimum viable self-destruct. Yeah. Yeah. Yeah.
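The post-fork behavior described above can be boiled down to a toy state-transition rule (the names and the dict-based state layout here are illustrative, not client code):

```python
# Toy sketch of EIP-6780 SELFDESTRUCT semantics.
def run_selfdestruct(state: dict, contract: str, beneficiary: str,
                     created_in_this_tx: set) -> None:
    # The balance transfer always happens, so funds are never trapped.
    state[beneficiary]["balance"] += state[contract]["balance"]
    state[contract]["balance"] = 0
    # Code and storage are only deleted if the contract was created
    # earlier in the *same* transaction -- the preserved edge case that
    # keeps create-then-destroy tricks (like ETH burns) working.
    if contract in created_in_this_tx:
        del state[contract]
```

For a pre-existing contract the call is now just an ETH sweep; for a contract created in the same transaction it still fully disappears, which is cheap because it was never written to disk.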
Starting point is 00:19:26 Okay. Cool. All right. Moving on. Perpetually valid signed voluntary exits. I'll say that again. Perpetually valid signed voluntary exit. It's EIP 7044. What's this?
Starting point is 00:19:37 Yes. So on the beacon chain, you can pre-sign an exit message. And up to now, I don't know why I wasn't involved in that design, but like these messages were only valid for like the next two hard forks, I think. And this is and I guess, you know, again, I'm not sure why they made that decision at first. But in practice, it turns out like people use these messages today to build like trustless staking constructs where imagine I'm like a staking operator for you and you want to always be able to withdraw your funds. What I can do is I can sign an exit message with your validator, give it to you. And then if I start doing something badly, you can just go and publish that message and your validator gets exited.
Starting point is 00:20:21 And you don't need me to like do that for you. The problem with the current messages is they only last two hard forks. So you need to like trust that I sort of give you a new one every hard fork or something so it doesn't expire. So this EIP just makes a small change to make those messages valid forever. And it can allow you to have like a one-time signed message where, okay, you receive this, you know you can always exit your validator, even if you're not the one operating it. Okay.
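A sketch of why pre-signed exits used to expire and how EIP-7044 fixes it. The core change is which fork version goes into the signing domain; the fork-version value here is the mainnet Capella one, and the function shape is an illustrative simplification, not client code:

```python
# Sketch of EIP-7044: voluntary exits are pinned to Capella's fork version.
CAPELLA_FORK_VERSION = "0x03000000"  # mainnet value

def exit_signing_fork_version(current_fork_version: str,
                              after_deneb: bool) -> str:
    if after_deneb:
        # EIP-7044: exit signatures are always verified against Capella's
        # fork version, so a message signed once stays valid through every
        # future fork -- ideal for handing to a delegator up front.
        return CAPELLA_FORK_VERSION
    # Before: the signing domain tracked the current fork, so an old
    # pre-signed exit stopped verifying after a couple of upgrades.
    return current_fork_version
```

This is what makes the "operator hands the delegator a one-time exit message" construction trustless: the message can no longer silently expire.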
Starting point is 00:20:46 Seems pretty straightforward. Yeah. Yeah. All right. EIP 7045, increase max attestation inclusion slot. Oh boy. This is like the most technical beacon chain EIP. You know, we've had Danny Ryan on the show.
Starting point is 00:21:02 He can tell you all about it. But TLDR is it just increases effectively the max distance that you can include an attestation for in your block. There is some security rationale for this, which again, proof of stake researchers can get into. As a user, as a validator, this all just happens automatically. It just changes how far back you go when you're looking at including attestations. So if you're an attester, you just have more time. You have more space to attest?
Starting point is 00:21:32 Yeah, correct. No, sorry. So if you're collecting attestations from other people, you can collect them from farther back. So it means that the like first attestation of an epoch basically gets like twice as long of a valid period it can be included for. Yeah. Okay. All right.
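The "twice as long for the first attestation of an epoch" point is easiest to see in code. A sketch of the old and new inclusion-window predicates (SLOTS_PER_EPOCH is the spec constant; the two functions are illustrative simplifications of the spec's checks):

```python
# Sketch of EIP-7045's attestation inclusion window change.
SLOTS_PER_EPOCH = 32

def epoch(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def includable_old(att_slot: int, block_slot: int) -> bool:
    # Pre-Deneb: a rolling window of one epoch's worth of slots,
    # counted from the attestation's own slot.
    return att_slot < block_slot <= att_slot + SLOTS_PER_EPOCH

def includable_new(att_slot: int, block_slot: int) -> bool:
    # EIP-7045: includable for the rest of its epoch plus all of the
    # next epoch, regardless of where in the epoch it was made.
    return att_slot < block_slot and epoch(block_slot) <= epoch(att_slot) + 1
```

An attestation from slot 0 used to expire after slot 32; now it can be included through slot 63, the end of the following epoch.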
Starting point is 00:21:51 This one's pretty cool. EIP 4788, beacon block root in the EVM. So beacon block root. That means we're talking about the beacon chain; the root of that block is going into the Ethereum virtual machine. So this one is spanning the consensus and the execution layer, correct? Correct. What's going on over here?
Starting point is 00:22:08 Yeah. Yeah. So today, ever since the merge, both chains run alongside each other. But there's no ways to make proofs about the state of the beacon chain on the execution layer. So in a smart contract without an external oracle. So if you want to say like, you know, this was the state of the beacon chain at like slot X, you have to rely on an external oracle, which is not great. So what this does is it creates a new contract on Ethereum where every block we just store
Starting point is 00:22:36 the latest block hash for the beacon chain blocks and have the header there so that you can trustlessly make proofs against it. So this is really helpful for like a bunch of, say, staking pools that want to make proofs about like the current state of the beacon chain and whatnot and not have to rely on an external oracle. So it just makes, yeah, a bunch of like, yeah, constructions more trustless. Yeah, so this makes, this is actually, I think, one of the, one of the big, I'm going to make this the second biggest EIP inside of this hard fork after 4844, mainly just because like it connects DFI, Ethereum's DFI application layer to the Ethereum consensus proof of stake staking layer. Yeah.
Starting point is 00:23:15 And so, you said like staking pools, like this affects the Rocket Pool oDAO. There's this small DAO inside of the Rocket Pool system that connects basically this information, the state of the Ethereum consensus layer and the ETH in the staking contract, to Rocket Pool. This is also helpful for EigenLayer and AVSs, because they need to know the state of ETH on the beacon chain, so that when an AVS makes a slashing event and the slashing event occurs, other AVSs can know about it. Yeah. Yeah, basically it removes any sort of oracle risk around that information, whereas now all of those projects have to trust some oracle to tell them what's up on the beacon chain. Yeah.
Starting point is 00:24:00 Yeah, Vatelik actually put out a recent tweet about how he was like having a change, not like a whole hearted like 180 pivot change of mind, but like a change of mind about like complexity at the layer one where he's saying like, oh, I used to be like a layer one complexity complexity, minimalist. And now I've become like a little bit more into the idea of complexity inside of the layer one because it reduces complexity at the layer two. And I see this as an example of it. What we are doing, what the Ethereum layer one is doing here is enshrining an Oracle.
Starting point is 00:24:29 It's producing an Oracle inside of itself. Like, it's taking on more roles so that less roles can happen at the execution layer. Yeah, yeah. And, yeah, 100%. And I think this one is nice because it's not very complex because it already, the L1 already has this information. it's just like on the wrong side of the chain. And then we every block, the beacon chain already passes information to the execution layer.
Starting point is 00:24:56 Like every time you get a new block, your validator asks your execution engine, like, is this a valid block? And so they're already talking every block. So it's just like adding one more bit of data and then having the execution layer, like save a copy. And then, yeah, that has a huge impact in terms of like creating like a more trustless source of information on chain. Yeah, really, I just like the idea of just like a stronger bridge between the state of the consensus layer and the state of the execution layer. Yeah, yeah. Beautiful.
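Concretely, what gets deployed is a tiny system contract holding a ring buffer of parent beacon block roots, keyed by timestamp. A sketch of its logic (buffer length from EIP-4788; a Python dict stands in for the contract's storage slots):

```python
# Sketch of the EIP-4788 beacon-roots contract's ring buffer.
HISTORY_BUFFER_LENGTH = 8191   # roots stay queryable for ~1 day of slots

storage: dict[int, object] = {}  # stand-in for contract storage slots

def on_new_block(timestamp: int, parent_beacon_block_root: bytes) -> None:
    # Called via a system transaction at the start of every block:
    # store the timestamp and the root in paired ring-buffer slots.
    idx = timestamp % HISTORY_BUFFER_LENGTH
    storage[idx] = timestamp
    storage[idx + HISTORY_BUFFER_LENGTH] = parent_beacon_block_root

def get_root(timestamp: int) -> bytes:
    # Anyone (a staking pool, a restaking protocol) can query a recent
    # root and then verify Merkle proofs about beacon state against it.
    idx = timestamp % HISTORY_BUFFER_LENGTH
    # Revert if this slot has since been overwritten by a newer block.
    assert storage.get(idx) == timestamp, "root no longer available"
    return storage[idx + HISTORY_BUFFER_LENGTH]
```

Storing the timestamp next to the root is what lets the contract detect when a ring-buffer slot has been recycled, so stale queries fail loudly instead of returning the wrong root.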
Starting point is 00:25:28 Okay. There's three minor ones that we're just going to burn through really, really quickly: 1153, transient storage opcodes; 5656, MCOPY, a memory copying instruction; and then 7516, the blob base fee opcode. What's going on here? Yeah. Maybe to start from the last one, because it's the simplest. So blob base fee. We have this opcode
Starting point is 00:25:50 And it's useful if you want to do things like, imagine an L2 that wants to reimburse people's gas when they submit a fraud proof. They can just say, you know, send them blob base fee times, you know, like the gas that they spent. So they can have that all trustless on chain. Now that we have blobs,
Starting point is 00:26:08 you want to have an upcode to get the blob base fee. So pretty straightforward to expose that. Mcopy is just memory copying the upcode within the EVM. So if you're very much into like language design or EVM optimization, it gives you a bit more, a bit more options. And then 1153 has two new opcodes, T-Store and T-load, which are storage upcodes that disappear at the end of a transaction. So right now, when you save data on Ethereum or read data from Ethereum, it sort of is there forever and you have to read the disk, which is more expensive. So with transient storage, you just keep those in memory, delete them at the end of the transaction, and so it's cheaper to execute than therefore the gas price is lower.
Starting point is 00:26:56 Mantle, formerly known as BitDAO is the first Dow-led Web3 ecosystem, all built on top of Mantle's first core product, the Mantle Network, a brand-new high-performance Ethereum Layer 2 built using the OP stack, but uses Eigenlayer's data availability solution instead of the expensive Ethereum Layer 1. Not only does this reduce Mantle network's gas fees by 80%, but it also reduces gas fee volatility, providing a more stable foundation for Mantle's applications. The Mantle treasury is one of the biggest Dow-owned treasuries,
Starting point is 00:27:24 which is seeding an ecosystem of projects from all around the Web3 space for Mantle. Mantle already has sub-communities from around Web3 onboarded, like Game 7 for Web3 Gaming and Buy Bit for TVL and liquidity and on-rass. So if you want to build on the Mantle Network, Mantle is offering a grants program that provides milestone-based funding to promising projects that help expand, secure, and decentralized Mantle.
Starting point is 00:27:45 If you want to get started working with the first Dow-led layer 2 ecosystem, check out Mantle at mantle.xyZ and follow them on Twitter at ZeroX Mantle. Arbitrum is the leading Ethereum scaling solution that is home to hundreds of decentralized applications. Arbitrum's technology allows you to interact with Ethereum at scale with low fees and faster transactions. Arbitrum has the leading defy ecosystem, strong infrastructure options, flourishing NFTs, and is quickly becoming the Web-free gaming hub.
Starting point is 00:28:11 Explore the ecosystem at portal.arbitrum.io. Are you looking to permissionlessly launch your own Arbitrum Orbit chain? Arbitrum allows anyone to utilize Arbitrum's secure scaling technology to build your own Orbit chain, giving you access to interoperable, customizable permissions with dedicated throughput. Whether you are a developer, an enterprise, or a user, Arbitrum Orbit lets you take your project to new heights. All of these technologies leverage the security and decentralization of Ethereum. Experience Web3 development the way it was always meant to be: secure, fast, cheap, and friction-free.
Starting point is 00:28:42 Visit arbitrum.io and get your journey started in one of the largest Ethereum communities. Uniswap is revolutionizing the DeFi space, not just by enabling swaps, but by empowering you to swap smarter, with a comprehensive suite of products for faster, safer, and more informed swapping. Say goodbye to pop-up wallet extensions. The Uniswap extension is coming soon, and this extension is not a pop-up. It is a sidebar in your browser that persists no matter where you are on the web. This means you can swap, sign, send, and receive crypto anytime, anywhere without obstructing your browser window. But that's not all.
Starting point is 00:29:14 The Uniswap web app now features limit orders, so you can buy and sell any token at your price, on your terms, without having to watch the market. And the best part: limit orders are built on UniswapX, which means no gas fees. Also new to the web app are the data and insights pages, with real-time candlestick charts, price data, transaction logs, and detailed pool data, all integrated into the Uniswap web app. All of these new releases come together to create one platform to help you swap smarter every time, no matter where you are: on web, mobile, or the extension. Click the link in the show notes to sign up for the extension waitlist and download the mobile app.
Starting point is 00:29:45 Start swapping smarter with Uniswap. All right, well, round of applause for all of our EIP finalists that made it into Dencun. Congratulations for all the EIPs that made it in there. There are some EIPs that did not make it in there. Which are the ones that got rejected? Which are the ones that didn't make it in that are worthy of note? So there's a lot, to be clear. Yeah.
Starting point is 00:30:06 We have a whole Ethereum Magicians thread where people propose them. There's probably like 20, 30 proposals. And, you know, we can include maybe something like eight. And then... So this is like the EIP mempool. And so like Dencun is like the block that was formed, which is going to be merged into Ethereum. And so like we have like seven or eight or nine EIP transactions that are being merged. Yes.
Starting point is 00:30:29 But the rest are still in the mempool. And so the next hard fork is going to be Electra. What are the names for... What are the two names for Electra again? Prague and Electra. So Pectra. Prague and Electra. Pectra is the merged name.
Starting point is 00:30:40 Yeah. Pectra is the merged name. Yeah, the portmanteau name. And so like still kind of like finding its identity, right? Like we don't, right now zero EIPs are formally in. No, we have a few. We have a few. Oh, really?
Starting point is 00:30:51 Yeah. Yeah. So, and maybe, yeah, to zoom out a bit. So over the past couple years, we've sort of moved to this cadence where we have one big fork, one small fork. So we have the Merge, which was obviously like a huge fork, and then we had withdrawals right after, which is a bit of a smaller one. And now we have 4844.
Starting point is 00:31:11 Big one. Yeah, another big one. So for Pectra, we're planning to do a bit of a smaller fork and to start working on the next larger fork in parallel, kind of like we've done with this one. So we've been working on 4844 for like probably two years at this point. And then in parallel, we were able to ship withdrawals as well. So Pectra is going to be a smaller fork, but we already have a decent idea of what the bigger forks after that will be.
Starting point is 00:31:39 So on the execution layer, we'll be up to Osaka at this point. We have this transition to Verkle trees, which we've talked about a lot, which changes all of Ethereum's storage layout for the state. So that's the thing that we're going to do like two forks from now, and we're starting to work on right now. But because there's so much work on it, and you can't have everybody just working on the exact same pieces of code, we do have some extra bandwidth to do another fork before.
Starting point is 00:32:16 And then similarly on the consensus layer, they're working on effectively like full danksharding, and there's a bunch of proposals around how to get there. Again, that's going to take some time. So they also have the bandwidth to do like a smaller fork in the meantime. So this is what Pectra will be. And it seems like maybe the forks after Pectra will actually be separate, because it seems like the big thing we're doing on the execution layer, Verkle trees, is pretty independent from the big thing they're doing on the consensus layer. Oh, interesting. Maybe we'll end up with an EIP that we add as well that, you know, forces both to be together.
Starting point is 00:32:45 But we're sort of moving to this regime where we have these two big initiatives happening in parallel. And then in the meantime, we're also trying to ship a bunch of smaller but still useful things. So for Pectra, we actually have four EIPs included already. So the BLS12-381 precompile. So this is something people have asked for since I think 2017, 2018, which is just using the same curve as the beacon chain as a precompile on the execution layer, which enables a bunch of use cases. So we're going to do that. And what that means is like the signing curve. So when we sign a message. Yeah. And from what I understand, there are particular signing curves inside of, like, the secure enclave in your phone. Is this that? No, that's not.
Starting point is 00:33:34 No. What BLS is very good for, and the reason we use it on the beacon chain, is aggregating a bunch of different signatures. So any use case where you have a bunch of different signatures and you want to make a proof about all of them and verify all of them, BLS is very efficient for that. And then if you want to verify signatures from validators, basically, you'll be able to do that. Oh, this is the Justin Drake innovation that brought the minimum ETH stake from 1500 down to 32. Yes. And so now we're just taking that innovation down to the execution layer.
Starting point is 00:34:05 And when it was first proposed in 2018, it was still kind of a new curve. There wasn't great, like, library support for it. So people were a bit reluctant to add it to the proof-of-work chain. But now, you know, the entire beacon chain depends on it. Like, if there's a bug in this... Ethereum already depends on it. Yeah. So it makes sense to just add it.
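To see why aggregation is the killer feature here, a deliberately insecure toy model helps: BLS signatures are linear, so the sum of many signatures over the same message verifies against the sum of the public keys. This sketch mimics that linearity with plain modular arithmetic; the scheme and all names are illustrative only, since real BLS works over the BLS12-381 pairing-friendly curve.

```python
# Toy illustration of BLS-style signature aggregation. NOT secure:
# real BLS uses elliptic-curve pairings, and here pk == sk, which
# leaks the secret. The point is only to show the linearity that
# makes one aggregate check stand in for thousands of checks.

P = 2**127 - 1  # a large prime modulus for the toy "group"

def toy_keygen(seed):
    sk = seed % P
    pk = sk  # in real BLS, pk = sk * G on the curve
    return sk, pk

def toy_sign(sk, msg_hash):
    return (sk * msg_hash) % P

def toy_aggregate(sigs):
    # Aggregation is just addition in the group.
    return sum(sigs) % P

def toy_verify_aggregate(pks, msg_hash, agg_sig):
    # Linearity: sum(sk_i * h) == (sum sk_i) * h, so the aggregate
    # signature verifies against the sum of the public keys.
    return agg_sig == (sum(pks) * msg_hash) % P

msg_hash = 123456789
keys = [toy_keygen(s) for s in (11, 22, 33)]
sigs = [toy_sign(sk, msg_hash) for sk, _ in keys]
agg = toy_aggregate(sigs)
assert toy_verify_aggregate([pk for _, pk in keys], msg_hash, agg)
```

This is the property that lets the beacon chain check one aggregate per committee instead of hundreds of thousands of individual validator signatures, and the precompile exposes the same curve to smart contracts.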
Starting point is 00:34:24 So that's the first one we've included. Then the other big change that we're going to do is execution-layer triggerable exits. So having a smart contract effectively initiate a withdrawal on the beacon chain. And again, this is valuable for a ton of staking pools and whatnot, any case where you want an on-chain action to trigger a withdrawal. You can imagine a DAO voting on it or stuff like that. I would just say that I'll categorize that as further strong linkages between the DeFi layer and the consensus layer.
Starting point is 00:35:03 Yeah. And that's like the theme. So another thing we're doing is basically removing the sort of delay period. So right now, when you make a deposit to the deposit contract, you still have to wait, I think it's like 2,000 blocks, for the deposit to be seen by the beacon chain. And this was to avoid any reorgs under proof of work, before the Merge. So we were pretty confident there wouldn't be a 2,000-block reorg, and so the beacon chain, once it saw 2,000 blocks on top of the one your deposit was in, would credit you. But today the two layers are talking every block, so it's kind of useless to have this 2,000-block delay. So we're removing that in Pectra as well. So it'll be just like the next block.
Starting point is 00:35:50 And then there's another small proof-of-stake change that's been included, EIP-7549. I'm actually not too sure what that does. It's around committee attestations. So that's on the proof-of-stake side. So we already have these four EIPs that are all quite small but, you know, pretty useful, that are included. Then we have these two big ones, where Verkle trees is pretty much confirmed on the execution-layer side. I think how to approach danksharding is still being debated on the consensus-layer side, and, you know, what's the right focus to have there. But we have these two big things set for the fork after that, a couple of small things now. And then there's also a bunch of other proposals, which we can get into, that we're
Starting point is 00:36:33 trying to think through and figure out: should it be in the next fork, the fork after, and stuff like that. Okay, so we get to Dencun, like, now. Electra, call it a year? Maybe less? Yeah, actually, timing on Electra? Nine months? 12 months? Look, it's really hard to predict these things, but I think people would like... Tim, give me a day. Yeah, I think people would like to see it before the end of the year. Okay. It's always nice to fork before Devcon, so you come and you feel like you've, you know, done your job. Right. And yeah, so we'll see. I mean, it depends on what we end up including in it, but the idea is to not spend several years on that fork. It should be relatively quick.
Starting point is 00:37:10 Of course. Of course. Okay. And so then once we get past Pectra, then all of the ETH heads on Twitter are going to be like, the Verge is coming, the Verge is coming. Because that's going to be like the next one. So we had, you know, EIP-1559, the Merge. Now we have blobs, which we're getting.
Starting point is 00:37:26 And then after that, we're all going to be stoked for the Verge. And for those who didn't listen to our episode with Mike Neuder and Domitie, the Verge kind of increases the scalability of the Ethereum Layer 1 by about 3x, a modest but, you know, healthy increase, but really it brings in statelessness, which makes Ethereum just, like, super prolific, I'll call it: packets going between more devices relevant to Ethereum all over the internet. Cool. But then there's also three EIPs that I know that are out there that I think are pretty
Starting point is 00:37:58 damn meaningful that I want to get to before we wrap up this episode, Tim: inclusion lists, max effective balance, and PeerDAS. Where are these in the EIP mempool? So again, I'll start from the last. So PeerDAS is one of the proposals for how we get to full danksharding. So this is starting to be prototyped right now. And the idea there is we have these... So these blobs give us temporary storage on Ethereum right now, right?
Starting point is 00:38:28 where before the blobs, all the data that's stored on Ethereum has to be stored forever. It costs a lot of money to store data forever. The blobs allow you to store data for only a couple of weeks, but every node still has to keep a copy. So it's cheaper than nodes keeping a copy forever, but still, every node keeps a copy. And the idea with full danksharding is we want to have nodes only store part of the data on the network, where you store a bit of the data yourself.
Starting point is 00:38:54 And then what you do is you ping the rest of the network. You ask them for proofs of data, and you do this enough times that you have a high probabilistic guarantee that there's no data missing. The same way, you know, when we use hash functions, we trust that a hash maps back to only a single input, and the odds that there could be a double match are like one in two to the 256 or something. So the idea with full danksharding is you basically do that as well, where you're saying: I only have a small part
Starting point is 00:39:43 of the data. I've asked everybody else in the network and I've gotten, you know, this many responses. And I know that, given these responses, the odds that any part of the data is missing anywhere is, you know, a really tiny probability. And if we can do this, then it means you go from nodes storing all the data forever today, to nodes storing all the data for a few weeks, to nodes storing only a part of the data for only a part of the time. And this is how you can scale the number of blobs without scaling the validator compute requirements. So PeerDAS is a first proposal, or maybe not the first, but one of the latest proposals, to actually implement this based on the current validator architecture.
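The "high probabilistic guarantee" here can be put into numbers with a quick back-of-the-envelope calculation. The sample count and withheld fraction below are illustrative, not PeerDAS's actual parameters:

```python
# Back-of-the-envelope for data availability sampling: if an adversary
# withholds a fraction of the (erasure-coded) data, each independent
# random sample hits a missing piece with that probability, so the
# chance of never noticing shrinks exponentially in the sample count.

def undetected_probability(missing_fraction, samples):
    """Probability that `samples` independent random queries all land
    on available pieces and so fail to notice the withheld portion."""
    return (1 - missing_fraction) ** samples

# With erasure coding, withholding just over half the data already
# makes reconstruction impossible, so 1/2 is the interesting case.
p = undetected_probability(0.5, 30)
assert p == 0.5 ** 30  # roughly one in a billion
assert p < 1e-9
```

This is why a node sampling a handful of random chunks per blob can be nearly as confident the data exists as a node that downloads everything.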
Starting point is 00:40:11 And so similarly to how people are starting to work on the Verge and prototype that, we are starting to see people work on PeerDAS. It's a bit earlier in development, so maybe that's not the final thing that we end up shipping, but it's at least a very promising step. And I will say, if you look on ethresearch, Danny Ryan has like five ethresearch posts that he's authored, and every one of them has been a shift in the entire Ethereum roadmap, like the quick merge post. And Vitalik wrote the rollup-centric post,
Starting point is 00:40:47 but then Danny wrote the how-do-we-actually-implement-this post. So Danny has a really good hit rate with ethresearch posts. So, yeah, I don't know if we'll ship PeerDAS exactly as it's specced, but it seems very promising. Cool. And of course, PeerDAS is a part of the story of the evolution of blobs. These are like blobs growing up from little blobs to big blobs. Yeah, yeah. Exactly.
Starting point is 00:41:10 And it's not actually, sorry, just maybe to clarify this, it's not that the blobs are bigger. It's that... But the total data availability is bigger. Yeah. Yeah. More of them. Yeah. There's more of them, right?
Starting point is 00:41:20 So you think of it as just like more blobs showing up. Right. The blobs are actually getting smaller, but there are many of them. But when you aggregate all of those smaller blobs together, it becomes like one big blob. Yeah. Yeah. Many, many, many tiny little sharded blobs form a big blob. Yeah.
Starting point is 00:41:37 These are technical terms. Max effective balance, MaxEB. What does this do? Yes. So right now on Ethereum, you can only have a 32 ETH validator, which causes a couple of problems. Well, I guess maybe I'll start with the upside of that, which is that we can treat all validators the same on the network, right? Like when we shuffle validators, we treat them all with a weight of 32, and so you can shuffle among them. One validator equals one validator. Yeah, exactly. But then the downside of that is twofold. One is we have these actors on chain that obviously have more than one validator, you know, staking pools, but even a bunch of individuals have more than one validator. And every validator needs to send
Starting point is 00:42:25 oh, I attested to this or like, I saw this, and then send all those messages. And this creates a ton of bandwidth requirements. And right now, I'd say, like, running a validator, you know, it takes maybe like two terabytes of disk space. It takes, like, a relatively recent computer. You know, like, I could run one on my MacBook easily. But the biggest bottleneck is probably, like, bandwidth.
Starting point is 00:42:47 Like, it takes, you know, from a couple of terabytes to maybe low tens of terabytes of bandwidth per month. And this is something where, you know, in a bunch of places, even if you can buy a MacBook and a two-terabyte hard drive, you maybe can't get that much bandwidth. So the idea with MaxEB is if we can tell people, oh, you have 10 validators, you can put them together and then just sign one message instead of 10, we can really lower the bandwidth requirement. And that makes it easier to run a validator. The other thing that's really nice with the MaxEB increase is it could potentially allow any amount of ETH per validator. So right now the other disadvantage is you can only stake in 32 ETH increments. So say that I have like 50 ETH. Well, okay, I can stake 32, but then what do I do with my other 18 ETH?
Starting point is 00:43:36 And this is the sort of thing that drives people to liquid staking tokens, where it's like, I can get, you know, 50 ETH worth of Rocket Pool staking tokens or Lido's staking tokens and not have to think about that. So if we could move to a world where, okay, I have an arbitrary amount of ETH that I want to stake, and I can actually stake on a reasonable internet connection because we've sort of aggregated these validators together. Yeah, that's a really positive thing for Ethereum. So there's a ton of challenges in doing that, and this is why it's not a no-brainer to include it in the next fork. If it was easy, we probably would have done it already. But it is being considered, and we're trying to think about, okay, are there simplifications we can do in this spec, so that maybe we get a first version of it out in the next fork and make it more complicated and provide more functionality in the fork after that.
Starting point is 00:44:29 So that's a big area of just research and development right now. Right. And it also allows solo stakers to compound their rewards. So 32 can immediately turn into 33. Oh, great. Yeah. Yeah. Actually, yeah, that's a great point. Yeah. If you only have 32, then yeah, it would compound. Yeah. Right. And so, right now, when you have 32 ETH staked, you get your ETH rewards, but they don't earn more rewards. They're not staked. Yeah, yeah, that's a good point.
Starting point is 00:44:54 Yeah. Whereas if you use a liquid token, it's immediate, right? Well, it's not immediate, but because there are so many users, right, it's amortized. Yeah, exactly. Yeah. Yeah.
Starting point is 00:45:05 Yeah. Yeah. It's effectively immediate with scale. Yeah. Yeah. Yeah. Okay.
Starting point is 00:45:10 The cool thing about max effective balance... the maximum effective balance is going up to 2,000. Yeah, I think that's correct. In the spec it's 2048. Yeah. Yeah, cool.
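The rough arithmetic behind MaxEB, how consolidation cuts message counts and how compounding helps solo stakers, can be sketched like this. The 3% yield is a hypothetical figure for illustration; 32 and 2048 are the minimum and proposed maximum effective balances discussed above:

```python
# Back-of-the-envelope for EIP-7251 (MaxEB): consolidating many
# 32 ETH validators into one larger validator divides the number of
# signed messages on the network, and letting rewards stay staked
# compounds them instead of sitting idle above the 32 ETH cap.

MIN_BALANCE = 32
MAX_EFFECTIVE_BALANCE = 2048  # proposed MaxEB ceiling

def consolidation_factor(stake_eth):
    """How many 32 ETH validators one consolidated validator replaces."""
    capped = min(stake_eth, MAX_EFFECTIVE_BALANCE)
    return capped // MIN_BALANCE

# A large operator at the 2048 ETH ceiling sends 64x fewer
# attestation messages than the same stake split 32 ETH at a time.
assert consolidation_factor(2048) == 64

def compound(balance, apr, years):
    """Balance after `years` if rewards stay staked and keep earning."""
    for _ in range(years):
        balance *= (1 + apr)
    return balance

# Hypothetical 3% yield over 5 years: compounding beats letting
# everything above 32 ETH sit unstaked (simple interest).
compounded = compound(32.0, 0.03, 5)
simple = 32.0 * (1 + 0.03 * 5)
assert compounded > simple
```

The bandwidth win and the compounding win come from the same change: effective balance is no longer pinned to exactly 32.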
Starting point is 00:45:22 And then the last one, inclusion lists. This is a big one. Inclusion lists. Where is this in the mempool? Again, this is one where it's just about the tradeoff of complexity versus value. So again, if we could have, you know, perfectly working inclusion lists tomorrow and it was a one-line change, obviously we would have done it. I think the biggest questions here are: if you do something that's in protocol, can people circumvent it? And what does it... Can we just define, sorry, what inclusion lists are?
Starting point is 00:45:46 Yeah, let's take a step back. So an inclusion list is specifying a set of transactions that you, as the current proposer, believe should be included either in your current block or the next block. And then, you know, there are many flavors of this, but you can imagine the most extreme flavor is something like: somebody else specifies an inclusion list for you, like the previous proposer.
Starting point is 00:46:12 And then your block isn't valid if it doesn't contain those transactions. So it's like the previous validator is forcing you to include those transactions, assuming, you know, their gas price is high enough. And so the goal with this is just to improve censorship resistance, because you can imagine a validator seeing transactions that are censored in the mempool and wanting to force those into the network. And so there are two problems that inclusion lists can address. One is validator censorship.
Starting point is 00:46:42 Like, say that you have, you know, a non-censoring validator A, and then afterwards validator B is censoring. Do we want A to be able to force B not to censor? And this is probably the part that adds the most complexity, having the validity of a block depend on an inclusion list. But then the other use case is a validator forcing an external block builder to include some transactions. So right now, the way MEV-Boost works, and the whole PBS infrastructure works, is a validator has to choose either to build their block themselves or to delegate 100% of the block's contents to an external builder. And this means that if the builder is censoring, they sort of give up on censorship
Starting point is 00:47:31 resistance. They're sort of forced to choose between censorship resistance and a potentially higher profit. And one thing that MEV-Boost does today to deal with this is they have this min-bid flag. So if you're a validator, you can say, unless an external builder gives me this much ETH, I'm going to build my block locally. So you're sort of saying, I'll be censorship resistant, unless there's a ton of MEV right now, then I don't really care as much, just give me the highest-paying block,
Starting point is 00:47:58 which is still a valid approach, because, you know, most of the time there aren't huge spikes of MEV, so censored transactions can still get in. But a version of inclusion lists that you could imagine is one where we just force the builder to give you a block that includes the transactions on your inclusion list somewhere in the block. So you're saying, you know, most of the MEV is at the top of the block, like, that's fine, you know, build me a super profitable block.
Starting point is 00:48:23 But make sure that the last four transactions of this block are these, or that somewhere in the block there are these four transactions. And otherwise, I'll just build my block locally. So I think this is a big challenge with inclusion lists: do we want a sort of validator-to-validator setup, where they're each forcing the next one to do stuff, or do we want a validator-to-builder setup, where it's really about the block I'm going to propose? The validator-to-builder setup is much simpler to implement. But the validator-to-validator setup, one, you know, gives us sort of full censorship resistance at the validator set. And then it also improves things like based rollups. So a lot of the preconfirmations in based rollups rely on inclusion lists.
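The validator-to-builder flavor just described, accept the builder's block only if it honors your inclusion list, otherwise fall back to building locally, can be sketched like this. Function names and transaction hashes are illustrative, not a client implementation:

```python
# Sketch of a builder-facing inclusion-list check: the proposer hands
# the builder a list of transaction hashes and only accepts the
# builder's block if every listed transaction appears somewhere in it.

def satisfies_inclusion_list(block_txs, inclusion_list):
    """True if every transaction on the inclusion list is somewhere in
    the block (position doesn't matter, so MEV ordering is untouched)."""
    block_set = set(block_txs)
    return all(tx in block_set for tx in inclusion_list)

def choose_block(builder_block, local_block, inclusion_list):
    # Take the (presumably more profitable) builder block only if it
    # honors the inclusion list; otherwise self-build.
    if satisfies_inclusion_list(builder_block, inclusion_list):
        return builder_block
    return local_block

il = ["0xcensored1", "0xcensored2"]
honest_builder = ["0xmev_top", "0xswap", "0xcensored1", "0xcensored2"]
censoring_builder = ["0xmev_top", "0xswap"]
local = ["0xcensored1", "0xcensored2", "0xswap"]

assert choose_block(honest_builder, local, il) == honest_builder
assert choose_block(censoring_builder, local, il) == local
```

Note the censoring builder isn't punished in-protocol here; it just loses the slot's fees to local building, which is exactly why this flavor is simpler than making block validity itself depend on the list.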
Starting point is 00:49:12 So again, this is a tradeoff between: we could do this more complex, more powerful version of it, or maybe just a small quick fix now and eventually, you know, expand on it. And this is what people are debating. Just shooting from my hip here, I do really like the idea of very strong censorship resistance that also enables based rollups. I do enjoy that synergy and serendipity between those two properties. Is this like a the-hard-fork-after-Pectra kind of timeline, or is that kind of unknown? So for the simplest version, you don't even need a hard fork. You can do the simplest
Starting point is 00:49:45 version of this just in MEV-Boost, right? And then again, it's like, you know, you rely on MEV-Boost for trust here, but we sort of do already for a lot of other stuff. So I also like full validator-set censorship resistance, but I also like shipping things. So if I, you know, was the dictator, I would probably ship the MEV-Boost change. And the thing as well that we get out of that is you can get data, right? Like, maybe this is a terrible idea. Maybe there's some flaw in it that we haven't thought about for whatever reason. But if it's only a change in MEV-Boost, it's like, okay, whatever, we can fix it. We can revert it and whatnot. But we didn't enshrine it in
Starting point is 00:50:25 the protocol. So, yeah, I'm sort of biased. If there are ways we can test things quicker in a more controlled environment, I'm all for that. But I agree that long term, having something that's more robust at the protocol layer is extremely valuable. Sure. Okay. Those are all my big EIPs that I wanted to talk about that I know are in the mempool. Are there any other valuable EIPs that we haven't brought up that are worth elevating? Yeah, let me check the list real quick. Okay, so we have this thread on Ethereum Magicians called the Prague/Electra network upgrade meta-thread, which I should update to Pectra now, that tracks this.
Starting point is 00:51:01 So basically you can find a whole list of everything that's being proposed there. And this is what we're discussing sort of every week on AllCoreDevs at this point. Aside from all the stuff we talked about, maybe one big thing that's also being discussed is EIP-4444, or the 4444s, which is about no longer serving pre-merge historical data over Ethereum's main peer-to-peer network. So the idea is, right now all the nodes have to serve data all the way from genesis. And we'd like to move this to other data providers, so that when you sync an Ethereum node, you can know for sure what's the latest valid chain, and then you could import your historical data from something else. Like, imagine you just torrent it, for example, and then verify that that history actually matches up to the head of the chain.
Starting point is 00:51:56 So this is something people have been working on for a while. And yeah, what to keep... Is this a snapshot, or is it snapshotting Ethereum? Effectively. You can think of it as snapshotting history. And the thing that's important is, when you sync the chain and you sync the current head, you know, you want to make sure that this is secure. And the nodes do this already. But then when they backfill the history, they're always checking that this history actually ends up at the current head. And so if you're confident in your current head, it doesn't really matter where you get the history from, because you can always verify that it is correct. And so right now we're kind of using Ethereum as a weird sort of torrent seeding network for this, when it's just not the most efficient thing. Literal torrent networks are more efficient. Or you can imagine going to the Internet Archive and just downloading, you know, the first million blocks or, you know, the first 10 million
Starting point is 00:52:51 blocks or something. So we're working on just having a good format for exporting that data, clients importing that data, and having a couple of places that serve this data. That just, again, removes the burden in terms of bandwidth on the peer-to-peer network. And so that's a big piece of work. EOF, so a ton of upgrades to the EVM, people are still considering. That's a whole other bucket of proposals. There's a bunch of proposals around account abstraction and EOA improvements and whatnot. That's another big one.
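The verification idea behind history expiry, that downloaded history can come from anywhere because each header commits to its parent and you can walk the chain back from a trusted head, can be sketched with a toy hash chain. This uses SHA-256 over strings purely for illustration; real Ethereum headers are RLP-encoded and hashed with Keccak-256:

```python
# Toy hash chain showing why EIP-4444 history can come from an
# untrusted source: each "header" commits to its parent's hash, so
# given a trusted head hash, any downloaded history either links up
# to it exactly or is provably wrong.

import hashlib

def block_hash(parent_hash, payload):
    return hashlib.sha256((parent_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Produce (parent_hash, payload) headers starting from a genesis."""
    chain, parent = [], "genesis"
    for p in payloads:
        chain.append((parent, p))
        parent = block_hash(parent, p)
    return chain, parent  # parent is now the trusted head hash

def verify_history(chain, trusted_head):
    """Check that downloaded history links all the way to the head."""
    parent = "genesis"
    for parent_claimed, payload in chain:
        if parent_claimed != parent:
            return False
        parent = block_hash(parent, payload)
    return parent == trusted_head

chain, head = build_chain(["block1", "block2", "block3"])
assert verify_history(chain, head)

# Tampering with any historical payload breaks the link to the head:
tampered = [(p, pl if pl != "block2" else "evil") for p, pl in chain]
assert not verify_history(tampered, head)
```

This is the property that makes it safe to fetch pre-merge blocks from a torrent or an archive instead of burdening every Ethereum node with serving them.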
Starting point is 00:53:25 And then the other bucket, I would say, is around the format of transactions, so how we want to encode stuff. But now we're getting really in the weeds. And aside from that, there's a bunch of other ad hoc EIPs, but I'd say history expiry, EOF, account abstraction, and then sort of a long list of miscellaneous EIPs is what we're thinking through. And if you scroll through that ETH Magicians post, I have a little diagram showing all the different EIPs proposed and where they fall: are they CL, EL, a mix?
Starting point is 00:54:01 and we're slowly working our way through that list. Well, Tim, I feel very informed about the horizons of Ethereum and what's beyond them. So thank you so much for coming on the show and walking us through all these EIPs. Yeah, thanks for having me. This was great. The link that you just mentioned, Tim, I'll get that from you, and I will post it in the show notes. And so if you're listening on the podcast or on YouTube, just go to the show notes and find that link. Tim, thank you so much, my man.
Starting point is 00:54:25 Yeah, thanks for having me. Bankless Nation, you know the deal. Crypto is risky. DeFi is risky. EIPs and hard forks, they're also risky. You can lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we are glad you are with us on the bankless journey. Thanks a lot.
