Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - t1 Protocol: Unifying Ethereum's L2 Liquidity Through Real-Time Proving - Can Kisagun

Episode Date: June 27, 2025

While L2 rollups did help scale Ethereum, they also created siloed ecosystems, all fighting over the same liquidity, users and devs. t1 Protocol is building layer-2 infrastructure to achieve seamless cross-rollup interoperability through real-time proving, powered by TEEs. t1's low latency, with 1-second block times, provides faster preconfirmations, significantly improving UX, all while maintaining full Ethereum composability.

Topics covered in this episode:
- Can's background
- Why Enigma/Secret Network built on Cosmos
- Solving Ethereum's liquidity fragmentation
- t1's rollup & real-time proving in TEEs
- Sequencer setup inside the TEE
- Dealing with other rollup trust assumptions
- Integrating new L2s
- Permissionless TEEs
- Potential attack vectors
- TEE alternatives
- Asset issuance on mainnet vs. L2s
- t1 development
- Partnerships & BD
- Solana vs. Ethereum UX
- TEE misconceptions

Episode links:
- Can Kisagun on X
- t1 Protocol on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.

Transcript
Starting point is 00:00:00 A rollup, or like a state machine, that's running inside a trusted execution environment. And over time, we decentralize this and make it permissionless. The idea is the TEEs provide real-time proving, and the rollup itself provides programmability. It would be very cool to have a rollup that would settle instantly back on the L1, such that we could have this fast and cheap execution environment that is for L1 assets. With the TEE, the proving is instant because, I mean, obviously it's a different trust assumption, right? However, as we're running the runtime inside the trusted execution environment, we can create an attestation from the TEE.
Starting point is 00:00:42 And prove to, you know, whatever platform we want, that this is the execution that took place, instantly. When you enhance this proving capability with actual programmability, you unlock a lot of cool use cases and applications that pure proving does not give you. And this way you can go towards being more of a hub rather than just a proving network, if that makes sense. Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution. I'm Friederike Ernst, and today I'm speaking with Can Kisagun,
Starting point is 00:01:21 who is the CEO and co-founder of T1, a real-time-proving roll-up that tries to eliminate fragmentation in Ethereum. Before the interview, I'd like to tell you about our sponsors this week. If you're looking to stake your crypto with confidence, look no further than Corse 1. More than 150,000 delegates, including institutions like BitGo, Pintera Capital and Ledger trust Corus 1 with their acids. They support over 50 blockchains and are leaders in governance or networks like Cosmos, ensuring your stake, is responsibly managed.
Starting point is 00:01:54 Thanks to their advanced MEV research, you can also enjoy the highest staking rewards. You can stake directly from your preferred wallet, set up a white label note, restake your assets on eigenayer or symbiotic, or use their SDK for multi-chain staking in your app. Learn more at chorus.1 and start staking today. This episode is proudly brought to by NOSIS, a collective dedicated to advancing a decentralized future. NOSIS leads innovation with circles, NOSIS pay, and Metri, resh, reshaping, open banking, and money, with Hashi and NOSIS VPN. and they're building a more resilient privacy-focused internet.
Starting point is 00:02:32 If you're looking for an L1 to launch your project, Nosis Chain offers the same development environment as Ethereum with lower transaction fees. It's supported by over 200,000 validators, making NOSIS chain a reliable and credibly neutral foundation for your applications. NOSISDAO drives NOSIS governance where every voice matters. Join the NOSIS community in the NOSISDAO forum today. deploy on the EVM-compatible NOSIS chain or secure the network with just one GNO and affordable hardware.
Starting point is 00:03:06 Start your decentralization journey today at NOSIS. I.O. Welcome to the podcast, John. It's a pleasure to have you. Hi, Frederica. It's a pleasure to be on. Super nice. Maybe before we kind of talk about T1 and what kind of problem you're trying to solve, which actually ties in really nicely with the previous episode that we did on Ethereum recently with Tomash and Shaouye. Maybe let's talk about you for just a little bit. So you're here to talk about T1,
Starting point is 00:03:43 but before T1 you did Enigma. Tell us about that. Yeah, so I got into the space in 2015. At the time, I was a grad student at MIT and read the Bitcoin white paper, got really hooked into it. And even though I was in the business school, I was like, okay, let me learn as much as I can. And that was my first introduction through the Bitcoin Club there. And then a friend from that Circle had this research paper called Enigma. after graduation, I joined him as a co-founder in 2017.
Starting point is 00:04:30 And yeah, our goal was to bring smart contract privacy to Ethereum using multiply computation. We were looking to be like a layer two back in the day when these things didn't really exist. Fast forward a while, we pivoted into using trusted execution environments. the time there was only Intel SJX as early as 2018 because we realized in a permissionless and adversarial setting MPC has some problems, mainly being if people colludes off of chain. There's no way to detect it and therefore punish people or slash people. So we worked on, you know, with T's for a while.
Starting point is 00:05:16 Around 2020, we pivoted our naming and our tech stack. we became a Cosmos SDK-based Layer 1, renamed to Secret Network. We launched private smart contracts back in September 2020. At the time, we were the only smart contracting platform on Cosmos Land after Terra. We all know how that went. And yeah, we introduced primitives like private preserving tokens, like private ERC20s, private AMMs, and really try to build a privacy-first defy ecosystem. I, during my time there, which was until late 21,
Starting point is 00:06:04 I led most of the fundraising efforts, product, direction, business development, and ecosystem growth efforts. And yeah, like towards the end of 21, I decided it was time to step down of that journey. took some time off. I was quite burned out and eventually got back into the building mode. At the time, flashbots have made a lot of cool work with trusted execution environments and EVMs. And from my days at Secret, one of the big things for me was I wanted to be close to the Ethereum ecosystem. And we were nuts. So I was very interested through talking to like T.E.
Starting point is 00:06:56 Our gurus like Andrew Miller, I was like, okay, let me look into this flashbots, things a bit more. I did a research project with them, mostly exploring the application design space with T's and, you know, and Ethereum. And then considered building with flashbots for a while, but then they were really focused on block building. and what I wanted to do was different. And that's kind of how T1 got started. So Enigma, or later the secret network, kind of like you started off as kind of like trying to build this network on top of Ethereum and then move to Cosmos.
Starting point is 00:07:37 So kind of like while there were quite a number of projects that kind of moved that way, what precipitated that move for you guys? And why, what was it that? had kind of pulled you back in the end? Well, I mean, to be quite frank, we were focusing on building a privacy platform. And in the beginning, we were trying to solve two problems. One was the privacy or private execution component, and the other one was like consensus
Starting point is 00:08:09 components. Because back in 2017, like nothing came out of the box. You could just like fork Ethereum and there were like a ton of projects that did that, but I want to say around like early 2020, Cosmos SDK or Tendermintest SDK was robust enough. At the time, I'm blinking on the name. PolkaDOTS was coming up with their own architecture and we decide to focus on not rebuilding the whole consensus framework from scratch, which, you know, we tried and that resulted in all of tech debt, and we wanted to get to market fast and
Starting point is 00:08:54 build something that worked. So then we decided, like, let's just like take this thing that works and and then, you know, run smart contracts inside TEs where we could use the like the Cosmos SDK for the layer one consensus and do the computations inside TEs. Okay, yeah, no, that, That makes sense. After your break, you kind of came back to Ethereum and you found things somewhat lacking. So what problem are you looking to solve with T1 now? When I was getting back into things, one thing that really bothered me was I had to go to centralize exchanges to trades. because when you are selling large quantities, the difference in execution is significant.
Starting point is 00:09:53 So I was like, I've been building in this space since, what, like, four, seven, eight years now. And, like, I should not be trading on a centralized exchange and then withdrawing my funds back instantly, right? And thinking about, like, why this is a problem, I got into the Dex realm, the Dex problems, all that stuff. And to me, the solution was we need something that is cheap and fast in terms of execution, but is not fully removed from L1 assets, which is where DeFi takes place.
Starting point is 00:10:30 You can have cheap and fast AMMs on roll-ups, but because of liquidity is lacking, it just doesn't make too much sense. And that was my first, let's say, hints that, you know, it would be very cool to have a roll-up that would settle instantly back on the L1, such that, you know, like, we could have like this fast and cheap execution environment that is for L1 assets. That's how I started thinking about, like, what I should be working on or what problem I should be solving.
Starting point is 00:11:04 And then quickly I realized, you know, this is not just a Dex thing. this applies to the broader DFI ecosystem. And like going down that rabbit hole, it also became clear to me that like, hey, okay, this is cool. However, we also have already like, you know, tens of rollups that are out there. And the cat is out of the back if, you know, if that makes sense. And like, why don't we have a solution that kind of like abstracts, like what platforms, from you're running on, but focuses on the application experience. Like the example I give is, if I'm taking an Uber to go from A to B, like, I don't care
Starting point is 00:11:49 where the Uber is running. I just care to go to my destination. And I think that's what crypto is lacking today. And that's kind of what we want to bring back. Okay. So maybe if I kind of repackage this a little bit, so kind of if you look at the liquidity in the Ethereum ecosystem today, kind of depending on kind of like what you're looking to do,
Starting point is 00:12:12 kind of you do it on base or arbitrum or elsewhere. And then kind of you might want to do something else. So you kind of need to bridge your assets kind of like from arbitrum to base and the other way around, which will take a long time, right? So kind of you still, you have this fragmented liquidity in a way. And often kind of the solution. to fragmentation is kind of having some sort of standard in terms of kind of like how does message passing work, how do you kind of, how do you communicate more or less synchronously
Starting point is 00:12:56 between the different roll-ups, right? But then kind of like when you look at standards, I mean, I'm sure you know kind of like this old engineering joke, right? So when you have 14 competing standards. Someone says, let's make a universal one and unify them all, and then you have 15 standards, right? So how do you go about that? Let me put it this way. We're taking a very pragmatic approach and we're not in the business of creating necessarily standards. We follow what the EF, like L to Interrupt Working Groups are working on. However, one thing that's important for us is everything we built needs to be fully permissions. And what I mean by that is, like, if we look at the journey of, like, you know, solving fragmentation
Starting point is 00:13:45 on Ethereum over the past two years, you know, people were really excited about shared sequencing. And shared sequencing requires roll-ups to, you know, participate or, like, you know, forego their centralized sequences, therefore, they're sequencing revenues and participate in this shared sequencing construct. And while that sounds excellent on paper, it did not pick off or it didn't work because the incentives did not align. So, you know, all of the shared sequencing companies over the years have pivoted one by one. What we're saying is we can build an infrastructure and have this infrastructure enable cross-chain interactions, user experiences, whatever you name it. without the buy-in of a single roll-up or without the buying of a single application.
Starting point is 00:14:43 Obviously, if these, like, you know, the stakeholders buy into it, it's more robust. But we are focused on, like, I am a Defi user. I suffer from this problem on a daily basis. I am interested in making the lives of people like me and myself better without, like, dealing or, yeah, without, you know, burning down in these like bureaucracy, bd circles, if that makes sense. And I'm happy to unpack what that means. Yes, I think you'll have to.
Starting point is 00:15:16 So I'm a little confused here. So kind of if you look at T1, is this kind of is this primarily a roll-up in its own right or is it primarily kind of an interoperability solution? So what we have is we have a roll-up or like a state machine that's running inside a trusted execution environment at first and over time we decentralize this and make it permissionless. The idea is the T's provide real-time proving and the roll-up itself provides programmability.
Starting point is 00:15:56 This is in a way different because a lot of companies are, let's say, doing zero-knowledge-proving. however, they do not focus on programmability. They just focus on proving something that happened on A to B. However, when you enhance this proving capability with actual programmability, then you unlock a lot of cool use cases and applications that just pure proving does not give you. And this way, you can go towards being more of a hub rather than just, like a proving network, if that makes sense. Okay, I think you'll have to kind of give us an example here.
Starting point is 00:16:39 So kind of what kind of thing can't you prove kind of just via ZK, but can kind of address with a T.E? Yeah, so like, for example, yeah, let's just like talk through use cases and maybe that's easier. Intent bridging is huge today, right? this is also EF's preferred solution to short-term interrupt. Across who, you know, we are in close conversations these days is using their optimistic Oracle to prove. Or let's take a step back.
Starting point is 00:17:20 Let's rewind a bit. How does intent bridging work? Let's say you want to go from an origin chain to destination chain. You lock your tokens in the origin chain. some market maker or solver who sees that you've locked your tokens send you the funds you want to receive on the destination chain then there's a process of proving that this solver paid you step took place and once this happens then the solver can withdraw the funds that you escrowed on the
Starting point is 00:17:52 origin chain right so this there's a proving stage in the middle there are different ways you can prove this like you know across the most popular bridge, uses their optimistic oracles, it takes 60 minutes. You know, you can do like a multi-seek proving, which is like, you know, really trusted. You can do as your knowledge proving, which would take, you know, on the order of minutes. And we can do this proving with TEs that would take, like, you know, on the order of seconds. Now, proving that the solver paid you and then therefore allowing the solver to unlock your funds is just, you know, passing proofs. Another thing you can do while you have proving and also smart contracting capability is,
Starting point is 00:18:40 like, this is a completely different use case that we're very interested in. And like, the use case is cross-chain vaults. Like today we love earning yields via volts in a single chain. Wolds allow us to get the highest yields for a token that we're holding, right? That's like what's why earn kind of like started back in 2020, 2021. Let's consider this scenario. Let's say you have USDC on AVE on optimism. From a risk perspective, there's no difference realistically having your USDC on AVE optimism,
Starting point is 00:19:21 arbitrar more base. And the yields between these roll-ups or AVE USDC yields between the, these roll-ups vary a lot. And, you know, no one bothers with this because the user experience of, you know, manually tracking the yields. And then when, you know, different roll-ups gives you higher yields, withdrawing from the original one, then go into a bridge, then bridging, then waiting for the funds to arrive, and then deposit is so painful that no one does it. Right. And when we have not only proving but also programmability, what we can do is we can create logic, let's say like an individual vault for you, which you know chooses which yield protocols
Starting point is 00:20:10 you are okay with and which roll-ups you want to have exposure to, such that we can monitor in an automated way inside a T1 state machine where your funds should be based on the yield and reshuffle your funds in a completely non-costal deal and an automated way. And this requires programmability. This is not something you can do just by passing proofs. Okay. I mean, in principle, I could set up a vault on Ethereum main net that kind of looks at all of these different yield opportunities on the varying L2s and kind of.
Starting point is 00:20:52 withdraws and deposits automatically, right? I don't see why in principle that's, that is, I shouldn't be able to program that, right? No, but Ethereum cannot read the yield on base or arbitrum. So I would kind of, so the thing is kind of like I would need some sort of oracle, right? And these exist, right? Yes, you would need some sort of oracle, but also how do you, like, move, your funds from your EOA on Ethereum to are there on base with, you know, in an automated and non-custodial way.
Starting point is 00:21:33 Like that doesn't exist. But if you have a smart contract wallet, it could exist, right? Because you can kind of roll these transactions together. And, well, you can with a smart contract wallet allow someone to deposit your funds to the base bridge contracts. And then once your funds are on base. Yeah, then you need another wallet to kind of handle it on that set. Okay, okay, I think I'm beginning to understand. Yeah, I understand now.
Starting point is 00:22:05 So kind of the tagline of T1 is that kind of like you do real time proving, right? So how much does that differ from other systems? So how long would it take on a different system to kind of, of do the same thing. Sure. So I think there are some metrics on eatproofs.org on how long it takes to prove an L1 block. And I think it's currently on the order of like single digit minutes of I want to say five-ish.
Starting point is 00:22:42 I'm not fully sure. With the T, the proving is instant because I mean, obviously it's a different trust assumption, right? So we're fully acknowledging that. However, as we're running the runtime inside the trusted execution environment, we can create an attestation from the TE as this, you know, as computation is run and prove to, you know, whatever platform that we want, this is the execution that took place instantly.
Starting point is 00:23:15 Okay. But you still need something kind of akin to a, a sequencer or literally a sequencer, right? It really depends. If you want to run a smart contracting platform like we want to do, yes, we need a sequencer. And, you know, if you were like alluding to oracles, like, you know, if you just want to have an Oracle network, then not necessarily.
Starting point is 00:23:46 But in our case, there's a sequencer. Okay, so can you walk me through the sequence of setup and how it works inside the T. So I'll start with like what's coming up short term and then what the long term vision is. We have both me and my co-founder, Orest, his background is scroll. So he knows the ZK Roll-Rop land really well. We have this, I guess, guiding principle. that we want to be very pragmatic with what we're building, and we want to be solving problems that exist today.
Starting point is 00:24:23 And as a result, our first tab at T1 is akin to a centralized sequencer that we have today in the market, but the centralized sequencer is running inside a trusted execution environment. What we do differently than other roll-ups is T's are already like, you know, beefy node infrastructure. So we run inside our trusted execution environment, full nodes of partner roll-ups, or what we call partner roll-ups.
Starting point is 00:25:01 So imagine a full node of base, a full node of arbitram. This allows us to read from those roll-ups, in addition to reading from Ethereum, aggregate their states on T-1, and prove the aggregate states on, on L1. So that's how we're reading, let's say,
Starting point is 00:25:22 the state of other roll-ups or ethym included in the same node infrastructure. So it's the same trust assumptions versus having like a smart contract base Oracle, if that makes sense. So that's like a V1.
Starting point is 00:25:41 Basically, a user can send a transaction to T1. The T1 sequencer, like, you know, base or optimism or arbitram sequencer, sequences these transactions, executes them, and the results can be,
Starting point is 00:26:02 you know, T1 state updates, and we can have withdrawal requests go back to L1, or we have at least what we call proof of reads. This is basically you know, more like an Oracle call, like saying like, hey, this like happened, this like, let's say market maker fill happens on base, and going back to like this intent bridging kind scenario, if that makes sense. And update the states. So that's how our system works. So we have
Starting point is 00:26:37 the differences to recap is we run our sequencer inside a T and we have full notes of other roll-ups and we have deposit contracts not only on Ethereum, like the current roll-ups too, but also on other roll-ups, and we can accept deposits from other roll-ups as well. Okay. How do you deal with the fact that the other roll-ups are not necessarily finalized, right? So kind of like they are only finalized kind of like once they've been committed to L1 or kind of they've been kind of proven. So kind of like for an optimistic roll-up, who kind of do you guys absorb the risk?
Starting point is 00:27:23 Or how does this work kind of in terms of trust assumptions? Yes. So great question. When we look at data on like, let's say, arbitrum reorgs, I want to say in the last four years, arbitrum reorg less than 20 times. And these reorgs are usually like one, two block heights, which for arbitram is less than two seconds, right? So we think or our approach is this is an insurance problem.
Starting point is 00:27:58 Like, you know, in real world, we have insurance against these edge cases. And there's no reason why we cannot treat these situations like, you know, like an insurance on T1 as well. we can have different parameters for different roll-ups as to when they finalize. But yeah, like, the short answer is, yeah, we parameterized that risk based on historical reorg potential and kind of absorb that risk to give users better UX. Okay. Yeah, that's super interesting to hear. So kind of like if you look at the spectrum of L2s,
Starting point is 00:28:41 kind of the security models vary a lot, right? So it's kind of the question, kind of like, how is it architected in the first place? How do they go about data availability? How often do they commit to L1? Do they have fraud proofs? Do they even post data to L1? Or are they validity?
Starting point is 00:29:04 And then you kind of have these sovereign roll-ups that aren't really roll-ups at all, but that never kind of check in with Ethereum. So kind of do you kind of do the risk underwriting for all of these by yourself? So if I kind of want to bridge to a roll-up that you don't have, that you haven't done this for, I'm out of luck, right? So kind of say, because in principle I can set up a roll-up tomorrow. How do I get integrated into T-1?
Starting point is 00:29:37 Sure. So to be very honest, like right now we're looking at base arbitrum. We have a base full node that's running inside our node infrastructure. In our like devnet environments, we're adding an arbitram one. Our focus in the short term will be optimistic rollups and and most likely OP stack plus arbitram. We are not interested. We are not interested. interested in validiums in the short term and with the optimistic roll-ups like our our thinking is as soon as the blobs go into the l-1 and they're finalized because of the current trusted sequencer assumption or like the the world we live in we can deem things finalized so the the in flight risk for t-1 is you know let's say the two minutes for optimism to post the globe and then the 13 minutes for L1 to finalize. We think this is only going to get better with like, you know, the three sorts finality discussions that we have on Ethereum and, and all that.
Starting point is 00:30:50 But we don't have, like, we haven't looked into how to deal with validiums now at this point. We already think that there's a big enough market just to go after actual like, you know, roll-ups. And you already alluded to the. fact that while right now kind of you're running in a single trusted execution environment, you will increase the number of trusted execution environments in due time. And will this ever become fully permissionless?
Starting point is 00:31:22 So can I run a trusted execution environment for you guys? That's the goal. And as I mentioned, our goal is to, like, we have identified a couple of use cases, you know, like some of these I alluded to earlier on, that make the lives of existing users and existing cross-jian application significantly better. Our goal is to prove in a trust minimized by TEs in a trust-minimized setup that these actually make sense
Starting point is 00:31:54 and people want to use them. And only then, you know, embarking this journey of getting things fully permissionless because it is a lot of money, it is a lot of effort, and I think, and having been in the space for a while, I don't want to build something that, like, you know, is cool, but, like, no one uses, right? So the goal is to get there, but the goal is to first prove that what we're building, you know, has actual demand. And we also have to, like, you know, introduce additional, let's say, defense in depth measures as we go fully permission.
Starting point is 00:32:35 because in the early days, like, you know, either a foundation runs a T or like, you know, a set of trusted actors run the T. And the main attack vector with T's is physical access. And, you know, we can trust that these entities are not going to, you know, physically break into, like, I think people, if when people can prove that they're running, T's in cloud, like a Google Cloud kind of environment, it's very, I would almost assume it's unrealistic to break into the T.E. But like when things are permissionless, this was especially a problem with the earlier generation of T's SGX, which you could run in your like Lenova
Starting point is 00:33:28 laptop, but physical attack becomes possible. So when we make things permissionless, we have to introduce additional defense in-depth measures to ensure that our system is still robust. But then kind of like you're forcing people to kind of rely on cloud infrastructure providers and kind of like we've seen them kind of revolve to people running nodes, right? So for instance, Hetzner has de-platformed everyone running nodes on the infrastructure. Yeah, I mean, I'm not. aware of that infrastructure company and at the like the reality is in the long term we don't have to yet these like new uh let's say generation trust execution environments are like not your computers so even if you buy the bare metals um and and and you run it like you know in your own data
Starting point is 00:34:30 server it is like it's not going to be the etym vision of like you know let's verify with our raspberry pies so and and the the only reason why we have to add additional defense in-depth measures is because when we make it permissionless we don't have to uh limit it into cloud but in the early days when it's like let's say a foundation running it there's already some trust uh assumed and that that trust could be further, I guess, limited by or like strengthened by running things, let's say, in Google Cloud and ensuring that we're not going to have physical access to it. Yeah.
Starting point is 00:35:19 So there's a number of attack factors here, right? So kind of like there could be the TE could be compromised or kind of like there could be some sort of factor that kind of lets people act. access the trusted execution environment. You could also kind of run modified software, right? Instead of the software you're meant to run. Is there a way of proving that you're actually running the right software? Yeah, I think that's called like reproducibility.
Starting point is 00:35:52 And the whole idea with the tease is you, when you're setting your environment up, you attest to the bytecode that's running inside the T and you create a signature. That signatures is what's attesting to the bytecode you're running. And every time you do a computation, you also create an ECDSA signature. And the idea is the signatures you are creating on the go as you're running new computations should be matching the initial signature that you've created and you've attested to. This process is called remote attestation, and that's how you make sure that you're running the same software in a non-tempered way, if that makes sense. Yeah, that makes complete sense.
Starting point is 00:36:45 And then kind of the other attack vector would be kind of like the trusted execution environment itself kind of not, I mean, being broken in a way that kind of like it's no longer trusted, right? So for instance, if you were to kind of break Intel SGX and kind of I could read kind of like the actual computation happening in there, you'd be compromised, right? That is true. I think there is an attack vector where there are two attack vectors. One is physical access as you're describing. And one is our own access pattern leakage. That's a problem that happened with secret network. and as far as I'm aware, the latter is more for privacy or leaking private data. And then, yeah, so those are like two possible attack factors. Okay. Do you kind of see trusted execution environments as a temporary bridge? Or do you think kind of like this will be so baked into the system
Starting point is 00:37:51 that you can no longer replace it with a different mode of proving? The way I see it is, you can use whatever method you want to use. The important thing is how fast you can prove things and what that means for the users who are using your system or these cross-chain application interfaces. The reality is we care about proving fast because it gives a superior user experience.
Starting point is 00:38:26 I am not, you know, necessarily tied to trusted execution environments, but the reality is, we do not believe real-time zero-knowledge proving is going to be fast enough within the next three years. There is, you know, a view out there that real-time zero-knowledge proving is going to be fast, but I believe otherwise, and I'm happy to get into why. That's my belief: TEEs are not going to be removed from the equation for another three years. So we lean very heavily into them. And when we want to strengthen TEEs with defense in depth,
Starting point is 00:39:07 we basically go with the idea of: how can we keep things fast, and how can we make things safe? So for us, the midterm solution is that when we make TEEs permissionless, we require some consensus to be reached among TEE nodes. This is also what Secret did. And this would allow us to also go very fast. And in the event that there is a single TEE that is misbehaving, we can attribute that misbehavior and we could slash the stake, even if there's a single honest TEE actor, right? And to us, this approach with TEEs, once things become permissionless, is a more sustainable path to real-time proving than going with zero-knowledge proving. And then in our architecture, like, you know,
Starting point is 00:40:18 there's always this attack vector: if attacking your proof-of-stake system is more profitable than the stake itself, yeah, you know, you will attack it. And for us, the real attack vector for a network like t1 is when there are withdrawals, right? If the withdrawal amount is larger than the value at stake, whether it's a single withdrawal or cumulative withdrawals, then you are at the risk of, you know, being, I guess,
Starting point is 00:40:49 like rug pulled or breached, whatever you call it. So what we do, or what we want to do in the future, is have this bespoke or periodic zero-knowledge proof generation model where we create these zero-knowledge proofs not, say, every block or every X amount of time, but based on the cumulative withdrawals from the system. Let's say we have a billion dollars of economic security. When we have 200 million dollars of withdrawals, we can create a zero-knowledge proof and verify that all the withdrawals up until that point were valid. If they're not, we can slash the system and make sure that whoever cheated,
Starting point is 00:41:38 or let's say the amount that's stolen, is recompensated from the stake. And that way we zero out the budget and go again, and again. And this would be a way to keep things cheap, because zero-knowledge proving, even though the cost is coming down, is never going to be as cheap as a hardware-based approach, which is also fast and permissionless in the short run. And we've seen in the market that users care about costs, right? We who have been in crypto, or in the blockchain space, since 2017 have very high ideals, and we should strive for those ideals. But I think it's also important to realize that newcomers have a different set of priorities,
Starting point is 00:42:32 and if we miss those users, then we have the risk of missing this, okay, let's-take-this-to-a-billion-users narrative. Yeah, no, I totally follow. But I also got from your answer that, while you think of TEEs as crucial right now, if in, say, five years' time you had a much faster ZK-proving framework, you would be happy to switch over to something that requires less trust, right?
Starting point is 00:43:09 Exactly. We don't care about how we achieve something; we care about what we achieve and why achieving that is meaningful. So yeah, you know, if in five years zero-knowledge proving is real time, then of course, like, you know, I'm not married to TEEs in that sense. So do you use one type of TEE? Or, in principle, could you make the system much more secure by using different kinds of TEEs? Because then, whenever one kind has, like, a zero day
Starting point is 00:43:51 or whatever, you could still fall back on the uncorrupted nodes, just like we have different clients, right? Exactly. Yes, in the short term, we will use a single TEE, because that allows us to go to market faster and solve user problems faster. In the long term, yes, that's something that we're looking into. And also, I think there's a misconception on this front, which I believe is important.
Starting point is 00:44:25 A lot of people talk about multi-vendor solutions as a solution to the whole thing. However, a multi-vendor solution still requires trust in a single entity to be running the whole thing. When you do have, let's say, as in our future or long-term vision, different nodes that execute and form consensus, then you don't care whether it's single-vendor or multi-vendor, because you already are verifying the computations by some sort of, you know, re-execution. I think multi-vendor people, you know, use that
Starting point is 00:45:16 in environments where, again, there's trust in a single party, but there are layers of defense. And we want to remove trust from a single party first, rather than adding, you know, layered trust to the single-party environment, if that makes sense. Yeah, that makes sense. Maybe let's talk about one more thing before we talk about the ecosystem and what it looks like for you guys right now.
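As an aside, the withdrawal-triggered proving schedule Can described a few minutes earlier (prove not every block, but whenever cumulative withdrawals cross a fraction of the economic security) can be sketched as follows. This is an illustrative toy, not t1's implementation; the class name and the 20% threshold ratio are assumptions, though the dollar figures mirror the example from the conversation.

```python
class ProvingSchedule:
    """Request a ZK proof over pending withdrawals once they cross a
    fixed fraction of the economic security backing the system."""

    def __init__(self, economic_security: int, threshold_ratio: float = 0.2):
        self.economic_security = economic_security
        self.threshold = economic_security * threshold_ratio
        self.pending_withdrawals = 0

    def on_withdrawal(self, amount: int) -> bool:
        """Record a withdrawal; return True when a proof is now due."""
        self.pending_withdrawals += amount
        if self.pending_withdrawals >= self.threshold:
            # Here the real system would generate and verify a ZK proof of
            # every withdrawal since the last checkpoint, slashing the stake
            # if any of them turn out to be invalid.
            self.pending_withdrawals = 0  # zero out the budget and go again
            return True
        return False

sched = ProvingSchedule(economic_security=1_000_000_000)  # $1B of security
assert not sched.on_withdrawal(150_000_000)  # $150M pending: no proof yet
assert sched.on_withdrawal(60_000_000)       # crosses $200M: prove, reset
assert sched.pending_withdrawals == 0
```

The design choice this illustrates is that proving cost scales with value at risk rather than with time: as long as pending withdrawals stay well below the stake, the cheap TEE path suffices, and the expensive ZK proof is only amortized over large cumulative flows.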
Starting point is 00:45:43 So initially, for Ethereum, the idea was that all assets would be issued on L1, and then they could be acted upon on rollups. But in order for you as a user to be able to force an exit to L1, you need assets that are natively issued on L1, right? So if you have an NFT that's natively minted on Base, you have no recourse, right? So how do you see that in terms of t1's security promises? I think all assets should be issued on L1 if, you know, we want to say that they inherit the security of Ethereum. And at the same time, you know, this is a relationship and decision between the asset issuer and the user.
Starting point is 00:46:48 I, you know, I don't hold any assets that are not issued on L1, if that makes my stance somewhat clear. And at the same time, you know, if there is an asset issuer on t1, I don't think it's right for us to say, no, your assets have to be issued on L1. I think they have the responsibility to communicate to their users or holders what risks they're taking. But at the end of the day, I don't think it's our business to dictate that. Okay. So for t1, what's live today?
Starting point is 00:47:30 So today, we have a devnet that entails a custom Rust runtime that's running inside a TEE, and then inside the TEE you also have an L1 node and a relayer. And what we're doing is, we basically can deploy any EVM applications on the rollup
Starting point is 00:48:00 and we can prove the rollup state back to Sepolia. So you can imagine, you know, you can start from Sepolia, the L1 testnet, and deposit funds to t1. You can use AMMs or applications on t1, and you can exit back, or withdraw your funds back, to the L1 in the next L1 block, so we're talking on the order of seconds. You can deploy your own applications to this kind of setting. We also are using the setting to show how we can do the intent example I just shared earlier in this podcast: we can prove faster repayments based on the ERC-7683 standard.
Starting point is 00:48:50 We can prove that a solver paid a user, and therefore allow the solver to get repaid faster. This is really attractive for them, because when they're getting repaid faster, they can meet the same user demand with much less capital and therefore be more profitable. The solvers that we're talking to tell us time and time again: if they are getting repaid faster, they're going to offer better rates for the users. So we can show that. And, again leveraging solvers, we can allow someone on L1 to swap against liquidity on t1 without ever seeing it; the exits happen in the background.
Starting point is 00:49:32 So that's kind of what we have today. We are aiming to go to a mainnet in Q3 this year, late Q3 realistically, with, let's say, a mini version of what we have today that is optimized for proving these cross-chain reads, to serve this ERC-7683 or intent, bridging, and cross-chain ideas.
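The proof-of-fill flow just described can be sketched as follows. This is a deliberately simplified illustration: the real design builds on the ERC-7683 intent standard with TEE-attested proofs, whereas here a plain tuple stands in for the proof and all names are hypothetical.

```python
# Fills observed on the destination chain, and repayments already made on
# the origin chain. Plain sets stand in for on-chain state.
destination_fills: set[tuple[str, str]] = set()
origin_repaid: set[tuple[str, str]] = set()

def fill_intent(intent_id: str, solver: str) -> tuple[str, str]:
    """Solver pays the user on the destination chain and receives a
    'proof of fill' (here just the tuple itself)."""
    fill = (intent_id, solver)
    destination_fills.add(fill)
    return fill

def repay_solver(proof: tuple[str, str]) -> bool:
    """Origin chain checks the fill proof and repays the solver early,
    refusing unknown fills and double repayments."""
    if proof in destination_fills and proof not in origin_repaid:
        origin_repaid.add(proof)
        return True
    return False

proof = fill_intent("intent-42", "solver-a")
assert repay_solver(proof)                          # valid fill: repaid fast
assert not repay_solver(proof)                      # no double repayment
assert not repay_solver(("intent-99", "solver-a"))  # never filled: refused
```

The point of the sketch is the ordering: the fill is observable on the destination chain first, and a verifiable record of it is what lets the origin chain repay the solver before the slow settlement path completes, which is why faster proving translates directly into less solver capital locked up.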
Starting point is 00:50:06 Because, again, you know, we can make this cheaper for users, and we can make the system more profitable for market makers. It is, you know, a way to make both the demand side and the supply side better off. So we're building that to be ready for production. We're aiming to integrate that with, you know, existing bridge providers to allow a better user experience. And then in Q1 of next year, we want to become this fully permissionless rollup where you can deploy whatever app you want on mainnet. We have that today in a devnet version, but we are taking an intermediate step in terms of going to mainnet.
Starting point is 00:50:51 And then we go for the full rollup vision. But you're targeting other protocols and applications, right? Like, I as an end user would never interface with t1 directly. Yes. And to be very honest, we can build some of these use cases in-house, just for the, you know, eat-your-own-dog-food kind of situation. But yeah, the ideal strategy for us is to make existing protocols, let's say like Across, better off with real-time proving and then programmability, in a phased approach.
Starting point is 00:51:37 How easy is it for protocols like Across to integrate t1? What we are going to be allowing in our initial, let's say, mainnet that's coming out this summer, or Q3, is for anyone to call t1 to prove, you know, that something happened on a different rollup. So in this scenario, Across, for example, could from their origin chain get this proof of fill from t1 that shows that, hey, Solver A, you know, paid your user on the destination chain, and therefore allow the solver to get repaid faster. This would basically be just a smart contract upgrade on their behalf, to recognize
Starting point is 00:52:41 the proof that we're already providing to their base chain. So they wouldn't need to necessarily touch t1 in any way. Okay. Do you incentivize partnerships? So, in terms of interoperability, there are a number of ideas and solutions out there, and, as so often,
Starting point is 00:53:06 it is a coordination game, right? So you need everyone in the same boat, or, you know, you need a significant number of players in the same boat. How do you get critical mass?
Starting point is 00:53:22 Yeah, I mean, the incentive is that we make, or enable, cheaper transactions for existing bridge users. We believe this is a significant incentive. And, you know, we have market makers and solvers in our network who are, for example, incentivized by getting repaid faster and, you know, tell us that they would be willing to provide liquidity on a bridge that we built.
Starting point is 00:53:57 Now, we don't want to maintain a bridge necessarily, but if people are not convinced by faster, cheaper payments for their users, which we believe should be a good enough incentive, then, you know, maybe we put it out there, see how our bridge is getting used, and see whether aggregators, the Alchemys and Thirdwebs of this world, are willing to offer this solution because it provides cheaper fees, and go in that direction. We're not meaningful enough to offer grants to people who are 10 times our size. We just want to organically provide something to them. Cool. Yeah, let's hope that works. So maybe let's zoom a little bit further out. You could argue that Solana has solved the user experience problem that you guys are tackling,
Starting point is 00:54:57 at least within, you know, a single ecosystem. And it's very much pulling ahead in terms of user activity, at significantly lower cost than Ethereum. Why double down on Ethereum in this context? Great question. I would argue that, you know, Solana will have the problems Ethereum is having today. We are seeing rollups on Solana, or ideas of rollups on Solana, so I think this is a problem that we're going to see in Solana land as well.
Starting point is 00:55:39 Why on Ethereum? Because I believe Ethereum's values of censorship resistance and verifiability are things that resonate with us. The fact that, you know, Ethereum is truly unstoppable is important to us, because if we're building a new open finance, this open finance platform cannot, and I don't mean to criticize anyone necessarily,
Starting point is 00:56:16 but this open finance cannot go down when we have, you know, a meme coin launch. Ethereum had those issues with the ICOs back in 2017; the network would get congested and things would not work. But today, Ethereum works. And Ethereum would work in the edge case that you mentioned: if all the data centers validating the platform went down, Ethereum would still work.
Starting point is 00:56:43 We think Ethereum captures the values and the ethos that got people like me excited back in the day with the Bitcoin white paper, and that's why we're building on Ethereum. That's a very heartening answer. So people like you have it
Starting point is 00:57:04 a little bit difficult in our ecosystem, right? Because people who value this kind of ethos are typically critical of TEEs and reliance on TEEs. What do you think is the one thing that people in our space get wrong about trusted execution environments? I would argue it's not about trusted execution environments; it's a criticism of,
Starting point is 00:57:35 you know, the Ethereum community, to some extent. And maybe, you know, the new EF direction is an improvement, or a change from this: we don't know who our users are, right? For years we had this narrative that, hey, L1 is not for users, L1 is for data space, or DA. And no, you know, L1 should be for DeFi users; I guess L1 should not be for gaming users. But we need to be able to differentiate between different user types, and we need to know what user types we're serving and how. You know, there was this criticism: do EF people even use Ethereum applications? And if they're not, that's a problem. So trusted execution environments are not perfect. We acknowledge that they're not perfect. However, today, the only alternative that gives the same user experience is a trust-me-bro
Starting point is 00:58:11 setup, which is so much riskier than trusted execution environments. I will give you an example: Wormhole, which is a, you know, multisig bridge, and I believe it was like that back in 2022, and it still is. I had to get out of Terra. I had a ton of UST, and the only way I could sell it was on Uniswap, by going from, you know, the Terra L1 to the Terra ERC-20
Starting point is 00:58:47 token. You know, it took me like four hours to bridge my tokens, because the system is a
Starting point is 00:59:23 multisig system, and I don't assume ill intention. But in those four hours when I moved my UST from Terra to L1, I lost, excuse my language, a shit ton of money, because UST was free-falling. And, you know, that kind of stuff happens with multisigs, and it would not happen with a TEE. Yes, you know, TEEs are not perfect, but at least you don't have that kind of situation. So my calling is: yes, this is not the ideal solution, and in the world we live in, this is the only way to solve this problem. So we're pragmatic. If people don't like it, they don't have to use it. But we think it's, you know, 10 to 100x better than what exists out there, and it's 10 to 100x faster than
Starting point is 01:00:17 what your ideal solution is. So this is the now we're in, and we want to serve the now. That's a great notion to end on. So where can we send people who want to build with t1, on t1, using t1? So first of all, follow us on Twitter; the handle is t1Protocol. And we have our devnet portal live.
Starting point is 01:00:49 Our website is t1protocol.com, and we have devnet.t1protocol.com, which is our devnet portal. There you can deploy apps on t1; you can, you know, bridge funds to t1 and exit back from t1. We had more than 100K non-bot users in the last month and a half since we launched the devnet. So yeah, like, you know, more people are welcome.
Starting point is 01:01:15 And if you want to build with us, you know, you can drop me a message on Twitter; my DMs are open. I'm not sure if we're going to share my Twitter somewhere. Yes, we can. Excellent. Fantastic. Thank you so much for coming on, Can. Thank you so much, Friederike, for having me.
Starting point is 01:01:37 This was a lovely conversation.
