Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Near One: Scaling the Agentic Internet. AI X Crypto - Bowen Wang

Episode Date: July 17, 2025

‘Attention Is All You Need’, co-authored by Illia Polosukhin in 2017, laid the foundation for arguably one of the most consequential tech breakthroughs in recent history. One year later, Illia founded Near AI, which later became Near Protocol. They were visionaries ahead of their time and, although AI took several more years to become a viable product, the experience of scaling databases would later prove valuable and applicable in the blockchain world. As a result, Near Protocol aims to become the infrastructure layer for AI apps and the agentic economy. In order to achieve this, scaling was paramount, and thus Near is one of the first blockchains to implement execution layer sharding, asynchronous execution and stateless validation, which brought the finality time down to 1.2 seconds, with a block time of 0.6 seconds.

Topics covered in this episode:
- Bowen’s background
- Near’s pivot from AI to blockchains
- The role of Near One
- Near’s tech stack upgrades
- Optimizing network architecture
- Stateless validation & block propagation
- Sharding & asynchronous execution
- Message passing between shards & shard ‘equality’
- Challenges of implementing stateless validation
- Applications benefiting from Near’s finality speed
- Intent-based infrastructure
- AI use cases on Near
- Expanding Near’s ecosystem
- Development challenges
- Near’s vision and goals

Episode links:
- Bowen Wang on X
- Near One
- Near on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay — the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.

Transcript
Starting point is 00:00:00 Why crypto is generally interesting for AI agents and so on is that it allows the possibility of AI agents owning assets — the monetary means — to actually do things. So we actually developed this thing called a shade agent, which is roughly this architecture where you have this AI agent running inside a TEE, a trusted execution environment, and it has this controller smart contract on NEAR that controls its access to different tokens. And then you have this paradigm where AI agents are basically running autonomously,
Starting point is 00:00:30 and can act on the user's behalf and do stuff with your assets in a non-custodial way. In the future, there will be a proliferation of those AI agents, and you need a scalable platform to handle a lot of those transactions or interactions between them. So even just before this release, we had about 1.1, 1.2 seconds block time and 2-3-second finality, which was still faster than Solana's, and also faster than Ethereum's. This release just allowed us to reduce the time by half, which is even more impressive. Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
Starting point is 00:01:09 I'm Friederike Ernst, and today I'm speaking with Bowen Wang, who is the founder of Near One, which is one of the companies behind the NEAR blockchain, which we will get into in just a minute. Before that, let me tell you about our sponsors this week. If you're looking to stake your crypto with confidence, look no further than Chorus One. More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their assets. They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed. Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
Starting point is 00:01:49 You can stake directly from your preferred wallet, set up a white-label node, restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app. Learn more at chorus.one and start staking today. Hey guys, I want to tell you about Gnosis, a collective of builders creating real tools for real people on the open internet.
Starting point is 00:02:10 Gnosis has been around since 2015. In fact, it started as one of Ethereum's very first projects. And today, it's grown into a whole ecosystem designed to make open finance actually work for everyday people. At the center of it all is Gnosis Chain. It's a low-cost, highly decentralized layer one that's compatible with Ethereum and secured by over 300,000 validators. So whether you're building a dApp, experimenting with DeFi, or working on autonomous agents,
Starting point is 00:02:34 Gnosis Chain gives you a solid, neutral foundation to build on. But Gnosis is more than just infrastructure. It's also tools that people can actually use. Circles, for example, lets anyone issue their own digital currency through networks of trust, not banks. And then there's Metri, their smart contract wallet that makes it easy to access Circles, manage group currencies, and even spend anywhere Visa is accepted, thanks to their integration with Gnosis Pay. All this is governed by GnosisDAO, where anyone can propose, vote, and help guide the network.
Starting point is 00:03:06 And if you want to get involved, running a validator is super easy. All you need is one GNO and some basic hardware. To learn more and start building on the open internet, head to gnosis.io. Gnosis: building the open internet one block at a time. Bowen, thank you so much for coming on. Yeah, long-time listener, first time on the show. Great to be here. Fantastic. Maybe we start off with kind of your supervillain origin story. How did you break into crypto, and how did you end up working on NEAR? Yeah, it's actually a super interesting story. For people who don't know, NEAR started as an AI company.
Starting point is 00:03:53 I mean, people often say NEAR is too early about things. They're very right. It's pretty much in the origin — back in 2018, we wanted to do code generation. Back then it was called neural program synthesis, but basically the idea is to translate natural language into code — basically what LLMs allow you to do very effectively today. But back then, we were very ambitious about doing it. And how I got into NEAR is basically I saw a paper Alex and Illia published at ICLR, one of the machine learning conferences.
Starting point is 00:04:24 And I was actually doing very similar research when I was in college. And I saw their paper and I basically reached out to them. And yeah, that's how I got into NEAR for AI stuff. And then we had this major pivot that happened in August 2018. And that's basically the start of Near Protocol. What made you guys pivot from AI to blockchain infrastructure? Yeah. I think I actually wrote a blog post about this.
Starting point is 00:04:57 I think people can still find it. But let me just give a high-level summary. So I think one thing is that back then, it was indeed way too early for the code generation idea. We tried it and trained some models, but the performance was just not at a level where you could productionize this thing into a product. It was a good research project, but we estimated at the time that in order to get it to some kind of production-level
Starting point is 00:05:30 tooling, you'd probably need a few more years of research and development. And also, it's really hard to compete with large companies, just because of the deficiency on the data side. And at the same time, we were looking at, you know, if we have this tool, where can we actually first use it? So smart contracts became one of the areas we looked at. And the reason is that, at least back then — and I think it's still true today — smart contracts are relatively self-contained,
Starting point is 00:06:00 right? Like a lot of them are just a few hundred lines of code, or even less than 100 lines of code. And they serve, you know, specific purposes that you can describe pretty well, to some extent. And basically, we thought that's a natural first step for actually applying this tool, if we have it. And then we looked more into smart contracts and, you know, how they actually work, right? What powers them?
Starting point is 00:06:28 So back then, yeah, Ethereum was pretty much the only popular smart contract platform. And so we looked at how Ethereum works, and we saw that, well, there are a lot of things that could be improved. And given, you know, Alex and Illia's background, they were like, yeah, maybe we can build something that results in a better smart contract platform. So, you know, all those factors combined led us to this massive pivot. So in the time that you were busy building blockchain stuff, all of that AI stuff has come to fruition, right? In some way, you pivoted to something that you thought would be faster, but then — I mean, blockchain is still much farther away from the mainstream than, say, ChatGPT.
Starting point is 00:07:20 And I mean, I know Illia was also instrumental in the transformer model. So was there ever any doubt as to whether this pivot was well-timed and well-placed? I don't think so. I think it was a good pivot because, you know, we have built one of the best blockchains out there and we were quite successful in this area. And even if we had stayed on that road,
Starting point is 00:07:57 what would very likely have happened is that we still wouldn't have been able to compete with those large companies, and maybe we would have run out of funding before the LLM boom actually came along in '22, right? So it's really — you need to go through those dark years before you see light at the end of the tunnel. So I don't think so. I think we all think this is the right decision that we made. So we said in the intro that you are the founder and CEO of Near One. So what does Near One do, and how does it fit into the NEAR ecosystem?
Starting point is 00:08:38 Because there's also the NEAR Foundation and Pagoda, right? Pagoda no longer exists. Yeah. So Near One is basically the entity that does technology research and development in the NEAR ecosystem. So we obviously work on protocol research and development, but we also build technology in the chain abstraction stack, including Chain Signatures and the Omni Bridge — basically to provide this unified experience to end users, regardless of what chain they want to interact with.
Starting point is 00:09:10 Yeah, that's basically a summary of what we do. Cool. So maybe let's dive into the tech. You guys recently announced an upgrade that lets you have 600 millisecond block times and 1.2 second finality, which is insanely fast — something like 10 times the speed of Solana. It depends — I mean, finality in Ethereum is slow and you can do things beforehand and so on.
Starting point is 00:09:40 But in terms of finality, it's 600 times faster than Ethereum. So walk us through the technical requirements and the technical stack that makes this possible. Yeah. So I think it's actually not just any single optimization or any single feature that we implemented that led us to this point. I think it's an ensemble of different technological improvements we made over the past several years.
Starting point is 00:10:11 So even just before this release, we had about 1.1, 1.2 seconds block time and 2 to 3 seconds finality, which was still pretty good, right? Because the finality was still faster than Solana's and also faster than Ethereum's. And this release just allowed us to reduce the time by half, which is even more impressive. So I think, yeah, I can first talk about what we did in this release specifically, and then I can talk about some of the things more generally that we've had for a long time. So in this release — it's actually a very interesting story, because when we launched Nightshade 2.0, which basically makes the system fully sharded
Starting point is 00:10:55 with the stateless validation design, back in August '24, so about a year ago — we actually removed an optimization that was originally in the system, which supposedly would have led to longer latency. But in that release we significantly improved the latency by holding the state of each shard in memory. I can talk more about how that's actually possible.
Starting point is 00:11:28 So we actually netted some improvement in that regard. But we still had this property where, in the old design, you as a chunk producer — a chunk producer is basically a block producer for a specific shard — when a chunk producer produces a chunk, it doesn't apply it immediately. It first sends it out to all the other validators and chunk producers in the system before it even applies it.
Starting point is 00:11:58 So this is notably very different from, I think, pretty much all the other blockchains I know, because if you look at how Ethereum works, or how Solana works, what they do is that you always have a post-state root — well, Solana doesn't have a state root for every block — but what they do is: you, as a block producer, first produce a block, then you apply it, and then you have the actual block, which carries the post-state root, the state root after this block. But what we do is different.
Starting point is 00:12:25 We use a pre-state root, meaning that what the block basically says is: what is the state root as of before this block? So we don't actually need to apply it before the block can be produced. So this allowed us to do this interesting optimization where we disseminate the chunk
Starting point is 00:12:45 before the block producer even applies it. So it means that the application of this chunk actually happens in parallel in the system. There's pretty much no overhead where the other guys need to wait for the producer to apply this block before they can apply it. Basically, ignoring some network latency, everyone applies it at the same time.
Starting point is 00:13:08 And this basically makes it much faster. And we lost that in the Nightshade 2.0 launch, because now we actually need a state witness for each chunk, meaning that you actually need to present this Merkle proof of the state so that the validators can validate it. So you cannot just say, okay, I produce a chunk and I disseminate it right away, because you don't yet have the state witness. So that doesn't work immediately.
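To make the ordering Bowen describes concrete, here is a tiny back-of-the-envelope sketch of the two pipelines. The timing constants are invented for illustration only — they are not NEAR's or anyone's real numbers; only the structure matters.

```python
# Toy latency model of the two dissemination strategies.

APPLY_MS = 200      # assumed time to execute a chunk
NETWORK_MS = 100    # assumed time to send a chunk to the other validators

# Post-state-root style: the producer must apply the chunk before it can
# ship it (the header commits to the result), so costs add up:
# producer applies, ships, then receivers apply.
sequential = APPLY_MS + NETWORK_MS + APPLY_MS

# Pre-state-root style: the producer ships immediately, and everyone
# (producer included) applies in parallel — nothing waits on the producer.
parallel = NETWORK_MS + APPLY_MS

assert parallel < sequential   # shipping before applying saves one apply step
```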
Starting point is 00:13:38 There's a lot to unpack here. So maybe let's start with the way the network is architected. In Ethereum, there have been very long debates about how long the block time should be, and how to optimize it so that network splits are reduced, right? You want to make sure the block propagates through the network properly. Why is this different for you guys — or is it different for you guys? Um, I don't think it's really different. And I also think that there is no fundamental reason why
Starting point is 00:14:22 Ethereum's block time has to be 12 seconds. I think it could, in fact, be much faster. I mean, that is what people are discussing at this stage, right? So I don't think there's any fundamental limitation, either on, you know, network bandwidth or on how long it takes to actually apply a block and so on. In fact, I think the Ethereum validators are trying to raise the limit to 60 million gas. So, yeah, actually, I don't have a good understanding of why it was set to 12 seconds.
Starting point is 00:14:58 I think, yeah. So actually, I think I remember — it's from when we still had proof of work, because then you needed to disseminate the block, basically because you didn't know who was allowed to build the next block, right? Right. And then you had big uncle rates if propagation was too slow. But you're right — obviously, if you have proof of stake, then that shouldn't be an issue. So what are the requirements for running a node? How many nodes are there? Is it something a hobbyist can do? I run an Ethereum node and I'm not a professional DevOps person.
Starting point is 00:15:45 And for me, that's possible, whereas there's no chance I could ever run a Solana node, for instance. Where does NEAR sit on that spectrum? Correct. No, I think that's also one of the things that we really value, right — the ability for everyone to run a node, and especially a node that participates in consensus, because we think it's important for the decentralization of the network. And actually, the recent data we got is that it takes about $27 a month to operate a NEAR chunk validator. This is because, after the stateless validation change that we launched as part of
Starting point is 00:16:25 Nightshade 2.0 last year, validators don't need to store any state locally. They validate by using the state witness. I mean, we can talk more about the stateless validation design, but this was actually Vitalik's original idea, which he put forward in 2017. And it is really beneficial to the decentralization of the network. Now you can actually make the node super lightweight, because it doesn't really need to have the state.
Starting point is 00:16:51 I mean, state would be the most expensive thing a node has, right? It takes up a lot of disk space, as well as potentially memory. But validators don't need to have that — they actually get a state witness for each block, they validate based on that, and then they can throw it away afterwards. Maybe just to interject —
Starting point is 00:17:11 I'm not sure whether everyone's familiar with the term. Stateless validation just means you don't need the entire state in memory — you just need the transactions that you're processing and the previous Merkle root, and then you can see that everything is fine without actually holding the full state. And I mean, obviously the state is massive, depending on — yeah. Correct. Yeah, yeah. So just to — yeah, I should have explained how the state witness works. Basically, it's a proof that the state the transaction touched during execution indeed
Starting point is 00:17:49 belonged to the previous state. So the producer — the chunk producer, in this case — is not being malicious, because you can actually verify that this indeed belongs to the previous state root. And you can also verify that, as a result of processing this chunk, the post-state root actually matches what they say it should be, so that this state transition is valid. And as you said, the key insight here is that even though the state itself can be very large, what you touch during the execution of one block is limited. So you can actually have this rather small state witness that is distributed to the validators, and they can validate the state transition based on that. Okay.
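The witness idea can be sketched with a toy binary Merkle tree. This is a simplified stand-in, not NEAR's actual trie or witness format — the hash layout and the account entries are invented for illustration. The point is that a validator holding only the root, one touched entry, and a short proof can check membership without the rest of the state.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree (duplicate the last node if odd)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling hash, is it on the left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# A stateless validator needs only the touched entries plus their proofs,
# not the full state.
state = [b"acct:alice=10", b"acct:bob=5", b"acct:carol=7", b"acct:dave=2"]
root = merkle_root(state)
proof = merkle_proof(state, 1)                   # witness for the one entry touched
assert verify(b"acct:bob=5", proof, root)        # honest entry checks out
assert not verify(b"acct:bob=999", proof, root)  # tampered entry is rejected
```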
Starting point is 00:18:34 I understand that part of the optimization. You also touched upon something that I didn't fully grasp earlier — the fact that validators send out blocks before they apply them. Yes. Can you explain why that is secure? Or, as a NEAR validator, is there any way I can use this to attack the network?
Starting point is 00:19:09 What's the special thing that allows NEAR to propagate these blocks before they've been applied? There's nothing really about security here. It's mostly about this concept of whether you're using the previous
Starting point is 00:19:25 state root or the post-state root. So, okay, yeah, this is a bit tricky to explain. I'll try my best. In Ethereum, how it works is that, in order to produce a block, you need to actually say: what is the state root as of after applying
Starting point is 00:19:42 this block? So the block producer needs to have the block — it picks the transactions, or some builder sends them transactions — they need to apply it, and they need to say: okay, after applying this block, what is the state root afterwards, right? Then they have that state root together with the block, they distribute it, and people validate it. Now, in NEAR, what is different is that each block doesn't say what the state root is after applying this block. It just says what the state root is before applying this block. So because of this shift, things on the consensus level don't really
Starting point is 00:20:23 change, but it allows the validators to send it out instead of applying it first, and then everyone applies it at the same time. And then basically, as they're endorsing this block, they're endorsing what the pre-state of this block is. So it's always shifted by one in that regard. Okay. I understand that. But are there any disadvantages, or why don't other blockchains do it like this?
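The two header conventions just discussed can be sketched minimally, with state collapsed to a single integer counter. These are purely illustrative structures, not NEAR's or Ethereum's actual block formats.

```python
from dataclasses import dataclass

def apply(state: int, txs: list) -> int:
    """Stand-in for executing a block's transactions."""
    return state + sum(txs)

@dataclass
class PostStateBlock:       # Ethereum-style: the header commits to the state
    txs: list               # *after* execution, so the producer must execute
    post_state_root: int    # before broadcasting.

@dataclass
class PreStateBlock:        # NEAR-style: the header commits to the state
    txs: list               # *before* execution, so the block can be broadcast
    pre_state_root: int     # first and applied by everyone in parallel.

state = 0
txs = [1, 2, 3]

# Post-state convention: execute, then ship.
b_post = PostStateBlock(txs, apply(state, txs))

# Pre-state convention: ship without executing; everyone applies afterwards.
b_pre = PreStateBlock(txs, state)
state_after = apply(b_pre.pre_state_root, b_pre.txs)

# Both conventions commit to the same state transition — only *when* the
# execution happens relative to block production differs.
assert b_post.post_state_root == state_after
```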
Starting point is 00:20:56 Well, actually, I think I'm blanking on the reason. But it has something to do with the synchronous execution nature that the EVM and other chains have. Okay, yeah. Don't worry about it. So maybe tell us about the asynchronous execution in NEAR. How is it different for you guys? Yeah.
Starting point is 00:21:21 So on NEAR, how it's different is — let's say you have one smart contract that's calling another smart contract. On Ethereum, how this works is that this call would be synchronously executed in one block, and then you'll get the return result. Now, how it's different is that those two smart contracts can live on different shards. And because they can live on different shards, you cannot actually have this synchronous execution directly. So what actually happens is — let's say you have some smart contract
Starting point is 00:22:05 A that is being called, and it generates another call to smart contract B. What actually happens is that there is some block where smart contract A's execution happens, and it generates a receipt that is sent to the shard where smart contract B is, and then the execution continues there. But it will not continue in the same block. It will continue, for example, in the next block. Okay, yeah. So I think there's a part that we glossed over earlier a little bit. And that's the part that NEAR has full sharding.
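The flow just described — A executes in one block, the receipt is delivered, and B executes in the next — can be modeled with a toy Python sketch. Contract names, the receipt queue, and block heights here are all invented for illustration; real NEAR receipts carry gas, signer information, and more.

```python
from collections import defaultdict

receipts = defaultdict(list)   # receipts[target_shard] -> pending calls
log = []
height = 0

def contract_a(arg):
    """Lives on shard 0; its cross-shard call becomes a receipt, not a call."""
    log.append(f"block {height}: A ran on shard 0")
    receipts[1].append(("contract_b", arg))

def contract_b(arg):
    """Lives on shard 1; runs only when shard 1 drains its receipt queue."""
    log.append(f"block {height}: B ran on shard 1 with {arg}")

handlers = {"contract_b": contract_b}

height = 1
contract_a(42)                      # block 1: A executes, receipt is queued
height = 2
for name, arg in receipts.pop(1):   # block 2: shard 1 processes its receipts
    handlers[name](arg)

assert log == [
    "block 1: A ran on shard 0",
    "block 2: B ran on shard 1 with 42",
]
```

The key design consequence: the callee runs one block later than the caller, which is the latency/scalability trade-off discussed below.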
Starting point is 00:22:38 So you have full execution sharding — the way we originally thought it would play out for Ethereum, for scalability. And I mean, that's still the plan, right? But so far Ethereum doesn't have it. So you have different execution shards that talk with each other asynchronously. Tell us how that's architected, and how you would compare it to other kinds of federated chains, such as the IBC ecosystem, for instance. Yeah, so it's actually a pretty different paradigm there.
Starting point is 00:23:30 What NEAR allows is basically this native composability between different shards, meaning that if you are a smart contract on, let's say, shard A and you want to call some smart contract on shard B or shard C, you don't really care about it. You just call it directly, and developers don't even think about it when they develop the smart contract. And basically, yeah, developers also don't really think about which shard they deploy their smart contract on. The system automatically figures out which shard the smart contract is on. Basically, the sharding is transparent to developers. They don't really think about this. But how does the system figure out which shard my smart contract should be on, right?
Starting point is 00:24:23 Because if I interact with a specific set of smart contracts that are all on one shard, I could do that synchronously if I'm on the same shard, right? No, no, no, that's actually not true. The system is completely asynchronous. This is so that we don't encourage developers to try to figure out how to co-locate their smart contracts. Okay, okay. Interesting. Yeah, and another reason is as follows. We have this mechanism called resharding, right? This is a mechanism where you split one shard into two shards.
Starting point is 00:25:04 And this could happen because one shard is overloaded, right? You want to have a way to react to that, and this mechanism can split a shard into two shards. And if you actually depend on the co-location of two smart contracts on the same shard, then, you know, this mechanism could potentially break it. And that's really bad, because let's say you depend on them, and then one day you discover they're actually not on the same shard — then your assumption, you know,
Starting point is 00:25:29 — let's say we had synchronous execution and it suddenly becomes asynchronous — might potentially break a lot of things. So that's why we decided to stick to the async execution paradigm. Okay. And then when you split them, do you more or less decide randomly how the shard gets divvied up? It's not random.
Starting point is 00:25:54 It's basically based on gas usage and state size. So roughly, you try to find the middle — half and half. Yeah. Obviously, it's discrete; you cannot make it exactly 50/50, but yeah, roughly. Okay.
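That boundary-finding step can be sketched as follows, using gas usage only — Bowen notes the real protocol also weighs state size, and the account data here is made up for illustration.

```python
# Toy resharding: split an overloaded shard's sorted account range at the
# boundary that makes the two halves' recent gas usage as balanced as a
# discrete split allows.

def find_split(accounts):
    """accounts: sorted list of (account_id, gas_used).
    Returns the first account_id of the right-hand shard."""
    total = sum(gas for _, gas in accounts)
    running, best, best_diff = 0, None, float("inf")
    for i, (_, gas) in enumerate(accounts[:-1]):
        running += gas
        diff = abs(total - 2 * running)   # |left half - right half|
        if diff < best_diff:
            best_diff, best = diff, accounts[i + 1][0]
    return best

usage = [("alice", 10), ("bob", 80), ("carol", 15), ("dave", 5)]
boundary = find_split(usage)
# Splitting before "carol" gives 90 vs 20 — the best a discrete boundary
# can do here; exact 50/50 is generally impossible, as noted above.
assert boundary == "carol"
```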
Starting point is 00:26:09 So talk about how these different shards manage to communicate with each other. So basically the concept is called a receipt. Let's say you have a smart contract on shard A and a smart contract on shard B, and they want to communicate with each other. Obviously, on the smart contract level, the developer just writes "I call that smart contract." But what actually happens is that there is a receipt generated in the system that's going to be sent to the destination shard.
Starting point is 00:26:40 And then every chunk, which is a shard block, says what the outgoing receipt root is — basically a Merkle root of all the outgoing receipts. And on the destination shard, there's a Merkle proof that allows the chunk producer to check that a receipt is indeed coming from the other shard and matches the outgoing receipt root, so that no receipt is generated out of nowhere. Everything basically matches correctly. And then there's a mechanism for the chunk producer of one shard to get the receipts from other shards — they actually fetch them — so that, basically,
Starting point is 00:27:28 after one block, they would get them. So, yeah, that's on a high level how it works. Obviously, it's complex in the details, but that's conceptually how it works. Does every shard have a complete list of all the other shards? And if so, how is it updated? If I'm a shard, how do I know I'm actually getting information from a legitimate other shard? How do I know someone's not just sending me a message?
Starting point is 00:28:04 Yeah. So the coordination of all shards and the validators is global, basically. All the validators understand globally how many shards there are and, at every block, who is producing for each shard. And, oh, I think I forgot to mention this part. One of the key innovations of Nightshade, our original design, is that there's only one chain. There's no separate blockchain per shard. And then each block contains the header of every chunk from every shard.
Starting point is 00:28:39 Obviously, you cannot put the entire chunks into blocks, because that would be humongous. So what actually happens is that each block contains all the chunk headers, and people actually reach consensus on that. And that's how you know the receipt roots from the other shards — because they're included in the block, and everyone has access to blocks; they're relatively small. And the entire validator set reaches consensus on the blocks,
Starting point is 00:29:07 so that's how they know that, yeah, this is legitimate information. So I understand why you're setting it up such that developers don't try to co-locate their smart contracts. But that also creates a lot of overhead, right? If you think about team communication, say — I mean, in essence, it greatly increases the amount of message passing that you have to have between the shards. So does that — I mean,
Starting point is 00:29:49 if you think about it super naively and say, okay, every smart contract is like a person doing some work, you want to co-locate teams together so that they can talk with each other and don't need to send each other letters, right? And in a way, you're enforcing remote work for everyone and making everyone send letters. So doesn't that make the system less efficient than it could be? I think there's definitely an argument for that, but I think there is fundamentally a balance between, I would say, scalability
Starting point is 00:30:42 and maybe convenience — or, you know, you could say latency of execution — that needs to be struck. And I wouldn't say there's a lot of overhead in this. If you compare what I described to a system where you have synchronous execution within a shard, the messaging overhead doesn't really increase that much, because if there's a receipt going from a shard to the same shard, it doesn't actually send external messages. It's just that all the chunk producers know, after they apply the chunk,
Starting point is 00:31:30 which shard the receipt should go to. If it goes to the same shard, then they don't actually need to send it out to other people; they all have the information. But yes, you're right in the sense that this does mean it needs to be executed in the next block instead of in this block. But I think it is a highly debated topic, even within the team. It's basically: do you allow people to have the convenience — or, I guess, maybe not necessarily
Starting point is 00:32:00 convenience is the right word — it's more like the ability to unlock this synchronous execution partially — or do you stick to a paradigm that, from the design level, is uniform, where you don't really need to think about special cases? I think we've taken the latter approach, just because from the design point of view, it's —
Starting point is 00:32:24 you can say it's simpler, because you don't really have any special cases in that regard. And it also, as I said, works well with things like what you need to do with resharding and so on. And we actually have some ideas about potentially allowing multiple smart contracts to be on the same account, so that you can have synchronous execution within those smart contracts.
Starting point is 00:32:54 So, yeah, and actually that's basically how Aurora works: it's an EVM interpreter running as a smart contract on NEAR, and it internally holds a bunch of EVM bytecode, and it can then do synchronous execution between those smart contracts, completely emulating the EVM experience. Tell me a little bit more about the shards. Are they all created equal? For instance, is gas the same cost on each shard? Do they all have, in principle, the same size limit and so on?
Starting point is 00:33:35 Because in principle you could almost imagine it like real estate, where you say, okay, one is the premium shard, and then you have lower-tier real estate elsewhere, where rent is cheaper. And we've seen this with the parameterization of different execution environments in other ecosystems. So tell me what path you chose and why you did so. Yeah.
Starting point is 00:34:22 So yeah, that's also another very interesting question. We chose that all shards are created equal, in terms of gas limits, how they work, various things. No shards are really special in that sense. And again,
Starting point is 00:34:46 I think it comes down to the philosophy that we really want this to be transparent to developers. They don't have to care about which shard their smart contract is deployed on, and they don't really think about sharding when they do smart contract development. They just know that they write some code
Starting point is 00:35:06 that can be compiled to WebAssembly, and then it will be deployed on their account, and then magically it will work. So I think, from that point of view, it definitely reduces the cognitive barrier for smart contract developers. And it also allows us, if we think about how it scales, right,
Starting point is 00:35:28 we scale by having more shards. It also allows you to not have to deal with special cases where maybe we can scale some shards but not others, which becomes really strange and really just hard to deal with from a design point of view. I understand the trade-off
Starting point is 00:35:46 that you're making. Tell me a bit more about stateless validation. I understand why this is clearly superior, but tell us about the technical
Starting point is 00:36:06 lift that you guys had to make to implement this. Why don't we have this yet on Ethereum? I actually think someone developed a stateless client for Ethereum, but it didn't get adopted; I'm not sure why exactly. But I think for Ethereum today to transition to stateless validation would probably involve a lot of... Well, okay, I should say, I probably don't know it well enough to understand all the things that it may break
Starting point is 00:36:42 or the problems that it may create. But for NEAR, it benefits us in many different ways, right? So one is that it makes the sharding design simpler, in the sense that now, with stateless validation, we can have validator rotation on every single block. So we don't have to worry about the security problem
Starting point is 00:37:18 that you might otherwise address with fraud proofs and so on, because the validators rotate on every single block. Probabilistically, every shard is secure, because the assignment is random, based on this randomness beacon, and they rotate on every single block. So that simplifies a big pain point of the design.
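The probabilistic argument here can be made concrete with a back-of-the-envelope calculation: with random committee assignment, the chance that any single shard's committee is taken over shrinks extremely fast as the committee grows. The numbers below are illustrative, not NEAR's actual parameters, and the binomial model is an approximation to sampling without replacement.

```python
from math import comb

def p_shard_corrupted(committee_size: int, frac_malicious: float,
                      threshold: float = 2 / 3) -> float:
    """Probability that a randomly drawn committee contains strictly more
    than `threshold` malicious members (binomial approximation). Toy model
    of per-block random validator assignment to a shard."""
    need = int(committee_size * threshold) + 1
    f = frac_malicious
    return sum(comb(committee_size, k) * f**k * (1 - f)**(committee_size - k)
               for k in range(need, committee_size + 1))
```

For example, with 25% of stake malicious, a 10-member committee is corrupted with probability on the order of 10^-3, while a 100-member committee is corrupted with probability far below 10^-9, which is why per-block rotation plus large enough committees can replace fraud proofs.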
Starting point is 00:37:51 Because if you think about what optimistic rollups are doing today, the fraud proof is still, I mean, it's kind of resolved from the technical point of view, but it's really scary to turn it on. And I think still no major L2 has completely turned it on in a permissionless way. Yeah, so that was also scary for us as well. And even compared to optimistic rollups, there are more ways that it can be messed up in our case. So it's even more scary, you could say.
Starting point is 00:38:27 Because you need to think about what you do with the rest of the state on other shards. You basically need to simultaneously roll back all shards, which is quite annoying. So, yeah,
Starting point is 00:38:39 so that's one piece. And another big piece is what I talked about initially, which is that it actually works really well with sharding, in the sense that, because we have sharding,
Starting point is 00:38:53 we can limit the size of each shard to some reasonable number, let's say 40 gigabytes or 50 gigabytes. And you can have this dichotomy of chunk producers, of whom you only need relatively few, because they cannot do anything malicious; they exist for censorship-resistance reasons. So they can run more expensive hardware. And you can have a lot of validators, who collectively validate the state transition.
Starting point is 00:39:20 But they don't really need to run any expensive hardware, because they receive state witnesses, which are relatively small. So now you can have this dichotomy where on the one hand you have these chunk producers that can run relatively expensive hardware. I wouldn't say it's super expensive; it's still consumer grade, let's say 64 gigabytes of RAM, which allows them to completely hold the state in memory as they are executing the chunks. So that would be really fast.
Starting point is 00:39:48 And then on the validator side, because they receive state witnesses that are relatively small, when it comes to execution, the state is also held in memory. So this actually addresses one of the biggest performance bottlenecks that we had before, which comes from state access. Because, as you know, when you have a database, whether it's LevelDB or RocksDB, you cannot guarantee really fast execution for every state access; it's just not possible. And
Starting point is 00:40:17 some of the state accesses will potentially become quite slow if you miss the cache. And that means you cannot really reduce the gas cost of certain operations. And yeah, that was actually a big performance bottleneck for us, and this basically addresses it. And it really works well with sharding, because, as I said, if you don't have the sharded setup, if there's no concept of sharding, all the producers need to track the entire state.
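The gas-pricing problem being described can be illustrated with a toy cost model: gas for a storage read has to be set assuming the worst case, because the protocol cannot know in advance which reads will miss the cache. All latency numbers here are invented for illustration.

```python
# Toy illustration: why gas for a storage read is priced at the worst case.
CACHE_HIT_US = 1      # microseconds for a read served from memory (made up)
DISK_MISS_US = 1000   # microseconds for a read that falls through to disk (made up)

def gas_per_read(possible_latencies_us):
    # Gas must cover the slowest path any single read might take,
    # since the protocol can't predict cache hits.
    return max(possible_latencies_us)

# With RocksDB in the path, rare misses force every read to be priced high:
disk_backed = gas_per_read([CACHE_HIT_US, DISK_MISS_US])
# With the whole shard's state held in memory, the worst case collapses:
memory_backed = gas_per_read([CACHE_HIT_US])
```

Holding a bounded-size shard fully in RAM removes the slow tail, which is what lets the gas cost of state operations come down.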
Starting point is 00:40:55 And at some point it probably becomes unrealistic for you to hold, let's say, 300 gigabytes of state in memory. Yeah, absolutely. Cool. I think we've talked about a lot of the tech stack. Maybe let's talk about the kind of applications that you're starting to see with finality this fast. Because in some ways it's a paradigm shift in what you can actually run on blockchains, right? Yeah, I think exactly.
Starting point is 00:41:33 So if you look at a lot of the applications on NEAR, a lot of them are consumer-focused, and we're seeing more and more chain-abstracted applications working on NEAR. Some of the largest applications on NEAR include Kaiching, which is this kind of platform that allows users to earn points by watching ads on your phone, and also Sweatcoin, which is a walk-to-earn app.
Starting point is 00:42:03 And for a lot of consumer applications, people don't really care about what blockchain it runs on, or even whether it runs on a blockchain, but they do care about the experience, the end-to-end experience they get, right? So when you have faster block times, faster finality, that's definitely very beneficial to them. And also, what we're seeing with those chain-abstracted applications that settle on NEAR, for example NEAR Intents, which is a decentralized
Starting point is 00:42:30 platform for multi-chain asset trading, is that the extremely short finality really means that it can settle really, really fast. And that is very appealing to people as well. So the crowd that you really have to convince to build on NEAR are developers, right? Because, as you say, users are not going to care, right? I mean, we care because we've been in this ecosystem for a long time and we're interested in the tech itself. But this is very much not a regular user, right?
Starting point is 00:43:10 All of this will be abstracted away from the user. So what are the value propositions that you have for the developers? Why would someone say, okay, I'm building this application on NEAR rather than on Solana or Ethereum? Yeah, I think definitely the fast block time and fast finality are very appealing to them, especially in certain financial use cases, where they want this fast finality guarantee that, you know, the transaction has already happened, has been settled. But also the cost is another big consideration for them, right? If you compare the cost on NEAR
Starting point is 00:43:54 to the cost on Ethereum, there's a pretty big difference, and cheap transactions are another appealing factor for developers. And now, with more work on chain abstraction, developers also think about what they can access from NEAR in this multi-chain world. I think that's another appealing factor to them. You talked about the intent-based infrastructure. Tell us a bit more about that. Yeah, so basically the idea is that we have built this intent-based decentralized infrastructure right now for multi-chain asset trading and settlement.
Starting point is 00:44:37 But it's pretty general purpose, and it can potentially be used for settlement or clearing use cases, and eventually potentially even settlement of physical goods trading. And we envision this to be the settlement layer for internet commerce eventually, because a lot of different things can be expressed as intents. But obviously, to get there, you need a more complex system that involves arbitration, what happens when there's a dispute, and all this kind of stuff. So right now it started as a multi-chain
Starting point is 00:45:25 trading platform. It's currently an RFQ platform where you can say, like, I want to trade one Bitcoin for one Ethereum, please give me a quote. And then there are a bunch of solvers trying to solve for this quote
Starting point is 00:45:38 and provide the user with the quote. And then the settlement happens on NEAR, inside the smart contract. So basically, when this intent is executed, the smart contract is updated. And yeah, it happens really fast. I've used it. It's super impressive.
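The RFQ flow being described can be sketched end to end. The class names (`Quote`, `FixedRateSolver`) and the cheapest-quote selection rule are hypothetical illustrations; NEAR Intents' real contract interface and solver protocol differ.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    solver: str
    give: tuple   # (asset, amount) the solver delivers to the user
    take: tuple   # (asset, amount) the solver demands in return

class FixedRateSolver:
    """Toy solver that quotes at a fixed exchange rate."""
    def __init__(self, name: str, rate: float):
        self.name, self.rate = name, rate  # rate: pay-asset units per unit wanted

    def quote(self, want: tuple, pay_asset: str) -> Quote:
        asset, amount = want
        return Quote(self.name, (asset, amount), (pay_asset, amount * self.rate))

def request_for_quote(solvers, want: tuple, pay_asset: str) -> Quote:
    """User broadcasts 'I want X, paying in Y'; every solver replies with a
    quote; the user takes the one demanding the least payment. Settlement of
    the winning quote then happens atomically in the intents contract."""
    quotes = [s.quote(want, pay_asset) for s in solvers]
    return min(quotes, key=lambda q: q.take[1])
```

The sub-second finality discussed earlier is what makes the final on-chain settlement step feel instant to the user.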
Starting point is 00:45:55 So I was totally blown away. Thank you. But it seems like there isn't all that much usage yet. Why do you think that is? A lot of it is related to distribution. You need to get to the right distribution channels. If you think about what people do today,
Starting point is 00:46:20 let's say they want to do token swaps and so on: outside of centralized exchanges, what they really do is go to their wallet. A lot of the wallets today have built-in swap functionality, and they just do it there. So if you have a completely new platform, it's really hard to get a good amount of distribution,
Starting point is 00:46:45 and that's why we're partnering with different distribution channels. We have the KyberSwap integration live, and there are a few large ones in the pipeline. So I think, yeah, if we get integrated with a number of major wallets and other similar distribution channels, that's where we'll see a lot more usage. Yeah, that makes sense. Despite the fact that you initially pivoted away
Starting point is 00:47:16 from the AI angle, this is known as the AI agent blockchain in some ways. Why are fast block times important for AIs? Or, what are the use cases you guys see? I think a number of different use cases. One is this kind of autonomous AI agents that we're thinking about. So it's actually a pretty interesting paradigm. Well, just to start, I think why crypto is generally interesting
Starting point is 00:47:54 for AI agents and so on is that it allows the possibility of AI agents owning assets, having the monetary means to actually do things. Because if you think about the traditional financial system, it's very, very unlikely that AI agents can pass KYC and get a bank account and so on. So crypto, in that sense, gives them this unique advantage, where they can actually own assets and do things with them. And on NEAR specifically, we're working a lot on this; we actually developed this thing called a Shade Agent,
Starting point is 00:48:33 which is roughly this architecture where you have an AI agent running inside a TEE, a trusted execution environment, and it has a controller smart contract on NEAR that controls its access to different tokens, basically what tokens the agent can control. And then you have this paradigm where the AI agent is running autonomously and can act on the user's behalf and do stuff with your assets in a non-custodial way. The reason why it's non-custodial is that anyone can spin up this agent.
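A minimal sketch of the controller logic being described might look like the following. Everything here is hypothetical: real Shade Agents involve TEE remote attestation and a NEAR smart contract, not Python, and the timeout value is an invented assumption.

```python
import time

class AgentController:
    """Toy model of a Shade Agent controller contract: the TEE agent may
    spend only allow-listed tokens, and if it goes inactive, the owner can
    reclaim all funds (the non-custodial escape hatch)."""
    INACTIVITY_TIMEOUT = 24 * 3600  # assumption: one day of silence

    def __init__(self, owner: str, allowed_tokens: set):
        self.owner = owner
        self.allowed = allowed_tokens
        self.balances = {}          # token -> amount held for the owner
        self.last_heartbeat = time.time()

    def agent_transfer(self, token: str, amount: float) -> bool:
        # Any call from the agent counts as activity, even a rejected one.
        self.last_heartbeat = time.time()
        if token not in self.allowed or self.balances.get(token, 0) < amount:
            return False  # agent can only touch allow-listed, funded tokens
        self.balances[token] -= amount
        return True

    def user_withdraw(self, caller: str, now: float) -> dict:
        # After the agent goes silent, only the owner can pull funds out.
        if caller != self.owner or now - self.last_heartbeat < self.INACTIVITY_TIMEOUT:
            return {}
        out, self.balances = self.balances, {}
        return out
```

The key property is the one stated in the conversation: the agent never custodies funds outright, because the user always retains a withdrawal path if the agent stops responding.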
Starting point is 00:49:21 And if the agent is inactive, there's a way for the user to withdraw the funds. So there's no worry about loss of funds and so on. And this gives rise to some, I think, very interesting autonomous AI agent use cases, where the agent is not really run or controlled by any corporation or government; it almost exists completely independently, and it just acts according to some program and does things. So I think that's a potentially very interesting use case. And, just to expand on the AI agent use cases, we expect that in the future there will be a proliferation of those AI agents,
Starting point is 00:50:09 and even one user may have many of them. And they would trade between themselves or do other things, settlement and whatnot. And you need a scalable platform to handle a lot of those transactions or interactions between them, whether that's financial or otherwise. So I think that's why it is important to have this scalable blockchain that is fast and also cheap, to enable a lot of the AI use cases there. Does putting
Starting point is 00:50:47 AIs on a blockchain that you can't stop worry you? Do you find that worrisome? I ask because, you know, in this entire AI
Starting point is 00:51:05 doom discussion, a lot of people end up saying, you know, if they turn on us, we'll just pull the plug. And this is something that's precluded if you run them on infrastructure that can't be turned off,
Starting point is 00:51:22 right? Yeah, I would say, I mean, the first thing I would say is, you cannot really stop that from happening. The technology is there and it will make progress. And I think it will happen one way or another.
Starting point is 00:51:41 I think instead of purely worrying about what the consequences may be, we should actually actively explore the frontier of the technology and understand what benefits we can get from having such paradigms. And, you know, in a lot of the technology revolutions in the past, people were always afraid of the new things, the new possibilities. But I think it turns out that, you know, we've benefited a lot from the development of technology in the past several hundred years.
Starting point is 00:52:13 That may be true, but it may also be selection bias, right? Had we not, we probably wouldn't be talking here. So, there's a lot of stuff going on in NEAR's ecosystem, right? You have AIs, you have DeFi, we talked about those, you also have an NFT ecosystem, you have DAOs, and so on. What do you feel is working, and what's harder than expected? I think, you know, what is working is that we've built this very scalable blockchain, this smart contract platform, that actually has a lot of usage, and it doesn't crumble under this heavy pressure. We have about 3 million daily active addresses and sometimes 10 million transactions a day, and sometimes even more than that.
Starting point is 00:53:13 And we've seen that the sharding architecture actually works well. So there was a time, I think in 2024, last year, when there was a sudden increase of traffic on the network, and we were able to address it by splitting some shards into two shards. And we've seen that this does actually address the network congestion problem. And that gave us validation and hope that we're on the right path: this is how you can address the scalability problem for blockchains, and what we've built has proved that point. What has been more difficult?
Starting point is 00:53:54 I think on some of the AI use cases, it's not that easy to build out the product that people want. For example, you know, we talk about confidential model inference and model training. I think those are very great technological promises, but they do take time to realize. And there is this mismatch between what people expect versus how fast it can actually be done. Because some of this, I would say, hardcore technology does take time to get built, just like sharding; it took a number of years. Yeah.
Starting point is 00:54:42 So in building such a complex protocol, are there any decisions that, in retrospect, you wish you would have taken differently? I mean, you can't really change it after the fact, right? But if you had to do it over, is there anything where you say, I would have decided this differently? Yes.
Starting point is 00:55:19 So I would say a couple of things. A big one that comes to mind is how the storage mechanism, or storage economics mechanism, works on NEAR. I know that this is also debated in the Ethereum community; I think there was a change, right? Initially you burn your gas and then you get storage, and then there's this refund mechanism, and then people abused that in very interesting ways. And on NEAR, we initially had the storage rent idea that, I think,
Starting point is 00:55:58 Solana uses today. But then there were some problems that we observed, so we decided to switch to the storage-staking mechanism. Basically, you need to lock tokens for storage, and then when you, let's say, decide not to use the storage anymore, you can remove the data and get the tokens back. You just lock them for the time period that you want to use the storage. So, I mean, that sounds reasonable.
Starting point is 00:56:26 The problem is that, in reality, people don't really delete much data from the blockchain. And that by itself is okay, but it actually creates this developer-experience problem where a lot of times you need to deposit some tokens into the smart contract to pay for storage, because obviously you cannot just make the smart contract pay for every user's storage; that would be a vector of attack.
Starting point is 00:56:52 So now the developer has to deal with this thing where, on some smart contract calls, the user needs to attach a storage deposit, and they need to calculate exactly how much deposit they need to attach to pay for the storage, and that has been a common pain point for developers. So, yeah, I think that is definitely something we regret in retrospect. And this is something really hard to change.
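The deposit arithmetic being described looks roughly like this. The per-byte price used below is NEAR's documented `storage_amount_per_byte` protocol parameter (10^19 yoctoNEAR per byte, i.e. 1 NEAR locks about 100 KB); the function names are illustrative, and estimating `new_bytes` from the payload is the part that is painful in practice, because it varies per call.

```python
# Sketch of the storage-deposit math a NEAR dApp has to do before a call
# that allocates state on the user's behalf.
YOCTO_PER_BYTE = 10**19   # NEAR's documented storage_amount_per_byte
YOCTO_PER_NEAR = 10**24   # 1 NEAR = 10^24 yoctoNEAR

def storage_deposit_yocto(new_bytes: int) -> int:
    """Tokens to attach so the contract can lock enough to cover the
    storage it will allocate for this call."""
    return new_bytes * YOCTO_PER_BYTE

def as_near(yocto: int) -> float:
    return yocto / YOCTO_PER_NEAR
```

For a typical small record (say, registering a user entry of ~150 bytes), the deposit is a tiny fraction of a NEAR, but the contract call fails if the attached amount doesn't cover the bytes actually written, which is exactly the calculation developers complain about.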
Starting point is 00:57:21 Because if we change it, it pretty much breaks everything. I see that. Is there a way to just automate the calculation of how much you have to deposit? Yeah, the problem is that sometimes it's dynamic; it depends on the payload. If it's completely static, then yes, I think that's less of a problem. But the problem is that sometimes one smart contract calls another,
Starting point is 00:57:44 and then you have to figure out how much token to attach to call the other smart contract. Things like this become very annoying. Oh, yeah, I see that. So, zooming out from today: if you think about success for NEAR over a long time in blockchain terms, say five years, what does success look like for you guys? Yeah, I think basically we have this dream of user-owned AI, or the user-owned internet, of having NEAR be a core part of the new paradigm of how the internet is going to shift.
Starting point is 00:58:28 So basically we believe that the future of interacting with the internet will be through some kind of AI interface rather than through a browser. And we're building this entire stack, from the protocol to the intents framework to various AI infrastructure and applications, that we hope can power this future where people interact with the internet
Starting point is 00:58:53 through their AI system. And in that world, we want people to control their data, to have their own AI system that works on top of their data, instead of having everything be controlled by certain large corporations. Yeah. That's certainly a good vision.
Starting point is 00:59:13 And I think, if you look at the way the internet is going, it's much needed, right? So if I had a crystal ball and I were to tell you this doesn't materialize, what do you think the root cause will be? I think there could be many, but one of the big ones is that many people believe that the large AI companies today have already reached some kind of escape velocity, and it becomes really hard to turn around and have, for example, an open-source version that can meaningfully compete. And a related point is that today there is no incentive structure for open-source
Starting point is 01:00:10 AI. It's not just that they're behind; there's no incentive to really push it forward and compete with the biggest companies there. And that's actually one of the things we wanted to solve with what we're doing on the AI side: to have the proper incentive mechanism that blockchain can provide, for model training and fine-tuning and so on. But yeah, some people would say that you're completely crazy: how is it possible that you can even imagine that this can be done?
Starting point is 01:00:39 Right. So I think that's definitely a possible way to fail. That's a little bit of a dour note to end on. But I think asking yourself, if this doesn't succeed, what's likely the cause, is often a good way of addressing the potential flaws in the system, right? So where do we send people who are interested in using NEAR,
Starting point is 01:01:15 building on NEAR, learning more about NEAR? Yeah, go to near.org, our website. We're actually in the process of launching a new website that will have more content about what we're doing on AI, on chain abstraction, and so on and so forth. Cool. Fantastic. Thank you, Bowen.
Starting point is 01:01:33 It's been an absolute pleasure. Thank you. I really appreciate you having me on.
