Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Illia Polosukhin: Near Protocol – From AI to High-Throughput Blockchain
Episode Date: January 6, 2024

What began as an AI company seeking a way to pay remote (unbanked) workers became, in 2018, Near Protocol. Its sharded design was inspired by modern database architecture and large language model (LLM) training. Near Protocol aims to solve the scalability trilemma through a modular approach, combining data availability sharding with stateless validation. By abstracting away archaic blockchain standards, Near enables decentralised full-stack development and, in terms of UX, a distributed custodial solution via chain abstraction and account aggregation.

We were joined by Illia Polosukhin, co-founder of Near Protocol, to discuss Near's journey from AI company to high-throughput L1 blockchain, and how LLM training influenced the modular design choices.

Topics covered in this episode:
- Illia's background in AI & ML
- Scaling large language models (LLMs) and the role of attention
- Stochastic parrot vs. understanding spectrum
- From Near AI to Near Protocol and the role of LLMs
- How Near abstracted the blockchain away and enabled decentralised full-stack development
- Defining ecosystem standards to improve UX
- Chain abstraction, account aggregation and interoperability
- Chain threshold signatures
- Near's intent layer
- Near's modularity, Nightshade sharding & stateless validation
- EigenLayer integration

Episode links:
- Illia Polosukhin on Twitter
- Near Protocol on Twitter

Sponsors:
- Gnosis: Gnosis has built decentralized infrastructure for the Ethereum ecosystem since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Meher Roy & Felix Lutsch.
Show notes and listening options: epicenter.tv/529
Transcript
This episode is brought to you by Gnosis.
Gnosis builds decentralized infrastructure for the Ethereum ecosystem.
With a rich history dating back to 2015 and products like Safe, CowSwap, or Gnosis Chain,
Gnosis combines needs-driven development with deep technical expertise.
This year marks the launch of Gnosis Pay, the world's first decentralized payment network.
With a Gnosis Card, you can spend self-custodied assets at any Visa-accepting merchant around the world.
If you're an individual looking to live more on-chain or a business looking to white-label the stack, visit gnosispay.com.
There are lots of ways you can join the Gnosis journey: drop in the GnosisDAO governance forum, become a Gnosis validator with a single GNO token and low-cost hardware, or deploy your product on the EVM-compatible and highly decentralized Gnosis Chain.
Get started today at gnosis.io.
Chorus One is one of the biggest node operators globally and helps you stake your tokens on 45-plus networks like Ethereum, Cosmos, Celestia, and dYdX.
More than 100,000 delegators stake with Chorus One, including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields, but also the most robust security practices and infrastructure that are usually exclusive to institutions.
You can stake directly to Chorus One's public node from your wallet, set up a white-label node, or use the recently launched product, OPUS, to stake up to 8,000 ETH in a single transaction.
You can even offer high-yield staking to your own customers using their API.
Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Felix and I'm here with Meher.
Today we're speaking with Illia, who is the co-founder of NEAR and CEO of the NEAR Foundation.
NEAR is a sharded layer-one blockchain.
So welcome, welcome, Illia.
Welcome back on Epicenter.
It's great to have you for a second time.
Yeah, thanks for having me.
And congrats on 10 years.
Epic achievement in the space.
Yeah, thanks so much. Yeah, like you said, it's basically 10 years in crypto, so we've all aged a bit. Yesterday, in the 10-years episode we recorded, they had, like, a slideshow, and you could see the progression of Meher and Brian and Sebastian, like, from their youth to their 40s or late 30s.
So yeah, that's great. Cool.
Yeah, we actually wanted to start conventionally, with your background. But in your case, it's a very interesting background, in AI and machine learning. So we wanted to first sort of talk about your work there. You're one of the authors of the original Transformers paper. So can you, yeah, maybe start by telling us about your start in the AI and ML space?
For sure, yeah. I mean, so I started tinkering with AI, I think, even in high school. I was actually excited about neural networks as a concept, and I worked for a machine learning company, a pretty old-school machine learning company, starting from my first year of college.

But when I saw, kind of, deep learning resurfacing in 2012, 2013, there was this kind of seminal work at the time, which now feels like "duh," but back then was very exciting: they trained a neural network to encode an image and then decode it back into the same image. So, pre-training, as we now know it. And that model, without any supervision, learned to detect cats. There was a neuron in the network which, if you activate it, would generate a cat, and, like, different types of cats. And so it learned something, like, semantic, without any training, like, any input data from humans, right, just by looking at images. And that was done at Google by Jeff Dean and Andrew Ng, and they did it on a bunch of GPUs and managed to scale it up. And I'm like, I want to do that. I think that's the thing that's going to, you know, change things. And so I joined Google Research.

My belief always was that natural language, not images, is going to be the driver for reasoning and for, kind of, intelligence, because, you know, there's many, many species in the world, like hundreds of thousands of species, that see, and only one species that talks and has language, right? So there's way more semantic information in language. And so my team worked on a variety of things, specifically question answering.
So when you type questions on Google.com, we were actually running neural networks to try to read the pages that you see and respond to you with, like, a short answer. So, like, you would sometimes see short answers. Now, the challenge was the neural networks at the time, specifically recurrent networks, were too slow to be put in production. And so we were just using bag-of-words models, which means you literally throw all the words, without any order, into the model, and it kind of tries to figure out what's going on. And it worked reasonably well.
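As a toy illustration (not the production system used at Google), a bag-of-words model can be sketched in a few lines: word order is discarded, so any reordering of the same words maps to the same vector.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Count how often each vocabulary word appears; word order is lost."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["who", "wrote", "the", "paper"]
# two different orderings produce the identical vector:
v1 = bag_of_words("The paper the", vocab)
v2 = bag_of_words("paper the the", vocab)
assert v1 == v2 == [0, 0, 2, 1]
```

The appeal was speed: counting words is trivially parallel, unlike stepping a recurrent network through the text one token at a time.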
But, and this is where, kind of, the Transformers were born, we could not use RNNs in any practical use case. And so we were looking, kind of, for something. And so Jacob, who was a manager and had, like, another team, came up with this idea: they were using attention on top of words, without any recurrence, for another task. And so, kind of, taking that idea, can we use attention to somehow figure out which words are relevant, and in what order, when you answer questions or translate something? And that kind of gave birth to the Transformers, really. It was like, we need something that's really performant, that can be highly parallelized, and attention is a really good mechanism, you know, logically, to do this.

But if you package it all, kind of, the way these models really work is that everything happens in parallel. Like, the way I like to describe it, there's this movie Arrival where the aliens talk in a whole sentence at the same time. Like, there's, like, a circle of squiggles, but they produce it all at once. And that's kind of how Transformers actually read articles. It's not, like, one word at a time. It literally reads the whole article, all the words in parallel, and then has multiple steps to, kind of, process it and reconcile the understanding of it, and then it answers the question. So that maps really well onto the modern hardware, GPUs, that we use, and so it allows you to have, like, this massive, kind of, performance improvement, which also means you can scale out the models.
And so I worked on that with a team of amazing researchers, who have all gone on to do really cool stuff. And then, at that time, I decided to leave Google to start an AI company, Near AI, which was supposed to be, pretty much, teaching machines to code. So my belief, and I still believe this, is that given these kinds of models, you can change how we interact with computing. You can actually talk to computers, and they do work for you, instead of needing to have an engineer write code for you, right? Which, again, like, now seems more obvious that that's possible. Back in 2017, it was like, huh?

And so we started an AI company, but we gave ourselves a year, because obviously, at that time, it was a moonshot, and we didn't have that many resources. So we were doing some interesting stuff around data collection and some machine learning stuff. But one thing we ended up doing was getting a lot of people around the world actually, like, writing some code for us, writing some descriptions for the code. And we struggled to pay them, because they were mostly students in China and Ukraine and Russia and, kind of, some other countries. And, like, some of them don't have bank accounts. In some, like Ukraine, for example, PayPal doesn't work. In China, PayPal doesn't work. And so there was, like, no good way to send people money programmatically. And so we started looking at blockchain as, like, hey, can we just send people money easily in code? And the answer, in 2018, was actually no, because even back then the fees on Bitcoin and Ethereum were way too high.

And then, as you probably know, when you start on the blockchain rabbit hole, you can't stop. You just keep digging. And you're like, wait, what is this? And so, as we kept digging and researching different blockchains and different technologies, we were like, we actually know how to build something of this sort, right? My co-founder Alex was building a sharded database company before, and we have, like, you know, a systems background. We were like, we can probably do this, but we can focus on user experience and developer experience while, kind of, solving the scalability underneath and making sure fees stay stable. And so that's, kind of, how we went from Near AI to becoming Near Protocol in 2018 and starting this journey.
So, Illia, in this current wave of LLMs, of course, like, this attention mechanism is a key part, but another key part is just the idea of scale, right? Like, collect a lot of data from the internet, from books, and then pre-train the model. And of course, the ideas of RLHF and all that came later. But the fundamental idea is: you throw in a lot of data, you pre-train, you make a big model, and it sort of produces good results. Did you anticipate that scale was going to work this well? And did you use that approach in Near AI?
No, that's a part that definitely, kind of, was interesting to see: as people scaled up the models, they started exhibiting, kind of, properties, like, more and more sophisticated reasoning properties. And, like, it makes sense now, when you think about it: the capacity of the model is higher, it's able to generalize better, it's able to, kind of, learn, quote-unquote, programs that it can execute. But yeah, at the time, it wasn't, like, particularly clear that it would be that kind of step-function change. And so, yeah, at Near AI we were not doing that, partially because we also just didn't, you know, like, we raised, you know, a small, like, pre-seed round, actually, and we thought we could get better supervised data instead. And we did some pre-training on GitHub and things like that, but we didn't think of training on the whole internet at large scale. And we didn't have the resources to do something like that, either.
And the other interesting thing is, kind of, like, this attention mechanism, like, it's built for natural language processing, but it also seems to, kind of, work across different modalities, such as, like, images, and maybe video in the future. And, like, how does that come across to you, right? Is that unexpected, or is that something you expected in the past?
I mean, like, when the Transformers were just in development, the teams actually tried them on different modalities. I mean, not, like, multimodal models, but different modalities. And it was pretty interesting to see it worked really well. So I think it was, kind of, known that it works on different modalities pretty early on.

I think the, kind of, intuition there is really that, you know, the way we work, kind of, as well, is very much like... like, our eyes actually move all the time, every, like, I forget how many milliseconds. And so we actually, kind of, pay attention to different parts, and then our brain, kind of, reconstructs the image at different levels. And, kind of, you know, natural language is the same, right? You read sentences, you, like, build some semantic meaning, and then, you know, you kind of continue building out the meaning of what you read. But sometimes you, like, zoom in on specific words when you need to answer a question. And so, like, I think, generally speaking, there is, like, intuition behind this, but obviously, again, it's interesting to see how well it all works.

Right. Definitely, you know, like, we had pretty good models, like, even before. It's just, like, they were super slow, and you couldn't use them in production at all. But this, you know, obviously, like, the scale was what, for example, OpenAI went and scaled up. And by the way, they did a tremendous amount of work to make it work. Like, we cannot take it for granted. It's not just, like, oh, we just increased parameters and hit enter. Like, no, it was a ton of work across the board, from, you know, low-level engineering to, like, fine-tuning to, you know, they changed some of the, kind of, details of the model architecture as well.

But yeah, like, it was surprising for me. Like, I think, like, when it went from two to three, that was, like, interesting. Like, two, it was kind of like, okay, yeah, I get it. Like, we'd trained models like that at Google, kind of thing. From two to three, it was like, okay, that's really interesting, because I can see, you know, there's, like, something more now happening. And obviously 3.5 is where it's like, okay, yeah, it actually learned something that is, like, beyond just language modeling, right?
like there's some reasoning that is extractable now
through kind of this instruction fine-tuning.
On a high level, I'm actually curious what your stance is on this stochastic parrot versus understanding spectrum. So there are people in the AI community who say, let's say, that LLMs actually don't understand anything. They are stochastic parrots in the sense that they have understood the statistics of what word follows what other word in language, because they have seen billions of examples. And when you talk to an LLM and it's generating words, it's just replicating the statistics of what it has seen in the past, without any actual understanding behind it. That, at the extreme, is the stochastic parrot view.

And then at the other extreme, perhaps, there's a view, maybe, like, the Ilya Sutskever view, which is, kind of: when you force a model to predict the next word, and you force it to do it again and again, then in order to predict the next word well, it has to start learning something about the world itself to do the job of prediction well. And in trying to predict it well, it is forced to learn about the world, and so it has actual intelligence about the world it is in. So it's not just a stochastic parrot; when you're talking to GPT-4, you're talking to something which has understanding distilled into it. And there seem to be, like, these two extremes in the space, and I'm curious, like, where you stand on that debate.
Yeah, I mean, I'm definitely closer to Sutskever's view. Like, from my perspective, kind of, you know, at the end, it's a bunch of math, right? And so, like, you can kind of decompose what this math is doing and, you know, try to build an intuition around, like, the types of transformations it can or cannot do.

And so, from my perspective, kind of, you know, the first step is: you take the document and you embed it, right? So you went from words into a multi-dimensional space, into dots in a multi-dimensional space, right? I mean, let's, for a second, imagine it's two-dimensional, although it's many more. And so, kind of, the words that are similar, right, are, you know, close in the space, and the words that aren't are far apart. Now you have a next layer which transforms these words, right, to kind of give them more context. And so just, you know, think of it as a rotation in the space. And then you have attention, which is, you know, given the current word, you try to pull in the context of the words around it to give it more semantic meaning. And so that's another transformation, right? So, like, in a way, you take a set of words, right, and then you kind of keep transforming them. And so what it learns is the transformation function, which, in a way, is a program. It's a program that is trying to transform the words into a level which is useful, then, to predict the next word, right? And then, later, to respond to questions.
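The transformations described here can be sketched as a stripped-down, dependency-free version of self-attention (a toy: real Transformers add learned projection matrices and multiple heads). Each word's vector becomes a similarity-weighted blend of the vectors around it — one "transformation of the space":

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(embeddings):
    """Each word's vector pulls in the vectors of related words,
    weighted by dot-product similarity: one transformation step.
    Every word is handled independently, i.e. in parallel."""
    d = len(embeddings[0])
    out = []
    for q in embeddings:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]          # relevance of every word to q
        weights = softmax(scores)               # positive, rows sum to 1
        out.append([sum(w * v[i] for w, v in zip(weights, embeddings))
                    for i in range(d)])         # context-enriched vector
    return out

# three "words" as dots in a toy 2-dimensional embedding space
E = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(E)
assert len(out) == len(E)   # same words, now blended with their context
```

Stacking several such layers is what "keeps transforming" the dots, and the learned weights are the "program" being discussed.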
And so, is this, like, a pure stochastic parrot? Well, a pure stochastic parrot is what we had when we were doing, just, like, you know, generating Wikipedia articles, for example, right? You just give it a name and say, generate a Wikipedia article. Like, that's pure, you know, it just makes stuff up, because, like, that name doesn't exist, right? There's nothing there, so it just generates something that looks like an article. But when we're starting to look at, like, okay, well, how would you answer this question, right? To be able to do that, it needs to kind of process information, right? It does these kinds of transformations on the article, and, like, it's trying to contextualize that and give the answer.

So, in a way, like, I think of it as: it learns some set of programs that, like, our world has, right? So, like, it's not a complete world model, right? It clearly has a lot of gaps, but it is a kind of set of programs that our, like, world model has, that it can apply to be able to answer well, or predict the next word, for training. And that, in itself, is really useful, right, as we see. But also, because it has so many gaps, it has issues with doing some, you know, kind of, specific things. And the more precise it needs to be, the less well it does, right? Because it kind of ends up being, like, the programs are very probabilistic and kind of semantic, versus, you know, if you ask it to, like, describe the steps of something. But at the same time, with a lot of the things we do, there's just, like, a few core things, and then everything else you kind of fill in automatically, right? So that's why it's really good. Like, even at coding, like, most of the coding we do, right, is actually kind of boilerplatey. And so with, like, a few nudges, you can actually get to, like, reasonable code. And that's why I think, like, things like copilots are pretty good products as a result.
Cool.
So, turning to the applicative view: now these LLMs are pretty amazing, and you have some applicative ideas on applying them to the NEAR ecosystem. So, yeah, what are they, and how do you see that unfolding?
Yeah, I think of this, kind of, across three dimensions. So the first dimension is actually less about AI itself and more about our, kind of, society. And this is the idea that, kind of, more content is being generated, and there's more, kind of, information wars in general, misinformation. And again, the important part to note: misinformation is not an AI problem. It's a human problem. You know, we are in the crypto space, and so Byzantine generals is something that our space is based on, and that's literally about, you know, malicious misinformation. And so the idea of misinformation, of a malicious attack on information, is something that has existed, you know, from, like, early on.

And so, from my perspective, the way to, kind of, start solving that is to bring the, kind of, security, cryptography, and reputation to the level of the content, of individual pieces of content. So right now, for example, we are using websites. We have HTTPS. And so we have some set of security guarantees around accessing specific websites. But the content on the website can be coming from anywhere. It can be saying anything. And there's no way to, kind of, maintain reputation, context, comments, et cetera, around it. So we need a new set of standards around that, so that you can hover on an image, or a video, or a piece of text, and it tells you, like, who published it, when it was done, and if there are any side comments or context, et cetera, from reputable sources that should be attached to it. For that, we need blockchain. We need, you know, a set of standards. We need browser support, and we need, kind of, publishers to be supporting this. And I think that's a really important part for our society generally, because otherwise we're going to be living in a world where, with, kind of, you know, all the content, you never know if it's true or not, right? And there's constantly, like, kind of, manipulation around that.
Now, kind of, the second pillar for me, I call it, kind of, decentralized AGI. So, if we assume, you know, these models are getting more powerful, more intelligent, what you definitely don't want is a single company, or, you know, two or three companies, deciding what's right and wrong for these models to do. You don't want them to decide what you're allowed to do and what you're not allowed to do with models. It's, like, the same thing that happened with social networks. Like, being a, kind of, moral police for the world just doesn't work. The world is very multidimensional. Something that's legal in Amsterdam is completely illegal in a lot of other countries, and the other way around. And so, like, you know, what's moral is even more complicated. And so it's really important to have the community be governing, kind of, the alignment, the safety, as well as, kind of, the instruction datasets that these models are trained on.

As well as being able to validate that the model you run is actually the model that you wanted to run. So right now, if you call the GPT API or the Google API, you get a response, but you have no idea of, like, who produced that response. You have no guarantees that it was the model that you wanted to run. And actually, sometimes it's not, because they're trying to optimize costs. And so, like, how do you actually have these guarantees, especially for something that's mission-critical, right? Like, if I'm doing trading on this, if I'm doing healthcare, like, any kind of business decisions, right, you want to make sure, you know, you're accessing the model, that you have predictable parameters and outputs.

And so, for that, we need decentralized inference. We need, kind of, model marketplaces. We need, kind of, community data, crowdsourcing, data management, governance, and so, kind of, the whole stack of tooling that really manages this. And then, you know, on top of this, you'll be able to, kind of, interact with it. I think the other piece is, like, making sure it's privacy-preserving, so that when you interact with it, your data stays with you. So there's a lot of work to be done. There's, like, a bunch of startups doing decentralized inference. There's still a privacy gap, I think, that people are researching, but it's still pretty far. There are some data marketplaces. There are some other, kind of, pieces, but it's not really, I would say, like, combined into a product story yet. But I think, like, that's really important for, like, humanity, period, because otherwise, you know, like, tomorrow you go to your favorite AI model and it says, like, oh, you're banned, or, you used the incorrect word, and so no, or something, right? So, all the usual stuff we've seen before.
And then, finally, I actually think the flip side of this is local models, right? Because although, like, these big models have the world knowledge, and maybe access to lots and lots of context, actually, what you want most of the time is a model that knows everything about you, but you don't want all this data to go anywhere else, right? You want it to live with you, on your machine, on your private encrypted data store, and you want a model that's able to access that. So you want a local model that is personalized for you; you control it. It's not affected or manipulated in any way by, you know, advertisement giants. And so it's actually on your side, and it's just responding, kind of, the way you would like it to, not the way, you know, somebody else wants it to, or whatever. And so I think that is a really important side of this as well.

And so we've actually been playing around with, like, edge intelligence, and I did a couple of events and have been, kind of, talking with some projects around this space. And it's actually, it's less Web3 in the sense of blockchain, but it's more Web3 in the sense of principles, right? It's user-owned AI, it's controlling your own data. It's, like, all of those values that we talk about. And I think that and, kind of, the Web3 self-custody world will be converging more, kind of, on the principles side, right? Maybe on the technology side as well. And this is, kind of, the area I'm most excited about working on right now.
So, in practice, how are you approaching this? Are there, like, teams you are funding, or is there, like, an AI team in NEAR? Or how can we imagine this?
Yeah, so we've been working with some AI teams. We actually just had NEARCON about a month ago, and we had an AI track there, with some projects presenting that we're already working with, as well as, kind of, I'm working as, like, an advisor with a few projects, kind of, more closely. And we do have, I would say, like, AI efforts more on also just automating our own operations.

So the other side of this is, I think, kind of, the ecosystem itself should become AI-enabled and, over time, AI-run. So, like, ideally, my, you know, my job and, kind of, the job of coordinating the ecosystem should be done by AI. And by the way, this approach actually solves the core problem of humanity and of resource coordination. The core problem of humanity is the principal-agent problem: when we want somebody to do stuff on our behalf, like, we select them, you know, in elections, or we hire someone to manage our money or something else, they have their own needs and they have their own wants. And so their decisions are usually not fully aligned with us, who hired them. So that's called the principal-agent problem. And AI actually being the agent that behaves on our behalf is the way to solve that.

And if you scale it up to, kind of, the governance level, right, like, actually having AI being the actor that, you know, makes decisions based on what the population wants is a way to solve a lot of the current challenges, where, you know, you, like, elect someone and the day after they do stupid things, or not the things that they promised to do. That's a way to really address it. And so there's a really interesting, kind of, future of governance there. But, like, we can start applying it now in these decentralized ecosystems, because they're already fully digital. They already have, kind of, like, all the actions on chain, right? So you can have traceability. You can have, like, veto power, et cetera, if something goes wrong. And so I'm really excited about that side of applying AI in the Web3 space as well. And obviously, you need that whole decentralized AI stack to do that. But we are, kind of, starting to do it from the bottom up on our side, just in the Foundation, for example: like, hey, what are things we can automate? What are things that we can, like, start leveraging this technology for? As well as maybe building some of the tooling for developers to build, kind of, AI-enabled things in the space. We also have, yeah, a bunch of projects that are, kind of, experimenting with this across different areas.
Yeah, that's super awesome. I also actually saw your co-founder, like, Alex, working directly on, like, smarter LLMs. Can you maybe also, like, tell us what that's about? Is that related to NEAR, or is it, like, some totally different thing? Or what can you share about that?
Yeah, so, I mean, it's a stealth project right now, so I'll not go into too much detail. Maybe you'll have him on, you know, at some point, to go more in depth into it. But yeah, I mean, I'm an advisor there, and we work, kind of, I would say, side by side. But, yeah, he's focusing more on the lower level and, like, kind of, preparing for the future of this as well.
Yeah, I guess, like you're mentioning, right, like, AI sort of also making our life easier, in the sense of operations in organizations, but also, I guess, yeah, in the wider society. And I guess that's always been, like, a huge focus of NEAR. So, yeah, we wanted to sort of dive also into that side of NEAR, where, basically, you're branded now in many places as, like, the blockchain operating system. And I think, yeah, one of the core features around that is, like, sort of the UX focus of NEAR. So maybe, yeah, can you explain to us how NEAR has sort of approached, yeah, basically, usability for developers and users in blockchain systems, and what you're currently doing there?

For sure, yeah. So, I mean, this was our vision from the start, because when we started, kind of, diving into blockchain ourselves, and again, this is 2018, so things were different. You know, you needed to install Mist. So the, I mean, the experience was pretty, like, painful. And also, it was built on top of, kind of, a very different set of primitives, I would say, like, conceptual primitives, than what people, both users and developers, normally expect, right? So, you know, you need, like, to understand wallets, you need a seed phrase, you need to, like, kind of, pay gas; you need to do all those things, which are, like, strange when you, you know, when you're just starting. And what we've tried to do from the start is, like: how do we design, kind of, still, like, a blockchain that is secure, that has all the same properties that we all want, but is able to, kind of, hide a lot of this complexity, ideally most of it, and make, you know, the blockchain kind of abstracted out, such that developers, when they build applications, can build, like, as close to a normal app experience as possible, but using the benefits of Web3, using, kind of, all of the value, and then also enabling users to have, like, more compatibility, right, more ownership, kind of, being able to interact with multiple, kind of, applications and have this, like, portability of data.
And so NEAR itself was designed with this in mind. So our accounts, for example, you know, the account abstraction part of the accounts, have been designed from the start at the protocol level.
There's like a bunch of differences that we've done,
including that accounts themselves are just a username
that follows kind of domain name structure.
We have lots of different keys with different permissions, which allows you to have multiple devices securely. It allows you to delegate access. It allows, for example, the front end of an application to have a session key to transact for a specific set of interactions.
So kind of all of this functionality comes in by default.
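The multi-key account model just described can be sketched as a toy data structure. Everything here (the class names, key formats, and the `can_call` check) is invented for illustration and is not NEAR's actual API; it just shows how a full-access key and a scoped session key can coexist on one named account.

```python
# Toy sketch of NEAR-style accounts: a human-readable account name holding
# many keys with scoped permissions. Names are illustrative, not NEAR's API.
from dataclasses import dataclass, field

@dataclass
class FullAccessKey:
    public_key: str  # full control: transfer, deploy, add/remove keys

@dataclass
class FunctionCallKey:
    public_key: str
    receiver: str   # only this contract may be called with this key
    methods: list   # only these methods are allowed
    allowance: int  # gas budget; this key cannot move tokens directly

@dataclass
class Account:
    name: str  # e.g. "alice.near", following a domain-name-like structure
    keys: dict = field(default_factory=dict)

    def add_key(self, key):
        self.keys[key.public_key] = key

    def can_call(self, public_key, receiver, method):
        key = self.keys.get(public_key)
        if key is None:
            return False
        if isinstance(key, FullAccessKey):
            return True
        return receiver == key.receiver and method in key.methods

# A session key handed to an app's front end: it can only call one game
# contract, so a leaked key cannot drain the account.
alice = Account("alice.near")
alice.add_key(FullAccessKey("ed25519:AAA"))
alice.add_key(FunctionCallKey("ed25519:BBB", "game.near", ["play", "claim"], 10**24))
```

This is the shape of the "session key for a specific set of interactions" idea: the front end holds `ed25519:BBB` and can only ever call the game contract.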
And then on the developer side, the choices we made are around, first of all, choosing WebAssembly, which at this point everybody seems to agree on. It's an engine that runs in all the browsers; it's on billions of devices at this point. It's supported by a large network of developers. It runs on the edge. It supports lots of languages; you can run a lot of software in it. And so we picked that and made it really easy to build with.
In a way, from a developer perspective, when you write a NEAR smart contract, it's really just a service which has messages in and out, and you have a local key-value database whose limits are so big that I don't think anybody ever hits them. I think we have contracts that have, like, four gigabytes of storage in their database, right? So you can build massive, massive contracts.
Specifically, you can build whole chains as a smart contract on NEAR. So we have Aurora, which is an EVM as a smart contract. You just take the EVM that people usually run as a separate chain and put it in a smart contract. Its database is where all the state is stored, right? You can do the same with Bitcoin. I've been suggesting somebody fork Bitcoin and put it on there, make it ultrasound money.
We have JavaScript running as well, so you can run JavaScript smart contracts.
You can potentially do Python and other stuff.
So it kind of enables developer experience across the board.
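The "contract as a service with a key-value database" mental model can be sketched in a few lines. This is a toy, not NEAR's runtime: a contract receives messages in, mutates its local key-value storage, and returns messages out.

```python
# Toy model of the mental picture above: a smart contract is a service with
# messages in/out plus a local key-value store. Method names are invented.
class KvContract:
    def __init__(self):
        self.storage = {}  # the contract's per-account key-value database

    def handle(self, message):
        # message in -> optional state change -> message out
        op = message["method"]
        if op == "set":
            self.storage[message["key"]] = message["value"]
            return {"ok": True}
        if op == "get":
            return {"ok": True, "value": self.storage.get(message["key"])}
        return {"ok": False, "error": "unknown method"}

counter = KvContract()
counter.handle({"method": "set", "key": "count", "value": 1})
```

In this framing, something like Aurora is "just" a `handle` that interprets EVM transactions and uses the key-value store for all EVM state.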
And since then, following the same principle, it's like: okay, well, now that you can build anything on the smart contract side, what's the next part? Well, actually, you want to get data out of the blockchain. And blockchains are not optimized for reading data; we've tended to optimize them for writing and for maintaining security. And so for reading data, you want a completely different data structure.
And so hence there is this principle of indexing and, in a way, off-chain computation. And so we've been building an indexing framework, and that actually culminated in what we call Query API, which is a service where you can write something like a smart contract that describes the indexing of data, and that executes off-chain. So in a way, it's an off-chain computation framework that allows you to store the output of that computation in SQL databases that you can then query.
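A minimal sketch of that indexing idea, assuming a made-up block format: the indexing logic is a plain function that any off-chain worker can run over blocks, writing rows into a SQL database that is then queryable.

```python
# Hedged sketch of the Query API idea: indexing logic is declared once (here a
# plain function standing in for on-chain-stored code) and any off-chain
# worker can replay it over blocks into SQL. The block format is invented.
import sqlite3

def index_block(block, db):
    # user-defined indexing logic: extract token transfers into a table
    for tx in block["transactions"]:
        if tx["method"] == "ft_transfer":
            db.execute(
                "INSERT INTO transfers(sender, receiver, amount) VALUES (?, ?, ?)",
                (tx["sender"], tx["receiver"], tx["amount"]),
            )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transfers(sender TEXT, receiver TEXT, amount INTEGER)")

blocks = [{"transactions": [
    {"method": "ft_transfer", "sender": "a.near", "receiver": "b.near", "amount": 5},
    {"method": "storage_deposit", "sender": "c.near", "receiver": "c.near", "amount": 1},
]}]
for block in blocks:
    index_block(block, db)

total = db.execute("SELECT SUM(amount) FROM transfers").fetchone()[0]
```

Because the logic is deterministic over the same blocks, any worker that runs it produces the same database, which is the point made below about swappable servers.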
And finally,
well, okay, now you have back end and
middleware, now you need a front end, right?
And again, it seems weird that we're like, oh, you build everything decentralized, but now run a server on a specific domain that you will need to maintain. It's like, okay, well, that kind of violates the whole point of what we're doing.
So we created this decentralized front-ends framework that allows you to store the front-end code itself on chain. So again: the smart contract code on chain, the middleware code on chain, the front-end code on chain. And now anyone, any of what we call gateways, can render this code on the user side, right? So we have a desktop app, you can have a mobile app, and we have, obviously, web apps that can load that from the blockchain directly in your browser and render it there. So there's no middle server that's needed to render.
You don't need to have a domain.
You can obviously, if you want to.
And so you can just, you know, launch part of your web app as this decentralized front-end component. And now it will live forever on the blockchain, right, side by side with your smart contracts. It has the same upgradability; it has cryptographic security over who controls it; it has versioning. So if I, as a user, don't like a new version, I can go back to the version before. And so all of the same properties we really like about smart contracts we now get for front ends. So all of that really enables full-stack decentralized development that is familiar to normal developers. It's React JavaScript components. It's JavaScript for middleware indexing. It's JavaScript, Rust, C++ and other languages for smart contracts. So you have full stack decentralization. And interestingly,
as we were building the front ends, we realized that the front ends can actually work with any blockchain.
And so we kind of just turned on all the EVMs and some other blockchains, and people started building all the EVM front ends as well. So we have a Uniswap one. For example, for Linea, the kind of official Uniswap front end is served out of the decentralized front end, right? Because, by the way, it also doesn't charge extra fees. And we have partnerships with others, zkEVMs, Mantle, et cetera.
And so the idea is, actually, you start looking from that lens, from a user lens, right? As a user, I don't really care which blockchain the app is on. I just want to use it. And if you go to some of these gateways where you can access these front ends, you can just go and search for whatever app you want, click on it, and start using it. That's how it should be.
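The gateway idea can be sketched with a toy in-memory "chain": component source is stored under an account and component name, and any gateway fetches and renders it locally. The storage layout and names are invented for illustration.

```python
# Toy sketch of decentralized front ends: component source lives in on-chain
# storage keyed by (account, component); any gateway can fetch and render it.
chain_storage = {
    ("uniswap.near", "Swap"): "<SwapWidget chain='ethereum' />",
}

def gateway_render(account, component, storage=chain_storage):
    source = storage.get((account, component))
    if source is None:
        raise KeyError(f"{account}/{component} not found on chain")
    # a real gateway would evaluate this as a React/JS component in the
    # user's browser or app; here we just return the code it would render
    return source
```

Because every gateway reads the same on-chain source, a desktop app, a mobile app, and a web app all render the same component with no middle server.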
And so this is kind of where we get to the concept I started with, which is: hey, we want to abstract the blockchain for users and developers. We're getting back to it now that we have this full-stack decentralization. We're like, actually, this works for all blockchains, for all chains, for roll-ups, for whatever, because you can abstract all of that out on the front-end side and make it really easy for people to interact with it. And so hence we kind of started going backwards with some of the other launches we had, asking: how do we make it really easy for one experience to unite all of the blockchains? We call it the chain abstraction principle, and this goes into some other things we can discuss.
So, Illia, is it correct to imagine, when you talk of this indexer service or the service of hosting a front end, that the indexing logic or the front-end logic is stored on the chain, but then there is some kind of off-chain actor that is actually taking that logic and the data and serving it much like a traditional server, and somehow the chain is guaranteeing that this server's work is correct and that it is compensated? Is it correct to imagine it like that?
Yeah, pretty much. So the idea is, I mean, similar to blockchain validator nodes, right? There's a kind of logic that is conceptual, and all the validators are doing that job. And you can always have more validators or fewer validators; it's independent of that. Similarly, yes, the indexing logic and the front-end source code itself are stored on chain, and so any server can run them and produce the same outcome, right? Again, similar to RPCs, for example. An RPC server is serving your data, but anybody can run an RPC server and get the same results. So it's part of the protocol, in a way. It becomes part of the protocol. And a similar thing is what we're trying to do for the front ends and middleware indexing as well.
So maybe one way to think about this is: on Ethereum, there's a base layer of blockchain, and then there are separate protocols, like ENS for naming your blockchain address with a human-readable name, or The Graph, which indexes a smart contract and presents historical data about the transactions and events in the smart contract. And maybe there are other examples that I'm missing. So in Ethereum, these are different systems, and usually they are competing systems. There's ENS, but there might be a competitor to ENS; there's The Graph, and there might be a competitor to The Graph. But NEAR has kind of taken the philosophy that some of these things are really key to the UX of a blockchain, and therefore they should be supported out of the box by the layer one itself. Is that the philosophy?
To an extent, yeah. I think the way to think about it is that it's more than just the layer one, right? Like, at the end, when we are interacting with applications on any of these chains,
like there's a whole host of tools and more importantly standards that we are interacting with.
And so ERC-20, for example, is a standard. And it's a standard that came out of the application space, but now you cannot imagine Ethereum without the ERC-20 standard. And so what we're doing here is really defining standards for these key primitives, going beyond just, you know, token transfers, to things like how to define indexing, how to define decentralized front ends.
Now, for the implementation of those things, you can have many implementations. You can have a mobile renderer and a web renderer; you can have different indexers. You can have external partners who are competing with each other on how to implement it. Same for RPCs, right? RPC is a standard, but the way it's implemented can be very different. Underneath, maybe you cache everything in a database, maybe you're using Cloudflare, whatever architecture you want to use. But the standard is there. And I think what we've been trying to do is define a standard, and, I mean, have a reference implementation, but for these more key pieces, to make the experience more aligned and have this singular journey for developers and users that is cohesive.
And yeah, you know, you can have businesses around the standards that are very profitable. But the core principle of decentralization, for me, is actually in the standard. It's the fact that if you define a standard, it means you can swap in and swap out any participant. And so you don't have this lock-in effect. You don't have the effect of, you know, you go to a bank and you cannot move your money out because it doesn't allow you to. Or you cannot cancel your telco provider, or, like, sometimes the telco providers don't even work for you. Here we can always have a competitor that comes in, and if they're more effective and can provide better prices, people can switch to them, because the standard is the same. And so for me, that's the key principle of Web3 in general. And the challenge that I've seen is that not having the standards actually leads to huge fragmentation of experiences, and also to monopolies being built, because now that you've built all your software against some API, you cannot switch, because nobody else provides it, and you'd need to rewrite half of your code to do that.
So how much of an analogy is there with the Apple ecosystem versus the Microsoft ecosystem on desktop? How well does it map?
So in a sense, when you look at the Apple ecosystem, it's a company that has maintained control
over its kind of like operating system supply chain, its way of like delivering music, its way of
delivering books, its way of how kind of applications kind of like appear to the end user.
And in the beginning, I think they also wanted control over the hardware, but maybe they have
retracted on that now.
Whereas Microsoft is one where there's just the raw operating system, and then applications emerge, and if standards are needed for their interoperability, the market figures it out.
And from the outside, it feels like, okay, NEAR is more going towards that Apple philosophy: we are going to define all of the standards for many of the things that are key determinants of the user experience.
Whereas other ecosystems like Cosmos or Ethereum might be more like the Microsoft approach, where we are providing transaction throughput as the center, the account model as the center, and then a lot of the interoperability between the standards is left to the market to figure out. How much does that analogy hold, and where does it break down?
I mean, I would say the part that I agree on is we're definitely trying to focus on user experience, right?
And so with that, it's important
to figure out, like, what are the touch points
that you want to have standards on?
Again, my perspective is, for example, that JSON-RPC is part of the Ethereum standard. It's part of the protocol, even though formally it's actually not. But by all accounts, if you try to change that RPC API, you will break everyone. And so we see it in a similar way, right? If RPC is part of a standard, why not some other parts?
But as I said, you know, NEAR, for example, has a number of contributors that are building things. Like, actually, the VM that's built right now for the decentralized front ends is built by Proximity, right? And, for example, with the Query API, other companies can implement the same standard and provide better services.
So I think the idea here is that by defining the standard, we're actually opening up the market for people to fill in with better products. And again, it's pretty early still, so a lot of the stuff we still build as reference implementations. But similarly, Ethereum, by defining a standard for the protocol, opened up a place for all of these clients to be implemented, right? That's kind of the idea. You define a standard and then you open it up so that others can contribute to it, versus competing on APIs and competing, in a way, on marketing, which is what's happening right now with token prices. What's happening in Ethereum with some of this infrastructure tooling, right, is: can we get a bigger airdrop by using a product? Versus: hey, this is a standard, everybody will be using this standard, so now what's the best product people can build for the standard? So I think that's the difference. I don't think it's as applicable to these big commercial for-profit companies; this is more an ecosystem we're building, really defining, I would say, layers of the stack.
Going back a bit to what you said about the switching cost from your telco provider, I guess this is related.
I guess one big thing in blockchain generally is bridging: if you want to switch ecosystems, you have to go to some other chain and move the liquidity there, which can be cumbersome. And you did mention chain abstraction for a second there. And I saw on your Twitter this concept you teased, account aggregation. So maybe, yeah, can you tell us a bit about what you are doing there, or how you are solving this interoperability problem in the blockchain space? For sure, yeah. So that's a very important topic. Although we have started building bridges ourselves: our Rainbow Bridge has been built since 2019, so I think we started building roughly in line with IBC's timing, and it's been running since, I guess, the beginning of 2021. At the same time, bridges are really bad as a concept, because they create a security honeypot. They
are the place to siphon off assets.
And if there's any attack on the protocol itself, the bridge is how you exit.
And just the number of failure modes between two blockchains is pretty big, right? Between multiple blockchains, it's insane. You know, a chain stops, blocks don't get published, all those things that you, as a developer, now need to handle. And then on the application side, okay, fungible token transfers are maybe reasonable, but as soon as you add any logic, be that rebasing or whatever, when you bridge the token you lose all of that logic on the other side.
And so the concept I've been exploring for a while now, I was originally calling it remote accounts, but we've reframed it as account aggregation: this idea that ideally you want to have one account, and there are mapped accounts to this on other chains. So imagine, you know, I have my root.near account on NEAR, and then I have an address on Ethereum, I have an address on Bitcoin, I have an address on Solana, which I control with this account.
And so now, if I want to buy a Solana NFT, right now I would need to set up a new wallet, bridge some stuff to Solana, buy the NFT, and then, I don't know, go and look at it from time to time, because I'm mostly sitting on NEAR. Or, instead, you have this Solana address that's linked to your NEAR account. You pretty much buy the NFT for this address through it, and we can talk about how that works. And now you have the front end that actually shows you everything you own across all of these chains, from all of these addresses.
And the way to think about it mentally is: when you go to Binance or Coinbase and you sign up with your Binance or Coinbase account, you have addresses on all chains, right? And, I mean, they are usually deposit addresses. But imagine those addresses were actually normal addresses you could use apps with and buy NFTs and tokens, et cetera. But your account is your, you know, Coinbase account. And so that's where your ownership is.
And so that's what we're building, and we're going to be launching at the end of the first quarter: this concept of account aggregation that, together with decentralized front ends, allows us to collapse this whole thing of multiple chains, switching networks, bridging, all of this, into a very simple experience: you get an account, you deposit some funds into it, and now you can transact across all blockchains, across all of their apps. It will get executed on your behalf on those chains, and you have these addresses, but it's all self-custodial and all hidden from you. You don't need to think about gas fees on those chains, et cetera. So that's the experience we're going after.
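A toy illustration of the account-aggregation mapping: one root account deterministically yields an address per foreign chain. Real derived addresses come out of the threshold-key derivation discussed later in the episode; a plain hash stands in for it here, and the address format is invented.

```python
# Toy sketch of account aggregation: one NEAR account deterministically maps
# to one address per foreign chain. A hash stands in for real key derivation.
import hashlib

def derived_address(root_account: str, chain: str) -> str:
    digest = hashlib.sha256(f"{root_account}/{chain}".encode()).hexdigest()
    return f"{chain}:{digest[:16]}"  # invented display format

# The "portfolio view" described above: everything the root account controls,
# across chains, computed purely from the account name.
portfolio = {chain: derived_address("root.near", chain)
             for chain in ["bitcoin", "ethereum", "solana"]}
```

Because the mapping is deterministic, any front end or indexer can recompute the same addresses without asking the user for anything beyond the root account.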
And again, this is just an extension of what we've been building with NEAR by trying to abstract out the NEAR blockchain. We're just like, okay, well, we can actually do the same thing for everyone and really provide a unique and valuable experience, because for anything multi-chain you want to build, NEAR will actually be the place to build it: you will be able to transact across all of the chains without having to bridge, without having this complexity. You want to build, for example, Bitcoin DeFi? Well, on NEAR, every NEAR account or smart contract will have a Bitcoin address it can deposit to, and it can start, you know, doing stuff, right. And so that's conceptually what we're really bringing to market, and it kind of finishes the arc of chain abstraction that we started when we were doing NEAR in the first place.
So on a high level, Illia, this idea exists in the Cosmos ecosystem; there's a chain called Neutron. And because the Cosmos ecosystem has IBC, Cosmos chains can bridge to each other in quite a good way. In Cosmos you have the idea of delegated account control, which is: on one chain you have an address, and that address can control many other puppet addresses on other chains. And Neutron is trying to be that puppet-master chain, where you have your central account and you control other addresses on a lot of chains over IBC, through Neutron. It feels similar, but the reason it works in the Cosmos ecosystem is that you assume IBC, a secure bridging solution underneath, which enables this to work in Neutron. So I almost start to think that, okay, the only way this can work for NEAR and Solana, for example, having an address on NEAR that can control a puppet address on Solana, is that you need a secure bridge between NEAR and Solana. Is it not? Solving the bridging problem seems like a prerequisite to this.
Yeah, so we're trying to go away from bridging almost completely. I mean, there will be some places where you still need bridges. So let's look at Bitcoin as a much cleaner example, right? With Bitcoin, you cannot have a smart contract bridge, because, well, Bitcoin has no smart contracts. And so the only thing you can do is own addresses.
And so the core idea here, while it's conceptually similar to what Neutron is doing, the mechanism is different. The core idea is that we make the NEAR network itself able to sign transactions for other blockchains. And so the NEAR network becomes, in a way, the custodian of all of these mapped addresses on all other chains. And you, as a NEAR user, tell the network, be it through a smart contract or a user interaction, to sign a transaction on Bitcoin that sends some bitcoins from your remote address, from your delegated address, to some other address, right? And so because of this, you don't need to actually bridge Bitcoin to NEAR to do anything. The bitcoins literally live on the Bitcoin network, the OP tokens live on Optimism, the Solana NFTs live on Solana, and I just control all of that by sending transactions there. But as a user, I just interact with NEAR, and I pay a NEAR gas fee, which is very small. I say, do this, and I attach whatever else, you know, if I need to buy something, et cetera, on NEAR. And then we have intent relayers that actually execute stuff: the transaction gets signed by the NEAR network, and then the intent relayer sends that transaction on your behalf on the other chain. And so there's no actual bridging. There's no security issue where, like, if this bridge gets broken, or that network gets forked, et cetera. Like, none of that exists.
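The un-bridged flow just described can be sketched end to end, with invented names throughout: the NEAR network produces a signed foreign-chain transaction, and an intent relayer merely broadcasts it. The `mpc_sign` callback stands in for the threshold signing discussed later; nothing here is NEAR's real API.

```python
# Sketch of the "sign, don't bridge" flow. All names are illustrative.
def near_sign_for_chain(account, chain, payload, mpc_sign):
    # the NEAR network (via threshold signing, modeled by mpc_sign) signs on
    # behalf of the account's derived address on the target chain
    tx = {"chain": chain, "from": f"{account}@{chain}", "payload": payload}
    tx["signature"] = mpc_sign(str(tx))
    return tx

def relayer_broadcast(tx, networks):
    # the relayer only forwards the already-signed transaction; it cannot
    # alter it, so it holds no custody and is freely replaceable
    networks[tx["chain"]].append(tx)
    return tx["signature"]

# toy stand-in for a threshold signature, NOT real cryptography
toy_sign = lambda msg: "sig:" + str(abs(hash(msg)) % 10**8)

networks = {"bitcoin": []}  # each foreign chain's mempool, as a list
signed = near_sign_for_chain("root.near", "bitcoin", {"send": 1}, toy_sign)
relayer_broadcast(signed, networks)
```

The assets never move through NEAR; only a signature does, which is why there is no bridge to break.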
And with NEAR accounts, there's also a very interesting and a little bit crazy thing, because NEAR accounts are actually tradable. So you can actually list a NEAR account as an NFT, and somebody can buy it and get access to it, because you can rotate keys on NEAR. What this allows you to do is: you can have lots of assets across all kinds of networks, and then you can list them as a bundle on NEAR. Say you want to sell some BRC-20s, some Solana NFTs, some Ethereum NFTs and, I don't know, some OP tokens and GMX at the same time. You can list all of that as a bundle under one NEAR account, and then somebody can buy all of it with one transaction on NEAR, paying a NEAR transaction fee, within one second of block time. So you don't need to wait for a Bitcoin transfer, you don't need to wait for all of this. You can do it in one go. So you can actually start bundling all of these things and trading across all chains on NEAR very easily, without actually sending transactions or bridging anything anywhere else.
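A sketch of the "account as a tradable bundle" idea: because keys can be rotated, one NEAR-side operation hands over every cross-chain asset mapped to the account, with no foreign-chain transactions. The class is invented for illustration.

```python
# Toy model: selling a NEAR account transfers the whole cross-chain bundle,
# because control is just whichever key the account currently recognizes.
class TradableAccount:
    def __init__(self, owner_key, assets):
        self.owner_key = owner_key
        # assets mapped to this account's derived addresses on other chains
        self.assets = assets

    def sell(self, buyer_key):
        # a single NEAR-side key rotation: ownership of every derived-address
        # asset moves at once, with zero transactions on the foreign chains
        self.owner_key = buyer_key

acct = TradableAccount("alice-key", {"solana": ["nft1"], "ethereum": ["op-coins"]})
acct.sell("bob-key")
```

This is why the bundle can settle in one NEAR block: nothing on Bitcoin, Solana, or Ethereum needs to move; only the key that controls their derived addresses changes.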
And that's kind of the shift that we're trying to make. I call it unbridging: you have the account-level ownership that's maintained, indeed, but it's maintained by very specific security parameters, which are NEAR's parameters. And then if, let's say, the Solana network fails for whatever reason, there are no bridge problems that would arise from this, because you own stuff on Solana. So whatever Solana has to deal with, whenever it recovers, et cetera, you will get it back. You have this relationship with that network, but there's no bridge that you need to deal with and think of as an intermediate complexity. So that's the idea. And, you know, we're going to be rolling out more documentation; we have a testnet version coming out for people to hack on in January. And so we actually invite people to start building, because multi-chain experiences will be way easier to build this way: you don't need to think about all of the complexity of, oh, this message didn't deliver, the network is paused, something crashed because of inscriptions. You don't need to deal with any of that. You can literally sell, say, the account that has assets in a failed network to somebody else, for example, if they want to take that risk, right? So you can do that without even having that network live.
That's the level of experience we want to unlock. And this leads
to fully abstracting the blockchains, right? Because now from a user interface, I just go,
I use the app and I just see that I'm using, for example, my NEAR account. And it doesn't really matter to me that it was a Solana NFT that I bought; I just see it in my portfolio view. And for that we need indexing of Solana and all the other chains' data, so it's the same stack there. We need decentralized front ends that aggregate all this. And so that's how we package the whole stack into abstracting the blockchain.
So, quick nerd question. Okay, so this is awesome, first of all. I mean, NEAR is becoming like a distributed custodian, essentially. Imagine it as Coinbase, but distributed. And the distributed custodian can have hot wallets, basically, on all of the other chains.
But as an engineer, my question really starts to be: in Bitcoin, you have a single account or a multi-sig account; that's what Bitcoin provides, right? It assumes that there is, like, maybe one private key and one public key, and there's a signature against that public key. Whereas NEAR, as a distributed network, has lots of validators. So what fancy cryptography makes this work?
Yeah, so it's called chain signatures. And so this is a threshold signature where, as validators rotate, you can actually maintain the same set of public keys. So even though validators rotate, and the different parts of the private key get rotated, when they sign a threshold signature, you get the same public key. And, I mean, you can have derivations of this, so you could have as many public keys as you want, but they are all deterministic within the whole blockchain. So it's a pretty cool technology.
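The property described, shares rotate but the combined key does not, can be demonstrated with a toy additive secret sharing over a prime field. Real chain signatures use threshold ECDSA, which is far more involved; this sketch only shows the refresh idea, that re-randomizing every share leaves the reconstructed secret (and hence the derived public key) unchanged.

```python
# Toy proactive share refresh: every validator's share changes, but the
# combined secret does not. NOT real cryptography, just the core identity.
import random

P = 2**61 - 1  # prime modulus for the toy field

def share(secret, n):
    # additive n-of-n sharing: n random-looking parts that sum to the secret
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def refresh(shares):
    # add a fresh random sharing of zero: each share changes, the sum does not
    zero = share(0, len(shares))
    return [(s + z) % P for s, z in zip(shares, zero)]

def combine(shares):
    return sum(shares) % P

secret = 123456789          # stands in for the network's signing key
old = share(secret, 5)      # five validators each hold one share
new = refresh(old)          # validator-set rotation re-randomizes shares
```

Since `combine(new) == combine(old)`, the network can rotate who holds the key material without ever changing the addresses derived from it.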
And yeah, it's kind of reasonably new. Some of the folks from Dyscad have been pioneering that. And we're leveraging it as a way for NEAR to become this decentralized custodian.
Right.
I think maybe Axelar also works a little bit like that, or am I wrong? I think so, anyway. But one question that I had: in this scenario where you want to buy an NFT on Solana, you need the liquidity on Solana as a user, right? So maybe I have funds on NEAR, but I don't have them on Solana. Is there some system that you're thinking of to balance that out without bridging?
Exactly, yeah. So this is where we have what we call intent relayers, or, I mean, we're still shopping for the name. But on NEAR we have this principle of trial accounts: the idea that I can send you a link right now, you click on it, and you'll have some NEAR in it, so you can do stuff on NEAR, but you cannot withdraw that NEAR. What it actually does is send you a one-time-use private key which, when you click, will create a new private key in your browser and switch to it, but that private key has limited access to the account, so you can transact, but you cannot withdraw funds. And so that concept, applied now to other chains, in a way allows other parties to fund the account to execute things. They can put in some Solana tokens to pay for gas or for NFTs, but you cannot withdraw them by sending a direct transaction to withdraw the Solana.
So what this now allows you to do is: you can pay somebody on NEAR with NEAR tokens, and then they will put Solana tokens there and execute your transaction. And by doing that, we pretty much have, well, that's why I say it's an intent: you say, my intent is to buy some Solana thing, but I don't have Solana tokens; here's a bunch of NEAR tokens, execute that there. And, you know, now you need somebody who has liquidity on all the chains to execute the stuff, but having third parties do that is way easier than having whole bridging and automatic execution.
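A sketch of that intent flow under invented structures: the user escrows NEAR tokens, and a relayer that happens to hold Solana liquidity fills the intent and claims the escrow. This is only the economic shape of the idea, not any real protocol.

```python
# Toy intent flow: user pays in NEAR, a liquid relayer executes on Solana.
def submit_intent(user_near_balance, price_in_near):
    # the user escrows NEAR tokens alongside a declared goal ("intent")
    assert user_near_balance >= price_in_near
    return {"want": "solana-nft", "escrow": price_in_near, "done": False}

def relayer_execute(intent, relayer_sol_balance, nft_cost_sol):
    # any third party with Solana liquidity may fill the intent; if it
    # cannot afford the purchase, it simply declines and another relayer can
    if relayer_sol_balance < nft_cost_sol:
        return None
    intent["done"] = True        # relayer spends its own SOL on-chain
    return intent["escrow"]      # and is paid in NEAR out of the escrow

intent = submit_intent(user_near_balance=100, price_in_near=40)
payout = relayer_execute(intent, relayer_sol_balance=2, nft_cost_sol=1)
```

Because relayers compete to fill intents, any of them can be swapped out, which mirrors the standards argument earlier in the conversation.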
So this is really just some sort of fee attached to it, and the relayer can grab it? It's not a blockchain network or anything? Yeah, exactly. It's a third party, a market maker or whoever it is. Any market maker or any arbitrage bot can do this kind of stuff, pretty much. And also, as they're doing that, they'll relay the transaction as well. So you don't need to actually submit the transaction either, because the validators only sign the transaction; somebody needs to actually ship it to the peer-to-peer network, so they will do that too. Yeah, that's pretty awesome. Looking forward to reading more about it once more documentation is there. But yeah, thanks for sharing it here. And yeah, I guess further on in the NEAR journey, we didn't
actually talk much about the chain itself, right? I think you were basically one of the first, if not the first, sharded blockchains, and you've stayed with that narrative while others have pivoted away from it. So yeah, can you tell us a bit about how NEAR's sharding has developed? And what is sharding, actually, for people who forgot about it? And where is it going?
Yeah, so as I mentioned, my co-founder Alex was building sharded databases, and I'm coming from Google, where everything is sharded; you just cannot have a billion users and put them into one database. It just doesn't work. And so for us, it was pretty obvious that you need sharding.
At its core, the idea of sharding is: as you store more data and process more transactions, you need multiple machines doing work in parallel, and you want those machines doing similar kinds of work, distributing the load. Ideally, as more load comes in, you just increase the number of computers. This is how all of the Web2 giants work. Imagine Gmail, or imagine Facebook: there's a database underneath which is sharded. It has hundreds or thousands of servers that store, for example, user data.
When you're a user making a request, it routes you to the server where your user data is and retrieves it.
And when you need to update something or process a transaction, it routes the transaction there.
So that's the core concept.
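The routing idea Illia describes, hashing a user's key to pick which server holds their data, can be sketched in a few lines of Python. This is an illustrative toy (the shard count and account names are made up, and NEAR's actual assignment scheme differs), but it shows the core mechanic of consistent routing:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(account_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Route an account to a shard by hashing its ID (consistent routing)."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Each shard's server only stores and processes its own accounts.
shards = {i: {} for i in range(NUM_SHARDS)}

def write(account_id: str, data: dict) -> None:
    shards[shard_for(account_id)][account_id] = data

def read(account_id: str) -> dict:
    return shards[shard_for(account_id)][account_id]

write("alice.near", {"balance": 100})
assert read("alice.near") == {"balance": 100}
```

Because the hash is deterministic, every request for the same account lands on the same shard, and adding capacity is a matter of raising the shard count (in practice with a re-balancing step the toy omits).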
And logically, you cannot have billions of users all using one server, right?
And that's what's currently happening with non-sharded systems: they rely on pretty much one server, replicated, but one server nevertheless, to process everything
that happens on their chain. So for us it was pretty obvious that we needed
to do this. Now, blockchain adds extra complexity compared to Web2, with all the security you need to deal with. So we've been iterating
on a design within this conceptual frame, and we introduced Nightshade back in 2018,
which was our sharding design, where in a way every single NEAR contract or account is actually a separate chain.
We just bundle them in such a way that, as users and developers, you don't know about it.
And we bundle them onto however many parallel processing machines you need at the time.
Again, this is very similar to how Web2 works, where every user account is in a way independent, and accounts can be moved around between different databases, between different computers in the cluster. So this allows us to abstract the complexity of sharding away from the user. As a user, if you go to the NEAR blockchain, you will not see shards; we don't actually show them. You'd have to go to our RPC and query the block headers and things like that.
Now, the approach we were planning in 2019 based security on challenges, and that
proved to be very challenging, and this is across the whole space, right? We've seen
a number of other chains struggling to implement challenges. So earlier this
year we did some research and refocused on doing stateless validation instead.
What this means is that when a block is produced, the block actually contains all of the state
that its transactions touched,
and that information is sent around to everybody else.
So other validators don't need to have the state of the shard;
they can just validate the block on its own.
And it means we can have hundreds and thousands of validators validating every shard.
Assignment can be completely random;
they don't need to be assigned to a specific shard at any time.
And this also means we can have a lot more nodes and validators in the network proving the whole system.
Now, at a low level, what NEAR really is is a decentralized shared sequencer that then
sends out the data availability of these transactions across the whole chain.
We use erasure coding for that.
And then we have execution, which is now stateless execution, which is then proven
by a number of other validators and settled, right?
So we package what is now called a modular framework
into one pipeline, on top of the same set of validators,
which are constantly rotated across the network.
And that, at the core, is what NEAR is.
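The erasure coding Illia mentions lets the network reconstruct missing data chunks instead of re-requesting them. Real systems (including NEAR) use Reed–Solomon-style codes over many parts; the sketch below is a deliberately minimal single-parity XOR version that conveys the idea, any one lost chunk can be rebuilt from the others:

```python
def encode(chunks: list[bytes]) -> list[bytes]:
    """Append one XOR parity chunk; any single lost chunk is then recoverable."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]

def recover(coded: list) -> list[bytes]:
    """Reconstruct the single missing chunk (None) by XOR-ing the survivors."""
    missing = coded.index(None)
    fill = bytes(len(next(c for c in coded if c is not None)))
    for c in coded:
        if c is not None:
            fill = bytes(a ^ b for a, b in zip(fill, c))
    restored = list(coded)
    restored[missing] = fill
    return restored

data = [b"aaaa", b"bbbb", b"cccc"]
coded = encode(data)
coded[1] = None                 # simulate a validator missing chunk 1
assert recover(coded)[1] == b"bbbb"
```

A Reed–Solomon code generalizes this: with n coded parts you can tolerate losing several, which is what makes broadcasting shard data to a large validator set practical.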
And we're actually going to be launching the new testing network
for stateless validation
as part of our phase two launch.
This finalizes the core sharding roadmap we've outlined since 2019.
That should be coming in January and February,
and we're going to have the full mainnet launch probably in April.
And conceptually, if people have read Vitalik's Endgame post,
this is in a way that structure.
You have block producers who are sharded,
and we can keep adding more block producers in parallel,
so you can keep scaling the network.
Also, because block producers now don't need to rotate as much,
we're actually moving the whole state into memory,
which gives us about a 10x improvement
in each shard's transaction processing.
So each shard gets 10x, and then you can have more shards.
The block producers do the erasure coding for data availability,
then do the processing,
create blocks with state witnesses, and send them out.
And then you have a large network of validators,
which doesn't even need to be that large,
who can validate these blocks
without having the full state of the chain.
So that is, in a way, finalizing our roadmap, but it's also very much the Endgame structure.
It bundles a lot of the current rollup concepts, and the based sequencing concepts
Ethereum is talking about, into one product. And then we announced
we're working on zkWASM with Polygon, because sending out the state witness
with the block actually takes a lot of bandwidth. What zkWASM allows us to do is
prove the whole block execution with the state witness on the block producer directly.
So now, instead of sending potentially a megabyte of data, we can just send, say,
a 10-kilobyte proof, and everybody else can validate that without re-executing all the same
transactions. So that is the final Endgame. I mean, there are a few more
pieces to complete the picture, but that is the structure that we think is pretty much the final
architecture: you have a censorship-resistant, shared, sharded sequencer, right?
And you have all of the data availability underneath. We do data availability first, before execution, which means all the
indexers and other pieces of infrastructure can start executing
in parallel, so you don't have latency on user interfaces waiting for the finalization
of the execution on the validators themselves. Then you have execution on validators,
which send out the witness, and a large network of validators can validate it and prove it
without needing to have the state rotated. The state is potentially,
say, 50 gigabytes, so they don't need to keep those 50 gigabytes; they just
receive whatever is relevant for the transactions being processed.
So yeah, it's a little bit complicated as a scheme,
but really it's powering that Endgame structure
people have been talking about.
And at the same time, it is that modularity, just reusing the same
set of servers to ensure throughput and
low latency.
Yeah, that's an episode on its own, to be honest, to dig through that.
Is it correct to think that stateless validation requires zkWASM as a primitive?
No, because you can do stateless validation without ZK.
What you do is execute the transactions, record which pieces of state you touched,
and then you just send those pieces of state as witnesses,
with a proof that they're part of the state, together with the transactions.
And we're actually launching that first, while in parallel working on zkWASM.
What zkWASM allows you to do, though, is compress all of this,
the execution and the validation of it, into just a proof.
So in a way, zkWASM will prove the execution of this blob, pretty much state plus
transactions, as just a fixed-size proof.
But it's more of an optimization.
zkWASM from this perspective is an optimization, and it's
obviously way better for longer-term storage,
but it's not a prerequisite.
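The execute-and-record step just described can be sketched as a toy model: run the transactions against the full state, snapshot every key touched (the witness), and ship it with the block so a validator holding no state can replay and check the result. This is illustrative only; in the real protocol the witness also carries Merkle proofs against the prior state root, which the toy omits:

```python
# Toy model: transactions are (sender, receiver, amount) transfers.

def execute(state: dict, txs: list):
    """Apply transfers, recording the pre-state of every key touched (the witness)."""
    touched: dict = {}
    for sender, receiver, amount in txs:
        for key in (sender, receiver):
            touched.setdefault(key, state.get(key, 0))  # snapshot on first touch
        state[sender] = state.get(sender, 0) - amount
        state[receiver] = state.get(receiver, 0) + amount
    return state, touched

def validate_stateless(witness: dict, txs: list, claimed: dict) -> bool:
    """A validator with no local state re-executes using only the witness."""
    replay = dict(witness)
    for sender, receiver, amount in txs:
        replay[sender] -= amount
        replay[receiver] += amount
    return all(claimed.get(k) == v for k, v in replay.items())

state = {"alice": 100, "bob": 50}
txs = [("alice", "bob", 30)]
post, witness = execute(dict(state), txs)
assert validate_stateless(witness, txs, post)   # no full state needed
```

The witness is exactly the "pieces of state you touched", so its size scales with the block's activity, not with the 50-gigabyte chain state; zkWASM would then compress the replay step itself into a fixed-size proof.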
I mean, maybe I'll try to present my simple mental model of this system.
The way I imagine it is: I'm a validator, I'm an accountant,
right? An automated accountant, essentially.
There's a massive ledger, a massive state,
and I'm assigned some piece of work,
and my assignment also rotates.
I'm told: hey, go and make some changes
to this part of the state.
So I can go to that part of the state,
where there are a bunch of transactions
associated with it.
I execute the transactions, and first I make the data available.
Hey, these are the transactions I'm going to execute.
I do the execution.
I update the state, and today I somehow provide some witnesses so that for the other
accountants I can provide a proof:
hey, I did my job correctly.
Here's the proof, and they don't need to download my part of the state to verify
my work. And this ZK proof
will make that even easier. So
imagine the state as a massive
tree or something. I can
modify some branches of the tree
and I create a proof,
and that proof
is witnesses today, zkWASM tomorrow,
and I can send that thing to
others. They don't actually need
to have my part of the tree
in order to verify my work.
And then there's a separate system that says:
okay, in modifying this part of
the tree, what are the transactions I did? Somebody duplicates that work. And because I can
modify a part of the tree quite independently, and there are many like me, many
accountants, all of these accountants are modifying different parts of the
tree in parallel, and that is fundamentally why the system is able to scale.
Yeah, very well put.
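Meher's tree picture is essentially a Merkle tree: an accountant who works on one leaf can hand out the sibling hashes along the path to the root, and anyone holding only the root can check the claim without seeing the rest of the tree. A minimal sketch (assuming, for simplicity, a power-of-two number of leaves):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes from leaf to root; this is the 'witness' for one leaf."""
    proof, level = [], [h(leaf) for leaf in leaves]
    while len(level) > 1:
        proof.append(level[index ^ 1])          # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list) -> bool:
    """Check a leaf against the root using only the sibling path."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"acct-0", b"acct-1", b"acct-2", b"acct-3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify(root, b"acct-2", 2, proof)   # verifier never sees leaves 0, 1, 3
```

The proof size grows with the logarithm of the tree, not its total size, which is why "they don't need my part of the tree" works even when the state is tens of gigabytes.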
So you have a partnership with EigenDA.
Why do you need a partnership with EigenDA in that case?
Yeah, so this is maybe changing gears, right?
Everything so far has been NEAR itself.
NEAR itself, right?
It has no interaction with other things.
And again, NEAR itself right now is a
top-used blockchain by number of addresses, for example:
daily active, weekly active, monthly active.
So NEAR itself has a bunch of utility and value already.
But when we frame this chain abstraction thesis,
what it means is that for the developers and users on top, we're trying to provide as smooth an experience
across using other chains as well.
And this is where we looked around and said: oh, NEAR already has data availability built in,
that's just part of our protocol,
and there are a bunch of layer 2s
that we can plug into this,
hooking into the rest of our systems.
So that's where we started,
and we pretty much provided a way
for the OP Stack, the CDK,
and Starknet's stack to hook in
and publish their data
on NEAR. Now, if you just publish data on NEAR, it's useful. It's obviously very cheap,
way cheaper than pretty much everything else in the market. And because NEAR is sharded, you actually
have more capacity than anything else that can take your data already, and we're going to add more
shards. But it's not as useful on its own, because you cannot route messages between smart contracts on
the rollups, between each other, and NEAR contracts.
And so that's where we have a partnership with EigenLayer, not EigenDA,
to help us actually do the work for these layer 2s: getting to the executed state and the
outgoing messages, such that applications that want to route messages faster,
within one to two seconds, can actually do that through the NEAR network.
So EigenLayer validators will execute the rollup, given the data published on NEAR,
and they will have the new state root for the rollup itself.
So think of it as extra accountants, Ethereum accountants,
who will be looking at the rollups and updating the state root there,
but then publishing back to NEAR, telling the NEAR accountants as well.
And so now the NEAR accountants and the Ethereum accountants together know the state of both NEAR and all of the rollups that are plugged into the system.
And so now you can route messages between rollup contracts and NEAR contracts back and forth.
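At a very high level, the flow described here, a rollup publishes data to NEAR DA, EigenLayer operators execute it and attest a state root back, and only then are messages routed, might be modeled like this. All class and method names are illustrative inventions for the sketch, not any real API:

```python
from dataclasses import dataclass, field

@dataclass
class NearDA:
    """NEAR as DA layer + message router (conceptual toy, not a real interface)."""
    blobs: dict = field(default_factory=dict)     # rollup -> published data
    roots: dict = field(default_factory=dict)     # rollup -> attested state root
    mailbox: list = field(default_factory=list)   # (src, dst, message)

    def publish(self, rollup: str, blob: bytes) -> None:
        """Rollup posts its transaction data; cheap, sharded storage."""
        self.blobs.setdefault(rollup, []).append(blob)

    def attest(self, rollup: str, state_root: bytes) -> None:
        # In the design discussed, EigenLayer operators execute the published
        # data and post this root, backed by restaked-Ethereum security.
        self.roots[rollup] = state_root

    def route(self, src: str, dst: str, msg: bytes) -> None:
        """Messages are only routed once the source's state has been attested."""
        assert src in self.roots, "source rollup has no attested state root"
        self.mailbox.append((src, dst, msg))

near = NearDA()
near.publish("rollup-a", b"block #1 txs")
near.attest("rollup-a", b"\x01" * 32)           # the EigenLayer operators' job
near.route("rollup-a", "near", b"hello from rollup-a")
assert near.mailbox == [("rollup-a", "near", b"hello from rollup-a")]
```

The key ordering is that `route` fails without a prior `attest`: fast message passing is gated on someone with Ethereum-grade security having executed the rollup's published data, which is exactly the role the EigenLayer partnership fills.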
And so this allows us to align more of the space.
And for chain abstraction, for account aggregation, it means we can do things way faster between all of the rollups that
fit into the system. That's how NEAR DA plus EigenLayer provide this fast finality.
And then there's other tooling that we plug in on top, like decentralized front-ends, to really abstract it from the user.
But we need that alignment.
In NEAR, in a way, each account, each element of that tree, is a separate rollup, right?
We have a system for managing them, and so we're trying to fit the other rollups into the same system.
And obviously we need to plug in some pieces to make it work under the same security parameters that rollups expect, which is Ethereum security, hence EigenLayer.
And then the DA is a way to get this data into the system as well and provide some guarantees there.
Hard to unpack, but logically, yeah, it's exactly that.
Imagine NEAR as this massive tree, and there are lots of accountants. In NEAR itself
there's one group of accountants, and accountants can modify parts of the tree
independent of each other. They can send proofs about their modifications so that other
accountants can trust their work. And then this EigenLayer partnership is in
some way saying that
there are Ethereum accountants, yeah.
Yeah, it's like NEAR says: we have an awesome group of accountants,
but if you want your own accountants, if you want your own rollup,
you've created a separate group of accountants.
But then your accountants and the NEAR accountants
sort of need to interface in some way, so that
the work your accountants did can be
deduplicated on NEAR and the other way around.
And via this deduplication, we can somehow achieve trustless
interactions between Ethereum rollups and NEAR.
Something like that, right?
Yeah, pretty much. I would say with rollups it's pretty much:
I want my own accountant who runs everything.
But then I trust the Ethereum accountants to re-evaluate everything and finalize it, right?
So the Ethereum accountants are final.
My accountant is the one who can work quickly;
he sits right by my side.
And what we say here is: NEAR accountants can provide a bunch of value by connecting
your accountant to the other guy's accountant,
so you can connect together, or to our applications.
But we still need Ethereum accountants, because the finality of the rollups is on Ethereum.
Right.
And so that's why we have EigenLayer, pretty much to lend us their Ethereum accountants.
As a rollup publishes its ledger from its own accountant first, we have the
Ethereum accountants, via EigenLayer, validate everything quickly,
before the full Ethereum settlement happens. And that allows the NEAR accountants
to have trust in the execution of what happened on the rollup, while the rollups
also get a much quicker time to finality and message communication,
maintaining the same security they have through Ethereum.
So it's rollups, NEAR, and Ethereum all coming together
into one happy family of accountants.
I think that's a great note to end on, right?
Like a big happy family of accountants.
Yeah, Illia, thank you so much for coming on
with a massive episode.
I think, yeah,
I need to process this,
and I'm sure our listeners will take some time
to process everything too.
Well, we can do another one.
In a few months.
As we launch all this stuff.
Yeah, totally.
And we still also have the Alex episode
about smarter LLMs outstanding,
so lots to do.
But yeah, thanks so much for coming on,
and thanks to our listeners.
We'll have like one and a half hours
of content here.
Great, guys.
Thank you for joining us
on this week's episode.
We release new episodes every week.
You can find and subscribe to the show
on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode
of the Epicenter podcast.
Go to epicenter.tv slash subscribe
for a full list of places
where you can watch and listen.
And while you're there,
be sure to sign up for the newsletter,
so you get new episodes in your inbox
as they're released.
If you want to interact with us,
guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show,
and we're always happy to read them.
So thanks so much,
and we look forward to being back next week.
