Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - David Minarsch: Autonolas – Autonomous AI Agents
Episode Date: January 27, 2024

The Autonolas stack aims to address the 'A' in DAO (decentralised autonomous organisation) through its Open Autonomy framework, which enables the creation of autonomous, off-chain services for crypto applications. A key component for ensuring the proper operation of these off-chain autonomous economic agents is the consensus mechanism. The protocol is overseen by the Governatooorr, the world's first autonomous, AI-powered governor.

We were joined by David Minarsch, co-founder of Valory, to discuss the ever-changing landscape of AI agents and how they can be used to automate crypto applications.

Topics covered in this episode:
- David's background and founding Valory
- Agentic AI systems
- Multi-agent systems
- Autonolas' agent framework
- Collaborative agent economy & composability
- DAO optimisation via autonomous agents
- Potential attack vectors & AI risks

Episode links:
- David Minarsch on Twitter
- Autonolas on Twitter
- Valory on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at chorus.one

This episode is hosted by Friederike Ernst & Meher Roy. Show notes and listening options: epicenter.tv/532
Transcript
This is Epicenter, Episode 532 with guest David Minarsch.
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst and I'm here with Meher Roy.
Today we're speaking with David Minarsch, who is the co-founder and CEO of Valory and a founding member of Autonolas.
And Autonolas is an interesting project.
It's an AI slash blockchain crossover, and why that is
is interesting. We'll dive into it in just a second. Let us tell you first about our sponsors this
week, though. This episode is brought to you by Gnosis. Gnosis builds decentralized infrastructure
for the Ethereum ecosystem. With a rich history dating back to 2015 and products like Safe,
CoW Swap, or Gnosis Chain, Gnosis combines needs-driven development with deep technical expertise.
This year marks the launch of Gnosis Pay, the world's first decentralized payment network.
With a Gnosis Card, you can spend self-custody crypto at any Visa-accepting merchant around the world.
If you're an individual looking to live more on-chain or a business looking to white-label the stack,
visit gnosispay.com.
There are lots of ways you can join the Gnosis journey.
Drop in the Gnosis DAO governance forum, become a Gnosis validator with a single GNO
token and low-cost hardware, or deploy your product on the EVM-compatible and highly
decentralized Gnosis Chain.
Get started today at gnosis.io.
Chorus One is one of the biggest node operators globally and helps you stake your tokens on 45-plus networks
like Ethereum, Cosmos, Celestia, and dYdX.
More than 100,000 delegators stake with Chorus One, including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields,
but also the most robust security practices and infrastructure
that are usually exclusive for institutions.
You can stake directly to Chorus One's public node from your wallet,
set up a white-label node, or use the recently launched product, OPUS,
to stake up to 8,000 ETH in a single transaction.
You can even offer high-yield staking to your own customers using their API.
Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
Hi David. It's a pleasure to have you on.
Yeah, pleasure to be here. Thanks for having me.
Absolutely. Tell us a little bit about yourself and your background.
Sure. So I came to crypto from a background in
maths and economics.
I did a maths undergrad and then really got into applied game theory.
There were some fantastic courses at UCLA where I did that.
And one thing led to another and I ended up doing a PhD there.
And then, you know, if you fast forward quite some time,
that led me to discovering that I really liked this intersection of
game theory and machine learning, which I had done a lot of, together with an interest which had
sort of grown steadily in crypto and blockchain.
And so I've been working in that space now for over five years, and particularly at this intersection
of crypto and AI.
Yeah, super interesting.
Sounds like applied economics and maths.
And yeah, it sounds like the ideal background.
for getting into crypto.
You, as we said in the intro, you are co-founder of Valory,
which is a core contributor to Autonolas.
You co-founded it with someone else also called David.
And what motivated you to co-found this project?
What's the problem you were setting out to solve?
Yeah, it's a great question.
So Valory's mission is to basically create open source software for people to co-own, primarily, agentic AI,
and we'll kind of uncover a bit of what we mean by this.
I actually have two co-founders.
One, David Galindo, has a background in cryptography.
He's a cryptographer.
And the other co-founder has a pseudonym, Oaksprout the Tart;
he has a product background
and the three of us really kind of
in different ways
we're excited about autonomous agents
and this general pressure which you see
in AI towards agentic
kinds of AI systems
and we had different
experiences with this topic
different insights and we
came together to basically build a substrate on which
you can, as groups, kind of co-own these agentic AI systems.
So I think that's sort of the driving force is to provide this kind of software set,
which allows people to do that and also kind of create applications which people then
can own in that way.
You've used the term agentic AI system quite a few times there.
What is an agentic AI system?
The way I think about it is that if you look at the sort of dominant forms of AI,
then you can sort of see maybe three ways.
So you have like in the earlier parts of the last century, like this dominance of rules-based systems.
And then they basically say, you know, you have hard-coded rules, often extremely sophisticated,
which allow you to build sort of certain types of AI systems.
And by no means has this part of AI research
and applications gone away.
But at some point, you then had more these kind of learning systems emerge,
where you have like neural networks and deep learning and other forms of learning,
reinforcement learning, where you effectively use data to construct part of the algorithm effectively,
right?
So the system learns from data rather than all the rules being prescribed.
And if you look at what's now happening, you
have these very powerful large language models and other types of powerful AI models.
But they by themselves are certainly not agentic, in the sense that, you know,
there's some data which sits somewhere and then you effectively query these models,
you instantiate them, you query them, and then you get a response.
And that can be a very sophisticated response, but that's it.
What's interesting is once you think about systems that effectively have agency and can sort of autonomously act,
there's an enormous pressure towards these systems from a pure optimization point of view and evolution point of view.
Like if you think about it, where can you make the most money, where are the most exciting applications as well? It's autonomous systems,
which can take actions by themselves
and they're not, you know,
maybe instructed by some third parties,
some human or some other system.
And so, yeah, these kind of agentic systems
we can uncover a bit what they look like conceptually.
But that's, I think, where the train is headed,
where a lot of focus is going towards across the AI fields.
So, David, maybe let me
repackage this somewhat. Would it be fair to say that an agentic system is something where you
kind of give an AI agent a goal, but don't perfectly specify how it should go about achieving
that? Or is that too simplistic? Yeah, I think that works. And in particular, I mean, what we're
interested in are sort of these autonomous agents. And so we can briefly define that sort of
conceptually. So usually it's a sort of software system which is placed in some environment
from which it perceives certain information. They could be blockchain events literally,
or they could be things from an API. They could be something from a sensor, which it has locally.
It then uses that information plus whatever its internal architecture looks like to then
take action again in its environment. And that environment again can be like a blockchain, another
API, another agent, some actuator of any form. So this is what we would call like an autonomous
agent. And effectively, what I'm saying is that what we're seeing increasingly is that there's
more focus to basically create models which can act as the substrate
of such autonomous agents, or even almost subsume an autonomous agent as a whole, right?
And so there's this kind of pressure towards these kinds of systems. So as an example of an
autonomous agent, maybe we could think of... So imagine there's the Gnosis network, and then
there's the code base of the Gnosis network, and one could imagine a coding agent of some kind,
where somebody opens an issue against the code base of the Gnosis network,
against the code base of the network:
I want to add this feature to the core protocol of the Gnosis network.
Then an agent could be something that kind of as a first step,
isolates the pieces of code that need to be changed.
As a second step, creates code, making those changes.
As a third step, does some form of testing.
so that could involve kind of like static analysis,
but that could also involve like runtime analysis,
then gets feedback from the environment
and then makes another set of changes
and comes up with a draft,
like a draft change, a draft pull request.
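The loop David sketches here, isolate code, draft a change, test, take feedback, and loop until a draft pull request is ready, could look roughly like the following Python sketch. All function names and bodies are hypothetical stubs for illustration, not any actual Autonolas or Gnosis tooling:

```python
# Hypothetical sketch of the coding-agent loop described above.
# Each step is a stub; a real agent would call an LLM and a test harness.

def locate_relevant_code(issue: str) -> list[str]:
    """Step 1: isolate the pieces of code that need to be changed."""
    return ["core/protocol.py"]          # stubbed result

def draft_change(files: list[str], issue: str) -> str:
    """Step 2: generate code making those changes (e.g. via an LLM)."""
    return f"patch for {files} addressing: {issue}"

def run_tests(patch: str) -> tuple[bool, str]:
    """Step 3: static and runtime analysis; returns (ok, feedback)."""
    return True, "all checks passed"     # stubbed result

def coding_agent(issue: str, max_rounds: int = 3) -> str:
    """Loop on environment feedback until a draft pull request is ready."""
    files = locate_relevant_code(issue)
    patch = draft_change(files, issue)
    for _ in range(max_rounds):
        ok, feedback = run_tests(patch)
        if ok:
            return patch                 # ready as a draft pull request
        patch = draft_change(files, issue + " | feedback: " + feedback)
    return patch

draft_pr = coding_agent("add feature X to the core protocol")
```

The interesting design point is the feedback edge: each failed test run feeds context back into the next drafting step, rather than the agent producing a single one-shot patch.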
Yeah, I like that.
I think that's a good example.
Also, speaking of
Gnosis Chain,
so one autonomous agent which is running there,
one type of autonomous agent
which is built on the Autonolas stack and is running there every day, is an agent which trades in prediction markets.
And so if we kind of map that onto this model which I was just describing, it might be quite helpful.
So again, here, what it's observing is basically new markets opening.
So it adds these to the list of markets which it kind of has a look at.
It might then fetch information pertaining to the events which are referenced in these markets
from really anywhere on the web, so a search API, or just crawling itself, almost.
It then uses that information, that context basically on the event, as well as various AI models.
At the moment, most of the agents use some form of large language model, to basically
prompt these models with that kind of information.
And then once it arrives at a prediction for that event, together with an accuracy and other
kind of information which it estimates, then it will construct a transaction
and then sort of act in that market, i.e. kind of take a position in that market.
So, for instance, if it's a binary market, yes or no, buy the relevant tokens which represent these events.
And so here, what you then have is this environment being sort of these smart contracts
and the information endpoints which are pertaining to these markets,
and then the actions are taking these positions in the markets.
and then some time passes and then the agent might actually make some money.
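The observe-research-predict-act loop just described could be sketched like this. The market, research, and prediction objects here are stand-ins invented for illustration, not the actual Autonolas trader APIs:

```python
# Minimal sketch of the prediction-market agent loop described above:
# observe new markets, gather context, prompt a model, take a position.

def fetch_new_markets():
    """Observe the environment: new prediction markets opening."""
    return [{"id": "will-X-happen", "question": "Will X happen by June?"}]

def research(question: str) -> str:
    """Fetch event information from search APIs or crawling (stubbed)."""
    return "context gathered from the web"

def predict(question: str, context: str) -> tuple[float, float]:
    """Prompt an LLM with the context; returns (p_yes, confidence). Stubbed."""
    return 0.7, 0.8

def trade(market, p_yes: float, confidence: float):
    """Act in the market: buy outcome tokens for the favoured side."""
    side = "YES" if p_yes > 0.5 else "NO"
    # in reality: construct, sign, and submit an on-chain transaction
    return {"market": market["id"], "side": side, "size": confidence}

positions = []
for market in fetch_new_markets():            # observe
    ctx = research(market["question"])        # gather context
    p, conf = predict(market["question"], ctx)  # predict
    positions.append(trade(market, p, conf))  # act
```

The environment here is the set of market contracts and information endpoints; the actions are the positions taken.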
Okay, but that sounds primarily like automation technology, right?
So basically people wouldn't necessarily know that I run this sort of software to do things,
just like kind of I run, for instance, say trading scripts, right?
So how do we know that this is, I mean, I assume,
to some extent this is already happening.
But where it gets really interesting is when you design
systems where several of these agents come together, in a kind of
game-theoretic way, to figure out, you know, something to do or some conclusion or something,
right?
So yeah, a couple of things.
So firstly, I think you're right that like there's sort of automation and then there's different levels of autonomy.
And like if you think about a self-driving car, they have these sort of levels.
And it's a bit similar to think here, like you have different kind of levels and you can be closer to what people might describe as automation.
And then there's also this thing where, as time moves on, we tend to reclassify things which maybe we saw as more autonomous towards automation, because they kind of get wrapped behind,
like an agent, for instance. And then I can just sort of see the act of interacting with this agent, where the agent is actually autonomous, as, from the perspective of the user, almost like just automation: I'm just calling this API, which then goes away and creates an outcome for me. And so I think there's always that. But you're right. And in this system which I was describing, actually, the way it's practically implemented already is that there are already three types of agents
today. So the trading agent itself doesn't actually come up with the prediction. It's other agents
who specialize on that. And now we're even like picking apart that role because what we basically
see is that from a practical perspective, if you can sort of specialize your agent that has its
benefits, like the same way we specialize as humans. But also,
from a sort of practical user's perspective of running the agent, that can have its benefits.
So for instance, if I had an agent which has to have all sorts of, let's say, open source models
which need to run alongside it, which it uses, then this can become quite a beast to actually run,
like quite impractical.
Whereas if an agent can use other agents to get something done, then it might be as simple
as making a small crypto payment to, for instance, get a prediction.
And so that's the case here.
And then you have to obviously trade that off with other design considerations of the system.
So, to state Friederike's question in a different way.
So any staking company would be running, for example, price oracles, right?
So a price oracle is fetching the price from somewhere and submitting the price
to the blockchain and it's getting paid in crypto to do that.
And one could imagine that entire...
so the code of a price oracle is highly mechanical.
It is specified entirely
in a programming language.
The input to it is very structured.
It is probably coming as JSON files
that are structured in a particular way,
and its output is also very structured.
It is producing transactions that have these fields and etc.
Perhaps that is actually like an agent itself,
except it's like a very dumb agent.
And the kinds of agents you are thinking of are like AI agents
where we are trying to climb the hierarchy of,
well, the inputs no longer need to be that structured.
It may not come as JSON or Protobuf or any of these protocols.
It might come as English language, and it can
be anything that comes in. So the input becomes unstructured. Then the processing logic, instead of
being structured in the form of code, you could have processing logic where the agent, like, de novo comes up
with how to execute on a certain input. And its execution path is kind of like invented for that
particular input and it might be different from what it was previously. And then finally on the
output side, its outputs could also be unstructured, meaning it's producing output in terms of
English language, which, of course, also has structure, but less
structure than a programming language output or a JSON output would have. And so maybe
one way to think of the AI agent is that we are trying to
generalize the input, the processing, and the output of what is already kind of like a traditional
crypto agents. So validators, price oracles, we might think of them as traditional crypto agents,
but we are trying to kind of push their boundaries in like what they can do. Yeah. And there's
these different dimensions which you're kind of pointing at, right? So you have like the levels of
autonomy, and then the levels also of how dynamic the decision making is and how open-ended
it is, how structured and unstructured the input and output can be. And basically, if you look at it
from our perspective, the way we look at it is, our stack kind of allows you to build across a
whole range of these things. So we have some products which
So they're basically rules-based of the kind, which, you know,
an Oracle is actually one example.
You can build an Oracle on our stack.
It's not like anyone is like majorly focused on it,
but we have some demos of the sort.
And then, you know, you go to the right on that dimension
and then you have this prediction agent, where it becomes a bit less structured.
Because, yes, some of the flow of it is entirely structured
in the sense that it will always sort of do certain,
actions in a certain sequence.
I'll get to that in a second as to how that's actually done on a code level.
And then inside of these states, actually, let me explain it right now.
It's like it's structured as a sort of finite state machine.
So basically we say, okay, the overall agent is described as this graph-like structure
where it traverses through these states.
And then in some of these states, it might sort of dynamically choose which path to take going
forward, but sort of the rails are given, right? So it can't just sort of totally go off the rails
and suddenly say, I was a prediction agent and now I'm kind of doing this other thing, shopping for clothes or
whatever. And we see this as, A, basically a pragmatic approach, and B, also a big advantage,
because, you know, obviously these kinds of AI-enabled agents, autonomous agents, are something
relatively new in that form. They were stuck in the sort of doldrums for a long time, where
basically nothing much happened for decades in multi-agent systems research. I mean, no sort of big
move forward. And on the other hand, you now have these sort of AI agent models based mostly
in large language models where it seems a lot is happening. But then when you dig a bit in,
often, if you leave them too unstructured, it's an interesting research exercise, but
practically not too much happens. So the sweet spot is still in between, is what we say,
where you provide a certain degree of structure and then within certain states,
the agent can be dealing with unstructured input or output and can do what you were just describing.
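The finite-state-machine structuring David describes, fixed rails with dynamic choices inside some states, can be sketched as a small transition graph. The state names and the stubbed decision function are invented for illustration:

```python
# Sketch of an FSM-structured agent: the agent traverses a fixed graph
# of states ("the rails"); inside a state like "decide", an LLM call
# (stubbed here) may dynamically choose which outgoing edge to take.

TRANSITIONS = {
    "collect_data":  {"done": "decide"},
    "decide":        {"trade": "settle", "skip": "collect_data"},
    "settle":        {"done": "collect_data", "error": "decide"},
}

def run_state(state: str) -> str:
    """Return an event; in 'decide' this could be a model's dynamic choice."""
    return {"collect_data": "done", "decide": "trade", "settle": "done"}[state]

def step(state: str) -> str:
    event = run_state(state)
    # the agent can only move along edges defined in the graph --
    # it cannot suddenly become a different kind of agent
    return TRANSITIONS[state][event]

state = "collect_data"
path = [state]
for _ in range(3):
    state = step(state)
    path.append(state)
# path traverses: collect_data -> decide -> settle -> collect_data
```

Unstructured model output can then live safely inside individual states, while the graph guarantees the overall flow stays on its rails.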
And, you know, how long we will be in this phase where it's so
in-between, I don't know. There are certainly attempts to build
sort of almost a large language model but for actions, where people sort of train
this into the model itself; we'll have to see when they're actually, I think, usable.
But if you want to use off-the-shelf technologies today, then you're sort of limited to still
providing some degree of structure. The other way to look at this is also from an efficiency
point of view. So once you actually know that your agent is meant to be an autonomous agent in
prediction markets, that it's meant to make its money there, and that you want to use it for
that, then it's kind of pointless if every time it's running, it has to figure this out from
first principles. That's a very dumb approach, right? The same way in programming, if I write an
efficient program, I might not generate everything dynamically. I might have, like, sort of hard-coded
lookup tables or whatever, where I just pull values out, because that's way more efficient
than if I were to generate them on the fly, even if I can. And so the same thing here: sometimes
you might want to apply an agent actually at the building stage. So, going back to what you were
saying earlier, Meher, applying agents to build agents is also something we are focused on.
So we have some internal tooling now where we are able to basically prompt
our tooling, and then it generates
sort of half of the agent.
Not all the code is finished.
There's still like some software developer engagement needed,
but it generates a lot of it.
So there's this angle as well from an efficiency point of view
where you don't necessarily always want to figure out everything at runtime.
You might want to, sort of ahead of time, build a better agent,
which is then forced to act within the bounds given by
that design.
I'm still a little bit confused, kind of, as to the agent terminology.
So I think there's these cases where I can imagine,
you have large games that you optimize for, and that means
you don't have to do so much on-chain, because you can optimize it off-chain, and
agents can keep each other in check, right?
And that, to me, is kind of like the multi-agent system,
at least in kind of my lay understanding.
But kind of in your description now,
it sounded like what I would have conceptualized as one agent,
you guys often think of as different agents
that are somehow amalgamated together into kind of like a super agent.
As you said, kind of like there's the prediction,
there's kind of the research agent and the prediction-making agent and so on.
Maybe you can kind of delineate the terms here a little bit for us.
Let me zoom out even like a bit further.
So one of the, you know, core things, which I guess is the idea of multi-agent systems,
is that you have multiple potentially different types of agents which generate some sort of
emergent outcome.
So if you look at any individual of those agents, then they themselves wouldn't bring
about this outcome.
it's only the collection that does.
So this is the example I was giving earlier
where you have these three types of agents
and then they're kind of coming together
and the outcome is sort of AI-driven prediction markets
in which no human ever participates.
Now, if we look at our stack specifically,
it gets a bit more interesting still,
which is that we basically say,
okay, going back to this idea of co-owned AI
and co-owned autonomous agents,
like what motivates us there?
Well, what motivates us is that we're a bit concerned
that as there is this tremendous pressure
to build better AI models
and as there's this tremendous pressure
to build better agentic AI systems
that ultimately they will be owned in a very centralized way
and also operated in a very centralized way.
And so the question is,
can you create basically a substrate
where people can own them in a decentralized way?
So now one obvious answer is if you somehow can make a smart contract smarter.
And there's a lot of exciting projects which are kind of trying to do that with ZKML
and other kind of technological approaches where effectively you just use a blockchain,
a public one, and you run some code on it,
which might have been sort of verified on-chain.
So verified on-chain, it might have been proved off-chain.
Now, in our case, what we offer is basically, okay, if you want to build an autonomous agent and you then want to run that code as a decentralized system, then you can do that.
So in the OLA stack, you can develop this trader agent, which I mentioned earlier, and you then can run it as a multi-node system.
So what basically happens is that the trader agent is like the whole made up of all these agent nodes.
And here it's a bit different because these agent nodes are effectively like blockchain nodes,
they're sort of replicating the work and also the code.
They're often quite identical or can even be fully identical instances of each other.
And then they work together to effectively become this trade agent.
And so on-chain, they're represented as a multi-sig.
And off-chain, there's this couple of nodes which have like state
synchronization between them.
So very practically, what they use is Tendermint at the moment as a consensus
gadget, so that all the nodes in the system agree on the actions this agent should take
next.
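The setup just described, replicated agent nodes agreeing off-chain on an action, represented on-chain as a multisig, can be illustrated with a toy model. The majority vote stands in for Tendermint, and the "signatures" are just node IDs; none of this is the actual Open Autonomy or Safe API:

```python
# Toy illustration of the multi-node agent: identical nodes each
# propose an action, a consensus gadget agrees on one (majority vote
# here, Tendermint in the real stack), and the action only executes
# once a threshold of nodes has "signed" -- mirroring the on-chain
# multisig representation of the off-chain service.

from collections import Counter

def propose_action(node_id: int) -> str:
    """Identical code and inputs -> identical proposals."""
    return "buy YES tokens"

def reach_consensus(proposals: list[str]) -> str:
    """Agree on the single action a majority of nodes proposed."""
    action, votes = Counter(proposals).most_common(1)[0]
    assert votes > len(proposals) // 2, "no majority reached"
    return action

def execute_via_multisig(action: str, signatures: set[int], threshold: int):
    """The on-chain multisig only executes with enough signatures."""
    if len(signatures) >= threshold:
        return f"executed: {action}"
    raise RuntimeError("not enough signatures")

NODES = [0, 1, 2, 3]
agreed = reach_consensus([propose_action(n) for n in NODES])
result = execute_via_multisig(agreed, signatures={0, 1, 2}, threshold=3)
```

The point of the two layers is that no single node can act alone: the action must survive both off-chain agreement and the on-chain signature threshold.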
So in the field of LLMs itself, right?
And now I'm referring to, let's say, like, the non-crypto part of building on top
of LLMs, which is probably a
thousand times bigger than the crypto part.
There's like lots of different frameworks that are kind of building agents using LLMs.
So Langchain is probably the most commercially successful.
But then you'll go and find, like, Microsoft AutoGen, which is a multi-agent system in how it's constructed.
But there are like loads of others.
In fact, the problem is, it's a problem of plenty
rather than a problem of scarcity.
So maybe to start with
in terms of like the agent framework you are building,
what is like really different about your agent framework
from the things that might be happening outside crypto as a whole?
Yeah, I think one key thing is that we always
build systems which are sort of able to take action on-chain,
like any other user.
So we see autonomous agents as these sort of
daily active users of various protocols.
And we can talk about this later in a bit,
what benefits that has for the protocol.
But that means that in our case,
sort of the crypto wallet and also the on chain representation
of the off chain agent are like first-class citizens.
So we think of this from the design beginning
and that has implications.
For instance, when we come back to this trader: if I want to co-own, let's say, a Langchain agent,
well, you'd have to basically build what we've built, because you need some way of basically
sharing ownership of, let's say, an on-chain wallet, like a Safe, let's say, a multisig,
with these off-chain instances of agents.
So our framework lets you do this.
That's one way to look at it.
So it's just a sort of native crypto support.
I guess. The second thing is, if you go further into co-ownership, there are sort of two extremes there again.
So if co-ownership can be achieved entirely on-chain, so for instance, you have like a Safe
which has some assets, and now you have a lot of, let's say, Langchain agents or AutoGPT agents or whatever,
one of those framework agents, all kind of holding a wallet and then being signers
on the Safe, then this could work, right?
Because they don't necessarily need off-chain consensus,
depending on what the application is.
But actually, once you look into the interesting application,
turns out that almost always,
once you go beyond, like, simple things which are done on-chain,
you need off-chain consensus.
Because often it's like things like,
even an Oracle needs to agree off-chain,
potentially on the data it wants to put on-chain.
Certainly efficient oracles want to do that off-chain.
and then if you imagine this off-chain system wanting to act upon something else off-chain,
then for sure you also need off-chain consensus.
So there it then also again helps to have a stack which gives you this out of the box, which Olas does.
Now, a third way to look at it, and this is sort of purely independent of crypto
and more sort of on the structuring of the agent, is, coming back to our discussion earlier,
automation versus autonomy and like sort of fully AI based and dynamic agent systems rather
than those which are maybe like sort of based on hard-coded rules.
The reality is, if you want to build real use cases which people can use today,
and which are actually meaningfully and securely achieving something, then you can't yet go
with these sort of fully unstructured models where
you just basically repeatedly prompt an LLM.
Like, you can do it, but it doesn't work.
I mean, you need to provide structure.
And then if you look at the frameworks, you know,
Langchain is an interesting example.
And, you know, I have nothing bad to say about it.
But it's an interesting example.
They're moving towards graph structures as well
because it's pretty obvious that a chain won't cut it.
Like your decision making is almost never a chain.
That is the most basic kind of application
where it's like A, B,
D and then going back to A, right?
The reality is you're going to have, even in the most basic application,
you're going to have the happy path, which might look like this,
but then off the happy path, you have all these error paths,
which need to sort of loop back to different states.
And so you're basically in a graph structure.
And so that's where we started our journey.
We basically, like five years ago, said,
well, if we're building autonomous agent systems,
then it's unlikely that we're going to have, in the short term,
these sort of fully open-ended models which we just need to train, out of which somehow the
agentic system pops out. Instead, we still need to provide some rails and then use
models alongside those rails. And these rails, in our case, are these basic graphs along which
the agent has to travel. And now, if you put it all back together, I think one of the benefits
you have with our stack is that you can go and say, okay, I have
a use case where there are some states in which the agent is very free, there are other states where I want the agent to just travel along this track, then I can do this; and now I also want this agent to take action on-chain ever so often, then it already comes out of the box.
So we obviously, from the beginning when we built the framework, we're really heavy users of the safe.
So we had like Ethereum support with safe since basically day one since the framework is usable.
And now, as we're sort of expanding it to other types of blockchain ecosystems,
we're always kind of applying the same design paradigm again, where we pick a multisig
which is dominant in that ecosystem and then build the compatibility of the stack around it.
Okay.
I think I'm now less confused.
about the agentic part.
But I'm still confused about
kind of like the protocol as a whole.
So kind of like if you look at the stack,
now we kind of have some understanding
of what these agents can and can't do.
I can't just give an agent...
I don't know, I can't just say,
here you have a hundred DAI, you make me some money,
and basically the agent will go and, kind of,
either build an arbitrage bot or
make saucy pictures on
Midjourney and put them on OnlyFans.
I mean, so basically it's like you have to give it some structure.
I understand that now.
But how do you put this all into a protocol
and kind of where does the co-ownership come in?
Because this is something that in principle
with an LLM model and some dev background,
I could just do on my own, right?
I don't need Autonolas for that.
Yeah.
And that's a great question.
So basically, I think one of our core insights is that it's not about building individual agents.
It's about building effectively many agentic systems which can interact, because ultimately we're big believers in specialization.
And even from a very practical point of view, if we want to build better agents, people will build very different agents.
So the framework will have to cater to very different sorts of use cases.
So the protocol was always designed to enable basically entire agent economies and enable their bootstrapping.
So there is a couple of mechanisms which facilitate that.
The stack itself is open source.
So when you have an open source stack, there's never a forcing function to tell people,
oh, you have to use this protocol.
So you have to basically create a reason on top
why it would make sense for people to engage with this protocol.
So one thing which we noticed is if we want to have these
basically autonomous agent use cases really grow,
then we need obviously a lot of
developers who build on the stack.
Why do developers have the benefit of building on the stack?
Well, there's some of the technical reasons we mentioned before,
but there's also one of composability.
So we basically have created a very composable framework where it's not so much about composing arbitrary Python libraries, which is the focus of a lot of the other frameworks, but where it is the focus of the stack to compose business logic itself.
And that's particularly with autonomous agents of the current generation, if you think about it, it's very important.
So if I have, like, for instance, this trader, and at some point it's going to settle a transaction,
you might say, well, that is just a matter of sending the transaction.
Well, this is actually not true.
There are around like 20 or 30 states in the finite state machine which takes care of settling
the transaction, because on the happy path there are various things: you need to get it signed
by all the agents, you need to then submit it, you then need to wait for it to be settled.
And if anything goes wrong in any of those states, the resolution
looks different.
Now, you don't want a developer to re-implement that.
Similarly, if you think about things like interacting with these prediction markets,
that might actually be like something which you might want in another agent.
So being able to kind of compose these things is very, very interesting.
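To make the settlement idea concrete, here is a heavily simplified sketch of such a finite state machine (the state and event names here are invented for illustration; the real stack has far more states and per-state recovery logic):

```python
from enum import Enum, auto

class State(Enum):
    COLLECT_SIGNATURES = auto()  # gather signatures from all agents
    SUBMIT = auto()              # broadcast the multisig transaction
    WAIT_CONFIRMATION = auto()   # wait for it to be settled on chain
    DONE = auto()
    RESET = auto()               # recovery state: resolution differs per failure

def settle(events):
    """Drive the machine with a stream of events; any unexpected
    event routes to the recovery state."""
    state = State.COLLECT_SIGNATURES
    happy = {
        (State.COLLECT_SIGNATURES, "signed"): State.SUBMIT,
        (State.SUBMIT, "submitted"): State.WAIT_CONFIRMATION,
        (State.WAIT_CONFIRMATION, "confirmed"): State.DONE,
    }
    trace = [state]
    for event in events:
        state = happy.get((state, event), State.RESET)
        trace.append(state)
        if state in (State.DONE, State.RESET):
            break
    return trace
```

On the happy path every event advances the machine; any unexpected event routes to a dedicated recovery state, which is exactly the part a developer would not want to re-implement by hand.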
And so one big part of the protocol as a result is this focus on creating a developer
incentive mechanism whereby developers get rewards for contributing these pieces of
agents and entire agents into the stack.
So that's this code side of things.
They can do that permissionlessly.
So very practically, you know, you develop the stuff, you register it on-chain as these NFTs.
And then there's a sort of reward system which works sort of on an epoch basis.
On the other side of this is the question of capital. So obviously the developer rewards
come partially from, you know, emissions, but over the longer term they will have to come from
productive agent systems which the DAO kind of operates. I'll get to that in a moment.
But even to get there, basically you need a bootstrapping mechanism whereby
people can actually use this
OLAS token. And so that's
where bonding comes in. So whenever
the protocol is deployed on a new
chain, then effectively
there's a bonding mechanism
in place whereby
people who use or believe in the
protocol can provide liquidity
in this
token and the chain's token,
and then return
that LP token to the protocol
and receive effectively
OLAS. And what
this does is that basically you have a very decentralized way of bringing that utility token to more
chains. Why do you want it on more chains? Because that's the third bit, which is staking. So once I
obviously have code which does something useful, and now I want to be able to actually
operate these agents, and as we noted before, you can operate them like a decentralized system,
so it ends up looking a bit like operating a blockchain, so you effectively
then have the staking system, where the operators of the nodes in any given agent can
basically earn these staking rewards. And in order for that to all be smooth, it helps
when the token is basically accessible on that chain directly.
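As a back-of-the-envelope sketch of how such a bonding mechanism could work (the formula, numbers and discount parameter are purely illustrative assumptions, not Autonolas' actual tokenomics):

```python
def bond(lp_value_usd: float, olas_price_usd: float, discount: float = 0.05) -> float:
    """Hypothetical bonding: deposit LP tokens worth `lp_value_usd` into the
    protocol and receive OLAS at a discount to the market price."""
    effective_price = olas_price_usd * (1 - discount)
    return lp_value_usd / effective_price

# e.g. $950 of LP tokens at a $10 OLAS price and a 5% discount
# would mint 100 OLAS under these toy assumptions.
```

The protocol ends up holding the LP token (deepening liquidity on the new chain), while the bonder ends up holding OLAS, which is what makes the utility token usable on more chains in a decentralized way.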
So that's sort of the three mechanisms, staking being the last, and then the code-capital
sort of pair,
and then we can
dive in there if you want.
Yeah, so actually that's a lot of things,
right? So let's try to recap.
So
right at the layer of building the agent
what you're saying is like, okay,
your framework
in a sense, like there are many frameworks
which can
be used to build agents,
but the
differentiation of the Autonolas
framework is that it differentiates on two dimensions. The first dimension is it contains
components that would make blockchain integration and blockchain transaction creation
easy. So this could involve things like, okay, an agent, if it needs to
interact with a blockchain, it needs to store a private key. So maybe it needs some components for
securing of private keys. It needs components by which it can read blockchain data.
It needs components by which transactions can be sent and it can figure out whether they were
confirmed or not. So there are like some standard pieces of logic that are used in a lot of different
places. Maybe even exchanges use them in their hot wallets or things like that. And you're going
to build like the standard versions of those components and integrate them into your framework,
so a developer doesn't have to worry about those aspects.
Then the second thing your framework is providing is
some kind of cognitive architecture.
By that, what we might mean is that you want the agent
to basically apply its intelligence,
but you want to constrain its intelligence in a certain manner,
which is like, you know, for this
particular problem, always create a tree and reason through the nodes of the tree,
or for this particular problem, create a line, and there are nodes, and reason through all of
these nodes.
So particular problems might have particular ways of thinking that if we constrain the agent
to think in that particular way, it will produce better outputs.
And so you are providing a way to develop against some
of these constraints, right?
So a developer can put these constraints into their system
and then they could use it.
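One way to picture this constraint, as a toy sketch (the `run_track` and `toy_think` names are invented, and `toy_think` merely stands in for a real model call):

```python
def run_track(nodes, think):
    """Constrain the agent to a fixed 'track': it may only reason at each
    node, in order, and each step sees the results of the previous ones."""
    context = {}
    for node in nodes:
        # Unstructured step: the model is free *within* this node only.
        context[node] = think(node, dict(context))
    return context

# Stand-in for a real model call (an assumption for illustration).
def toy_think(node, context):
    return f"reasoned about {node} given {sorted(context)}"
```

The structure (which nodes exist and in what order) is fixed by the developer; only the reasoning inside each node is left to the model, which is the sense in which the framework constrains the agent's intelligence.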
Those are on the framework itself.
And then what you're saying is actually about the network itself.
So the framework deals with the problem
of how do you build a single agent, or how do you build
two or three agents and they coordinate with each other.
But then you jump to the network level where it's the problem of ultimately you want thousands of agents to be built.
And there the kinds of problems you're trying to solve are how to provide developer incentives for the improvement of your agent framework itself.
Yeah, so if we zoom out a bit, what we ultimately want is, well, even if you
zoom out a bit more, the co-ownership of autonomous agents and agentic AI ends up being, I think,
if you think about it, like the decentralized autonomous organization is sort of almost the end state
of that. So if you think about this concept of some organization which we own, which is in
itself autonomous and which has the highest degree of decentralization we can achieve, then
ultimately this will be using forms of AI and be agentic, right, by its definition. And so
the different angle at which to come at this is to say, okay, how can you basically coordinate
all the actors which need to make that happen, right?
Because if it's just on-chain,
then you're always constrained by what you can do on-chain, right?
So if you just have a smart contract,
then there's always someone who has to call that smart contract
for something to happen.
And by necessity, you will always be limited to what's possible on-chain,
which I think will always be less than what's possible off-chain.
And so in a way, the other way to look at it is to say,
how can you create basically a protocol,
Autonolas, which allows the creation of these kinds of co-ownable autonomous agents.
And then this means you need to coordinate a bunch of actors. You need to coordinate those
who are developing them. That's why you have the dev incentive mechanism. You need those who
operate them, which is around staking. And you need those who basically provide this
liquidity for the whole system to exist at any given point, which is the bonders. And so
that's kind of what the role of the protocol is, is to coordinate all these actors. Now, obviously,
it's highly complex, so we should make it a bit concrete. If you think about what we had earlier
discussed quite a lot, the trading agent use case with the prediction markets, then there's
this system called the mech inside of it, which is this third type of agent, which basically
just specializes in making the predictions. And these kinds of
agents are basically something which you can imagine running as this decentralized system,
which the Autonolas DAO itself then can own.
So you effectively then have a situation where the Autonolas DAO can provide on an ongoing
basis this kind of off-chain system with configurable degrees of decentralization,
which offers these services to other agents in the Autonolas ecosystem.
And then that allows you to sort of bootstrap this over time.
That makes sense.
Can I think of it like this?
So today we have a few different chains that are trying to build what I call like puppet accounts or delegated account control.
Those are two interchangeable words.
But the essential idea behind it is,
so the NEAR network is trying to
build this. Okay, so
NEAR's idea is that, okay, there's
a blockchain with a sort of, with a set
of validators, and what
if this blockchain itself
could own a Bitcoin
address, not only a Bitcoin address
but also an Ethereum address.
And so, from
the perspective of Bitcoin, it's like a normal
address with a private
key, but
the private key
is actually split among
the validator set of NEAR by
some really smart cryptographic protocol.
So Bitcoin thinks this is like a single,
it's a normal address, a single individual,
but in reality underneath,
it is actually the validator set of NEAR
that controls that account.
And in a sense you can say that,
okay, the NEAR network by itself is kind of like
owning this address on Bitcoin and this other address on Ethereum.
If you start with that point,
and then can you kind of further
layer on the idea: is it possible that, okay, there could be a way by which a network
could own not only an address, but an address plus a piece of like running code.
And that running code is, one, an economic agent.
That running code is an Autonolas framework agent.
So it has an address,
and it has some kind of like structured and unstructured logic.
So you can actually message it, give it tasks, and expect responses.
And so Autonolas is trying to do that.
Ultimately, like, how do you have a DAO that can own an address plus some kind of code?
And it owns both of those components together.
And then it can also sort of like
make money through it.
That is what you're seeking to achieve.
Well, this is, yeah, so we would call this like a protocol on top.
So it's basically, if we go back to this concept which you said earlier,
if you have an existing validator set, so basically you could say,
okay, well, let's just do this all on chain, you know,
like let's just somehow modify the chain so it can sort of run long-running tasks.
And then you will find very quickly that there's all these arguments as to why that cannot work.
Like you need an application-specific chain in order to have long-running tasks.
Because if you have a public chain, it becomes an immediate, basically,
attack vector for denial of service, distributed denial of service,
because you can just sort of preempt future blocks indefinitely by scheduling tasks for future blocks now.
So effectively, whatever NEAR is doing there, I don't know in too much detail,
but there's limits to kind of putting too much on a public blockchain,
which is meant to run repeatedly or schedule, basically.
So you need to do it on some sort of application-specific layer.
And now you could say, okay, well, we can just run some sort of layer two or layer three
or layer N or whatever.
And ultimately there it's mostly about having sort of, again,
an architecture where you can basically
inherit some degree of security, right, and execute some of those instructions.
And in a way, I think ultimately, you know, in the future one day, an autonomous service will
look quite similar to an app-specific roll-up potentially, because it will basically have a lot
of degrees of verifiability, and it will potentially even inherit some of the security as a result
from the chains on which it acts,
but it will have this more autonomous,
long-running task here,
which it is executing,
which is different from like a public blockchain
where I always need to basically at any given time
offer these blocks,
which accept a certain amount of basically bidding into them,
and then once they're full, they're full, right?
I can't guarantee you that I'll execute you,
whereas an autonomous service can do that.
It can say, well, I'm application-specific,
ever so often I'm doing exactly this thing.
Okay, so I feel like this has become super abstract.
Maybe let's kind of make some examples, right?
So one of the main topics that you posit this will be used for in the short term is
optimization of DAOs.
Can you give us some examples of
how things work in DAOs today, and how you see them improving by kind of putting these
autonomous agent systems on top of them or kind of enmeshing them?
Yeah. So actually there's a nuance to this question, in the sense that originally we thought, when
we started out with the stack, that DAOs are this primary customer for it, right?
They have various off-chain processes which are often quite centralized, so let's help
make them more decentralized and more autonomous, both things which are in their name.
Turns out, from a go-to-market point of view, it's not particularly great, because a lot of
DAOs actually have a lot of things to do and they're maybe not the best organized entities always.
And so it takes a lot of time and you're not getting to the goal very fast.
You also need to coordinate a lot of actors, by the definition of it.
So actually what we noticed is that, whilst we still believe in this, and I'll talk about an example, it's better to focus on problems we see in our own DAO and make them as autonomous and decentralized as possible, and/or just build basically users for other decentralized protocols.
So what I mentioned earlier, the use case, these autonomous agents are basically users of Omen, users of Gnosis,
users of Safe. You know, they have done around 70% of all Safe transactions on Gnosis since
summer. Like, on a weekly basis, basically they've done hundreds of thousands of transactions,
which basically benefit these protocols on which they're deployed, and obviously themselves
as well, because they're profitable.
Now, an example which I like, because it's very easy to understand, and which can apply to
many DAOs and which they can adopt quite easily, is the Governatooorr.
It was a bit of a joke project, which is sort of slowly maturing.
Basically, it's built on the autonomous service stack which Olas offers.
What it does is it basically replaces a human delegate in a DAO.
So if I obviously have tokens and I don't always want to vote, I could delegate them to
someone I think will vote in my favor, like with my intent, and so on.
And we implemented that in code.
So basically there is an autonomous service which continuously watches those DAOs for which it holds delegated tokens.
And then when it sees proposals, either on Snapshot or on chain, it can then vote in those proposals.
And obviously, in order to do that, it needs to use a large language model to actually read the proposal
and reason about it.
It also needs it in order to make sense of the preferences it is given, and to sort of
bring those two things together to arrive at a voting decision.
But the actual voting, coming back to the structured versus unstructured, is a very structured
process.
There's zero point in having the agent figure this out every time because it will probably
fail most of the time.
Instead, you just have that part hard-coded, right?
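That structured/unstructured split could be sketched roughly like this (the function names and the toy model are invented for illustration; this is not the Governatooorr's actual code):

```python
def decide_vote(proposal_text, preferences, llm):
    """Unstructured part: a model reads the proposal against the
    delegator's preferences and returns a verdict."""
    verdict = llm(proposal_text, preferences)
    # Structured part: the actual vote submission is hard-coded and
    # validated, never left to the model to figure out each time.
    if verdict not in ("for", "against"):
        verdict = "abstain"  # fail closed on unexpected model output
    return {"action": "cast_vote", "support": verdict}

# Stand-in model (an assumption): votes 'for' when a preferred keyword
# appears in the proposal text.
def toy_llm(proposal_text, preferences):
    return "for" if any(k in proposal_text for k in preferences) else "against"
```

The model is free only inside the reading-and-reasoning step; the vote-casting step is deterministic code, which is why it cannot "fail most of the time" the way a fully free-form agent could.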
So basically, this is a nice example of what we were discussing earlier in very
abstract terms: you have these sort of structured bits which are defined very well, and then you have
these unstructured parts of the logic where you're looking at these proposals, making sense of them,
and so on. Are there new attack vectors that are introduced here? So basically, if I
trust an autonomous agent to make voting decisions for me, I rely heavily
on the fact that this autonomous agent actually will act in the way that I would act if I were to look into it, right?
So how do you make sure that the agents actually do what they are meant to do, on the face of it?
It's a great question.
So there's two parts to this. Well, many, but I would split it in two.
One is the preferences. So that's where the Governatooorr
falls short: it doesn't actually allow you to express very rich preferences at all,
and that's just a matter of the time and effort which has gone into this part of the application.
But one side which it excels on is basically the certainty that it implements the
decision logic which it is meant to implement. So if you think about a human, if you delegate
to them, you basically have no clue, right? It's all reputation-based. If you were to
delegate to a single LangChain agent or AutoGPT agent, well, it really depends on the developer
who is running that. Are they even running it? If they're running it, are they running the code
they told you they're running, right? All this kind of stuff. Whereas with an autonomous service,
which has multiple nodes operated by different operators, you then start getting into a similar,
basically, threat model, which you have with, like, you know, your Cosmos chain, basically,
or any other sort of Byzantine fault tolerant system, whereby you have to reason about,
okay, how many operators are there, how decentralized is it, and then is the majority of them
honest. If the majority of them is honest, then you have very high security guarantees, because
effectively what happens is that each one of them has to agree, or the majority of them
has to agree, and each one of them uses these models, so you're not even relying on a single
model instance, which is another issue with large language models: they're not necessarily
deterministic at all. They sometimes can be configured to be, but some of them can't even
be configured to be deterministic. So then having multiple agents each come to independent
evaluations and then sort of pool that decision making and then agree is actually a massive
improvement. So on that dimension, I would say the Governatooorr is already better than a human, because a
human could, you know, do whatever, and here you have a node system implementing that
decision logic. Yeah. So I think kind of what we often try to do in these episodes is kind of we
try to understand how exactly things work. And I think this was more an episode about kind of
talking about why it would make sense to have something like this. So I want to
change gears a little bit here, and kind of ask about concerns you may have about this.
So kind of like, if you look at AIs, the way that they have improved in the last couple of years,
at least kind of like in the popular mind.
I know that kind of it's been a long time coming and so on, but it's really impressive, right?
It seems absolutely certain that they will kind of surpass human
ingenuity and capacity on all kinds of axes in, you know, the very short term.
And if you talk to AI safety people, kind of often they will tell you they're not so concerned
because you can always switch it off. And now kind of pairing it with the technology that by
definition, no one can turn off. Does that worry you? Yeah, I think it's
a good topic to discuss, and one we will obviously not settle here.
I think the first thing which I strongly believe in is that it's very, very, very unlikely
that there will be just sort of one model which kind of runs away and like takes over.
And that's just even in like very favorable cases to the sort of super intelligence arising
and being able to consume a lot of resources,
there's like geographical, physical sort of constraints
which make it unlikely.
I think what's much more likely is that we'll have a situation
where certainly a lot of centralized players
will own very, very powerful models.
And so I think actually what we should be most concerned about
is the economic impact of this kind of change in technology on people,
rather than these hypotheticals where some software slays us all.
I think it's important to kind of keep it in the back of our minds,
but like with every technology, be mindful as to when these dangers become more apparent
that we kind of think about them, but like the much, much bigger concern,
is economic, on the economics.
If you listen to someone like Sam Altman, it's this naivety about
the economics which really riles me up.
Like, they all go around and say, you know, I mean, by all means, they're
great, like, you know, entrepreneurs with great products and so on, but everyone has
their weak points.
So I think here it's this kind of naivety around: just because I create better technology,
everyone will be better off.
Well, it never worked out that way.
The reality is that it's always a distribution question.
And if the distribution sucks, of access to these kinds of models
and people's ability to use them for their lives
and improving their own situation,
then it doesn't matter how good the best model is,
then there will still be even bigger disparities
in sort of income, health and wealth around the globe.
And I think that's what we should all be really worried about,
and that's kind of the mission of our entire business
and the mission of Autonolas: creating these kinds of systems
which can be co-owned, so that there can be groups
who can share these systems.
That doesn't mean that all problems are solved
because now these groups could, again, be better off than others
and you still have these kind of distributional issues,
but at least it's a start.
So I'm worried about the economic impact of this much, much, much, much more
than these kind of hypotheticals,
which I think are interesting for different
conversations, but which really kind of miss the point, mostly. Having said that, I think,
you know, let's say we fast forward. There are multiple generations of advances, and
even models which are basically agentic in the model itself, you know. Some call them
large action models now, I saw, and others call them differently. Then, you know, OpenAI has their
reinforcement learning merged with large language models.
There's different attempts, whatever it will be in the ultimate state.
If we imagine that to run in a sort of blockchain-like way,
where it sort of has a bad intent and we can't turn it off,
yes, I think it's something we should keep in the back of our mind and think about solutions.
But I think the flip side of this is, again, that if this model is used for good,
then having transparency and kind of censorship resistance can bring many goods as well.
So I mean, let's take it one step at a time, basically,
and focus on the problems which we for sure know will happen,
which I think are distributional.
I feel like we've touched on many, many things.
If people want to learn more, or kind of build their own agentic systems
on Autonolas, or kind of just use systems that are already there,
where should we send them?
Yeah, so we have this thing called the Academy.
That's a great start.
So that's for people who want to basically have more support as they're building.
We have the docs.
All of that can be found on the website, olas.network,
and then if you follow Autonolas on Twitter as well, there are like weekly updates.
So I think those two places are the best.
Perfect.
I am so curious to see how this is going to evolve.
I think we should pencil in kind of a follow-up soonish just to see kind of like what people
build and kind of how it actually changes things because the opportunity space
here is absolutely enormous. Yeah, let's do that. Yeah, it's been a pleasure to have you on.
Thank you very much. It was a pleasure being on. Thank you for joining us on this week's episode.
We release new episodes every week. You can find and subscribe to the show on iTunes, Spotify, YouTube,
SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast. Go to
epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your
inbox as they're released. If you want to interact with us, guests or other podcast listeners,
you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show,
and we're always happy to read them. So thanks so much, and we look forward to being back next week.
