Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Humayun Sheikh: Fetch AI – Decentralising AI Economies
Episode Date: March 16, 2024

While large language models (LLMs) are rather passive from an economic perspective on their own, AI agents offer a preview of what truly autonomous AI applications can achieve. Fetch.ai aims to create... a platform for economic interactions in the AI economy, where participants can provide many different kinds of stake, ranging from purely financial, in the form of cryptocurrency tokens, to utility-based, in the form of data sets that LLMs can be trained on. It thus creates a supply chain that links different actors of the AI economy.

We were joined by Humayun Sheikh, co-founder & CEO of Fetch.ai, to discuss AI economic models and how LLMs can be integrated by agentic systems as a foundation for autonomous AI apps.

Topics covered in this episode:
Humayun's background
Founding Fetch.ai
Multi-agent systems
Autonomous economic agents
Building a Cosmos-based blockchain
Integrating ML with the agent economy
Scalability & interoperability
Use cases & partnerships
AI x crypto projects
Incentivising developers
The AI alignment problem
Fetch.ai roadmap
The future of ML & LLMs

Episode links:
Humayun Sheikh on Twitter
Fetch.ai on Twitter

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first decentralized payment network. Get started today at gnosis.io
Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/539
Transcript
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Humayun Sheikh, who is the founder and CEO of Fetch.ai, which is one of the older AI slash blockchain crossover projects.
And we've had Humayun on the show before, probably like five years ago.
But before we talk with Humayun about Fetch, let me tell you about our sponsors this week.
This episode is brought to you by Gnosis. Gnosis builds decentralized infrastructure for the Ethereum ecosystem. With a rich history dating back to 2015 and products like Safe, CoW Swap, or Gnosis Chain, Gnosis combines needs-driven development with deep technical expertise.
This year marks the launch of Gnosis Pay, the world's first decentralized payment network. With a Gnosis Card, you can spend self-custody crypto at any Visa-accepting merchant around the world. If you're an individual looking to live more on-chain or a business looking to white-label the stack, visit gnosispay.com.
There are lots of ways you can join the Gnosis journey. Drop in the GnosisDAO governance forum, become a Gnosis validator with a single GNO token and low-cost hardware, or deploy your product on the EVM-compatible and highly decentralized Gnosis Chain. Get started today at gnosis.io.
Chorus One is one of the biggest node operators globally and helps you stake your tokens on 45-plus networks like Ethereum, Cosmos, Celestia, and dYdX. More than 100,000 delegators stake with Chorus One, including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields, but also the most robust security practices and infrastructure that are usually exclusive to institutions. You can stake directly to Chorus One's public node from your wallet, set up a white-label node, or use the recently launched product, OPUS, to stake up to 8,000 ETH in a single transaction. You can even offer high-yield staking to your own customers using their API. Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
Hey, Humayun, welcome on Epicenter. Thanks for joining us today.
Thank you for the invite. It's lovely to be here with you.
Perfect. Before we deep dive into Fetch, can you give us a bit of background on yourself?
Yeah, so my background started in gaming, computer science and gaming, and then we got introduced to Demis, who became a very good friend, Demis Hassabis, who was the founder of DeepMind.
And then I went into DeepMind with him.
I was one of the first investors and one of the first five
who kind of was looking at commercializing AI technology.
So I exited once we sold to Google.
I think it's around eight, nine, ten years ago now.
So once we exited, I was very much interested in how we can granularize AI and how we can actually build solutions which smaller businesses and individuals can use. So that's how I started Fetch.
Cool. So Fetch was started in 2019 or so. And at the time, this blockchain-AI crossover field, that's now clearly in bloom, was very nascent. So I think at the time there were probably projects like Numerai and Ocean, but they were few and far between.
So what was the inspiration to make this move to do AI things on chain?
Yeah, it's quite interesting. When we first started Fetch, I used to talk to people about, you know, AI agents, and people would look at you like, I think you're losing your mind. This isn't going to happen anytime soon. Why do you even bother?
But the concept behind Fetch was to build a multi-agent system.
And we looked at building the multi-agent system, which was open and a bit more decentralized
than where AI sits today.
because AI, as you obviously know, is based on machine learning algorithms, and these algorithms work best the more data you have, and most of that data sits with big companies. So how do you actually bring all of this together so that individuals and smaller companies can actually use it, deploy a solution onto their own business, and use it as individuals?
So that was the premise where Fetch started from.
And we looked at building this multi-agent system in a centralized way,
which kind of defeats the objective.
And then we looked at blockchain and the decentralized ledger technology,
and we felt that the convergence of the two was where we needed to be. I saw the convergence five years ago, but most people are only seeing it right now.
because what blockchain enables you to do is provide this kind of decentralized way of orchestrating
and keeping a record of the transactions, of interactions, of training, of machine-learning data. So all of that could be put on a blockchain to keep a record of it,
and also to actually enable you to orchestrate solutions,
which are visible to others if you want them to be.
And they are very auditable, because one of the main things which I focused on, and we focused on at DeepMind, is how do you create this explainable AI?
How do you create this ability to audit what AI has done?
And if you do multi-agent systems, where is the record keeping, you know, who is in control of it?
So blockchain fitted really nicely into that premise, and we started working at the convergence of those two technologies.
Cool.
There's a lot to unpack here.
So you've used the term multi-agent systems several times now.
So I assume this just means several agents who kind of have different goal functions,
or how do I understand multi-agent systems?
That's kind of it, really. Let's unpack it right from the top.
So if you look at a system where a lot of stakeholders are interacting with each other, effectively that's a multi-agent system, because what you have is all these stakeholders trying to achieve an objective, and others who are participating in the completion of that objective, and you have various stakeholders with different objectives.
Now, that's a very kind of generalized concept here.
So in a multi-agent system, you have to be aware of whether it's a zero-sum game. How do you make sure consensus is achieved? Because that's not assured. But luckily, all those problems are largely solved in DLTs. So we just learned from that whole process. So that wasn't really the big issue. What we were trying to solve, and this will come to what is happening right now, was the following.
So think back five years ago and imagine what was going to happen. If you took a piece of software which is built as a monolithic system, and you took all the functions which exist in that software, and you actually made each of them independent, now what you have is all these functions interacting with each other to build an application,
but these are not monolithic applications.
These are applications which can be built and can be composed, can be orchestrated by multiple
stakeholders in multiple different ways.
So that's kind of the premise where we started from.
Now, if you work down from there, you think, okay, so what's the best way of granularizing those functions? That turns out to be something like an agent. Now, an agent, in its simplest form, is
a piece of software which does something on behalf of some instruction, which is owned by somebody.
So now you have these functions, which have this communication methodology where they can
communicate with each other peer-to-peer. They can actually compose themselves based on whatever the
objective is, whatever the application is.
and they can exchange economic value. So we call them autonomous economic agents, a term coined by myself for Fetch.
So we focused very much on that technology.
How can you make them communicate with each other?
How can you compose services?
How can you write protocols, and how can you make sure that these protocols can be generated dynamically?
So that was kind of the universe that we were trying to explore and build together.
So now you fast forward three or four or five years,
then what you realize is that now suddenly you have these large language models, which actually take your objective and convert that objective into an action. But at the moment, what is happening is that you have the text interface, and it takes your objective, understands it, and gives you a solution around it. But what is still not happening at the moment,
and it's in its very early stages, is how do you take those components that form the action
following on from your objective? And how do you execute those actions? So now, if you take what
we were thinking, which is taking the agents, these small components, these functions, these microservices,
which can be self-composable,
and you connect that to this new way
of taking your objective
and converting that into an action,
and then you go into this space
where action actually orchestrates these functions
to deliver the objective that you came in for.
Now, I know there's a lot of stuff
which I'm kind of mentioning,
but think of it like this: I go to a software bot and I say to it, I need to build an application which does this. And rather than just writing the code, it picks up the code which is already there, using microagents, and puts them together on the fly, and delivers your objective. So the components can be built by multiple stakeholders, by multiple developers. And you compose your service, your application, on the fly using these microagents.
Yeah, so can I give an example to kind of see whether I'm getting this right?
Okay, say I want an AI assistant. And I'm telling the AI assistant, I have the night off. What could I do to unwind?
And then the AI kind of brainstorms with me and said,
do you feel like going out for a drink?
Do you feel like going out for dinner and a movie and so on?
Then I say, dinner and a movie sounds nice.
and then they kind of pull up restaurants near me
that might have availability tonight, and they ask which ones I like,
how many people I would like to go with,
whether I would like to invite someone,
and then they would suggest movies to me,
possibly based on what I have liked in the past
and be able to kind of make reservations for me
by calling the cinema.
Is that kind of like the right idea here?
That's absolutely the right idea. But let's now break it down into the components, right? So when you speak to your assistant, that's a large language model you speak to. So that's not where we are, right? Of course, you can have specialist large language models, and we have our own language models, but let's just compartmentalize it in the right way. So you ask this question, let's say to your Siri or to your Alexa, or you type it into OpenAI's chat or whatever. And it kind of says, okay, yeah. So now there are two issues. One, is your context being added? Because it
needs to understand who you are, what your preferences are, where you live, and what kind of things you like. So that context is not just the LLM. The LLM is a general foundational model, for example, but it needs to bring that context in. And to bring that context in, let's now imagine that you have an agent which sits and speaks to your LLM. So now that's your agent. The agent holds information about you, and it can feed that information into the LLM. So if you think about it, it's kind of RAG, right? So it can actually create the suggestions based on the preferences you have, which the agent holds. So that's one component.
So let's call this a preference microachial.
which sits with you. So that goes in, automatically provides that. Now you have, okay, yes, you have
three options, four options. Now, that's the large language point because it's absorbed all the
data, it knows how to suggest things to you. It suggests a restaurant, a cinema, and it's something
else, whatever that may be. But that's where it stops, right? That's where the LLM stops. So now
somebody has to go and create these welcome integrations that, say, the integration system,
with a cinema, integration sits with something else, with a restaurant, booking. Now, you could do this
via aggregators. So, for example, you can add this aggregator which has all the cinemas on it.
Or you can do it in a different way, which is the more efficient way, which is where the paradigm shift is coming, which is: every cinema says, I have an agent, and this agent holds information about that cinema. When your LLM or your system suggests that you should go and watch a film, and you say, okay, yeah, tell me what films are around, what can I book? Then, based on your preferences, that agent goes and speaks to other agents to find out what's in your area, what's relevant to you, rather than this whole, you know, I-can-generate-whatever-I-want. It's a very deterministic approach.
So your agent goes to the cinema's agent and says, hey, you know, this seems relevant, I'm going to propose it. Do you have any availability? Do you have any seats? So your agent speaks to the cinema's agent and automatically works out if there is availability, because there's no point suggesting it to you and then you go on the website trying to book the seat. No, if it's proposing something to you, that means the availability is also there. So you can then say, hey, go book it. And then the two agents interact with each other, not through an intermediary, not through an aggregator, not through some weird platform. You're having this conversation in whichever channel it is, and it can then go and book it for you. So rather than saying, oh yes, I like this idea, now show me where this cinema is, and then I go and see the cinema, click it, and see, oh, there's no availability because the cinema is booked. All of that is completely bypassed, and you go from making a direct connection between the two microagents to delivering your objective based on, you know, what you proposed initially.
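The peer-to-peer flow just described can be sketched as two toy agents talking directly, with no aggregator in the middle. Everything here is an illustrative assumption (CinemaAgent, UserAgent, and their methods are invented for this sketch, not real Fetch.ai code): the user's agent only proposes the cinema if availability exists, then books directly.

```python
# Illustrative sketch: a user agent asks a cinema's agent for availability
# and books directly, bypassing aggregators. All names are hypothetical.
class CinemaAgent:
    def __init__(self, film: str, seats: int):
        self.film, self.seats = film, seats

    def check_availability(self, requested: int) -> bool:
        return self.seats >= requested

    def book(self, requested: int) -> bool:
        # Only confirm the booking if availability still holds.
        if self.check_availability(requested):
            self.seats -= requested
            return True
        return False

class UserAgent:
    def propose_and_book(self, cinema: "CinemaAgent", party_size: int) -> bool:
        # Don't even propose the cinema to the user unless seats exist.
        if not cinema.check_availability(party_size):
            return False
        return cinema.book(party_size)

cinema = CinemaAgent("Evening show", seats=3)
booked = UserAgent().propose_and_book(cinema, party_size=2)
```

The point of the sketch is the ordering: availability is checked before anything is ever suggested to the user, which is what makes the dead-end "cinema is booked" experience disappear.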
Cool. And the blockchain element makes sure that kind of this works in a composable way. So kind of
like there's a neutral platform that all of these microservice AIs can live on and communicate.
Is that correct?
That's correct.
So what you have is you have to register your agent somewhere,
and if you want it to be registered in an open system,
then you have a blockchain registration system.
And then, when somebody accesses your agent and asks a question, you need to be able to explain and audit why this happened and when it happened, so it keeps a record of it.
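The registration-plus-audit idea can be sketched as a tiny in-memory registry with an append-only log. This is purely illustrative (a real deployment would register agents on Fetch's chain; AgentRegistry and its methods are invented here): the key property is that every interaction leaves an auditable record.

```python
# Minimal sketch of an on-chain-style agent registry with an append-only
# audit log. An in-memory stand-in for the blockchain described above.
import time

class AgentRegistry:
    def __init__(self):
        self.agents = {}      # agent address -> metadata
        self.audit_log = []   # append-only record of interactions

    def register(self, address: str, metadata: dict):
        self.agents[address] = metadata

    def record_interaction(self, caller: str, callee: str, action: str):
        # Every access is logged so it can later be explained and audited.
        self.audit_log.append(
            {"ts": time.time(), "from": caller, "to": callee, "action": action}
        )

registry = AgentRegistry()
registry.register("cinema-agent", {"service": "ticket booking"})
registry.record_interaction("user-agent", "cinema-agent", "availability query")
```

On a real chain, registration and log entries would be transactions, which is what makes the record tamper-evident rather than just append-only by convention.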
Okay.
And in principle, blockchains could also give the AI economic autonomy, right? So basically not only could they say you want to watch, I don't know, Poor Things at the cinema and go to this Italian restaurant,
but they could also immediately book it for me because in principle you can endow them with money, right?
Absolutely.
And that's another reason why, because we think digital currency is going to be the future. Now, whatever the digital currency is, we don't need to take a bet on it. But we know it will be some digital currency, whether it's a government-backed one or a public one, decentralized or not decentralized. It will be a digital currency. And giving your agent the ability to transact on your behalf, that's one of the reasons, and hence the autonomous economic agent concept that we introduced.
Yeah, I see a lot of room for making our lives easier there.
So even if I just think about things like booking a holiday, right?
kind of saying, I want to be away for these two weeks because that's when my kids are off school.
I want to go somewhere warm.
That's not too densely populated.
I definitely want to get some sun.
I would like to go on a safari and I would like to spend a couple of days at a beach.
This is my budget.
I want direct flights only.
And then if you were to research this yourself, you'd spend hours online, looking at hotel reviews and seeing how everything would work together.
But in principle, all of this can be abstracted away from you
and kind of you can be given like five different proposals
of where to go based on your criteria, right?
Absolutely.
But there is an economic advantage here as well.
At the moment for you to do that, you do the work,
and then somebody in the middle says,
okay, we sit in the middle, we take 25% of it.
because, you know, if you think about booking.com or any other aggregator, travel aggregator,
that's what they would do.
And the reason is because there has to be some place where all the information sits
so that you can search it easier.
You don't have to go on Google and search for every single one independently.
But in this case, the paradigm shift that's coming is this: search is changing. Search is going to be completely different from the search we do today. And the evidence of that is coming. So your agent can actually interact, based on your preferences, directly with the supplier of the service. So that hotel can then reverse-bid for your business. So you have five options, and you can say, hey, this is my budget, who's going to give me the best price, and I will take it. And you don't have to do anything, right?
So it gets done automatically.
You just get to press the button and say,
yes, this is my choice or no, this is not my choice.
So you still have that kind of autonomy.
The agent is not taking that away from you,
but the decision making belongs to you.
But your workload has reduced.
And the economic value transfer is not sitting in the middle.
It's now going to either the supplier or the consumer.
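The reverse-bid mechanic can be sketched in a few lines. This is a hypothetical illustration (HotelAgent, collect_best_offer, and the pricing are invented, not a Fetch.ai protocol): the traveller's agent broadcasts a budget, supplier agents respond only if they can meet it, and the best bid wins with no aggregator taking a cut.

```python
# Sketch of supplier agents reverse-bidding for a traveller's business.
# All names and numbers are illustrative assumptions.
class HotelAgent:
    def __init__(self, name: str, price: float):
        self.name, self.price = name, price

    def bid(self, budget: float):
        # A supplier only bids if it can meet the stated budget.
        return (self.name, self.price) if self.price <= budget else None

def collect_best_offer(hotels, budget: float):
    # Gather all valid bids and pick the cheapest one.
    bids = [b for h in hotels if (b := h.bid(budget)) is not None]
    return min(bids, key=lambda b: b[1]) if bids else None

hotels = [HotelAgent("Safari Lodge", 950.0),
          HotelAgent("Beach Resort", 1200.0),
          HotelAgent("City Hotel", 700.0)]
best = collect_best_offer(hotels, budget=1000.0)
```

The user still makes the final yes/no decision; the agents only do the legwork of soliciting and comparing bids.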
I think I now understand what the problem space is you're trying to
solve. How does it work technically? So I understand Fetch AI is its own blockchain. Why did you
make that decision? How does it interact with other blockchains? What sets it apart from other
blockchains? What's kind of the design space here? When we started, it was only Ethereum. So
Ethereum is not suitable for this. And I'm sure anybody who knows Ethereum would appreciate that.
It's not suitable for many things. And this is definitely one of them.
So we started with the premise that we need to build our own, and we also realized that that
blockchain cannot be proof of work because it'll be too slow, it would be too costly.
So we steered towards proof of stake, but that wasn't the only reason we steered towards that.
We also were building a mechanism of useful proof of work.
So you could actually have machine learning algorithms producing results, and based on who's running
those nodes and how many kind of training sessions they're doing, you could reward them.
So that's part of the blockchain securities, people putting effort and money in.
Well, very quickly we realized that it wasn't right to initially tackle that issue. That comes after, once we have entered the space of actually proving that what we are trying to do is workable. And we were too early in the market to develop that. So over time we will be releasing our own different elements of the blockchain and the consensus mechanism.
So what was really interesting for us was to choose and make a blockchain, which is modular and is able to cope with the further developments that we will do over time once applications start coming in.
So we chose the Cosmos ecosystem, because we felt the componentization, the ability to choose your consensus mechanism, and the ability to add and subtract things was much better.
So actually, the Fetch blockchain is a Cosmos-based blockchain. And the ecosystem around Cosmos was building quite nicely. So we chose that also because of the ease of changing the consensus mechanism, for example, because we wanted to introduce our own useful proof of work. So we chose that. And at this point in time, we are still Cosmos-based, and we're now
starting to bring in the components from the multi-agent system, machine learning models,
LLMs, and trying to make that part of the consensus mechanism, which is where we started from,
which is the useful proof of work. But our focus currently is very much on agent-based systems, because initially people need to build solutions for blockchain which are not just financial solutions. Because if you look at blockchain, 95% of solutions on blockchain at the moment are just financial. They might dress it up as something
crypto or decentralized ledger technology to progress. What needs to happen is there needs to be
use cases which are non-financial, which have to be deployable on blockchain. And if we can start
deploying those solutions, the scale of this whole market base is going to dramatically change.
I mean, it will be a completely different space for what it is today.
Because if you think about the financial system, the financial system exists, but on top of
that, you have to build the industry. So the industry is missing. And because it's missing,
we need that to come in first.
Yeah, absolutely. So obviously, the, as
L1 and L2 blockchain space has changed a lot over the last five years.
So if you were to deploy today, kind of if you look at recent AI blockchain companies,
they're all on alternative L1s or L2's to Ethereum, right?
Would you also have gone down that route,
or do you see further advantages in kind of running your own blockchain?
Because it also comes with a very significant amount of overhead, right?
Yes.
For us, it's important to have the ability to control our own chain, because, as I said, some integral parts of this blockchain will come from how we run machine learning models. I'll give you an example.
So one of the products that we have is called collective learning.
And the process of collective learning is to train machine learning models by multiple stakeholders who have multiple different data points. They train them collectively, but they don't see each other's data, and they don't see each other's model. So the model weights get trained and then transmitted, and they are brought into the system, which could be used for our consensus mechanism.
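The weight-sharing step described above can be sketched as a federated-averaging-style aggregation, which is one common way such collective training is combined; this is an assumption for illustration, not Fetch's actual algorithm. Each stakeholder trains locally and shares only weights, never raw data, and contributions are weighted by how much data each trained on.

```python
# Rough sketch of combining collectively trained model weights.
# Participants share only weights, never their data. Illustrative only.
def aggregate_weights(updates):
    """updates: list of (weights, n_data_points) from each participant."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    combined = [0.0] * dim
    for weights, n in updates:
        for i, w in enumerate(weights):
            combined[i] += w * (n / total)  # data-weighted contribution
    return combined

# Two participants: one trained on 1,000 points, the other on 3,000.
updates = [([1.0, 2.0], 1000), ([3.0, 4.0], 3000)]
global_weights = aggregate_weights(updates)  # [2.5, 3.5]
```

The participant with more data pulls the combined model further toward its local weights, while neither party ever exposes its underlying data set.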
I'm just giving you a very top-level idea here. Bringing that consensus into the blockchain itself builds value in the blockchain, and combining that with stakers staking in the proof-of-stake system, as a combined consensus mechanism, is quite valuable, because you then start seeing how training machine learning models can add value to a decentralized ledger technology.
Yeah, I can see how this
proof of useful work is clearly superior to kind of like proof of more or less useless work that
we used to have before proof of stake. How do agents incorporate these elements of machine learning
or this data that they can be trained on, into themselves? Beautiful question. It's a question I would have suggested if you didn't ask it, but there you go. So what that tells me is that we are on the same page.
So if you now take an example, right?
So the example is, let's say I need a prediction.
I need a prediction of some type.
I need a prediction of footfall in a particular location.
I'm just giving an example.
This might not be valid for this, but it'll give you a rough idea.
So let's say I have 1,000 shops around the country which are training a model, a model that was created by some university student somewhere, who is taking data from the weather and correlating it with the footfall in that shop. Now, it's a very simplistic example. But let's just assume that a thousand shops are creating this model, because they're training the model with their data. Now, all we require from them is the data coming in, and them staking so that they can't cheat the system.
So the training is fair.
And let's say one shop is providing 1,000 data points, the other one is providing 5,000 data points.
But they're training the same model.
Now, the same model gets trained.
And let's say the model is now sitting on the Fetch collective learning platform.
Now, you come in, and your agent wants to tell you to go to a particular shop, because based on, let's say, the weather, it needs to tell you which shop to go to. If it's raining, the footfall is going to be busy at this shop, or whatever the prediction might be. So the agent goes to the machine learning model's agent and says, hey, this is my location, give me a prediction of whether the shop is going to be busy or not, and I will pay you one cent for that prediction. So the exchange happens.
You get your response, in the sense that you get a message saying, yes, it's going to be busy, or no, it's not going to be busy. You got the prediction you needed from the agent. The agent then pays, to actually give value back to the machine learning model. That machine learning model then passes the value back to all the people who trained it. So that's the whole ecosystem.
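The pay-per-query economics just described can be sketched as a model agent that charges a small fee and passes revenue back to its trainers, pro rata to the data each contributed. This is an illustrative assumption (ModelAgent, the fee, and the toy "prediction" are all invented, not Fetch.ai code).

```python
# Sketch of the pay-per-query flow: an agent queries a model agent for a
# prediction, pays a fee, and the fee flows back to the trainers pro rata.
class ModelAgent:
    def __init__(self, trainers: dict, fee: float = 0.01):
        self.trainers = trainers          # trainer -> data points contributed
        self.fee = fee
        self.balances = {t: 0.0 for t in trainers}

    def query(self, location: str) -> bool:
        # Hypothetical stand-in prediction; a real agent would run the
        # trained footfall model here.
        busy = hash(location) % 2 == 0
        self._distribute(self.fee)
        return busy

    def _distribute(self, revenue: float):
        # Split each fee among trainers in proportion to data contributed.
        total = sum(self.trainers.values())
        for trainer, points in self.trainers.items():
            self.balances[trainer] += revenue * points / total

model = ModelAgent({"shop_a": 1000, "shop_b": 4000})
model.query("main-street")
# shop_b contributed 80% of the data, so it earns 80% of each fee.
```

Multiply the one-cent fee by a million queries and the trained model itself becomes the revenue-generating asset the discussion goes on to describe.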
Now, the machine learning model has financial value, because you can see that if a million agents are going to query it and pay a cent each, it suddenly has value. So that value can be used as a staking mechanism rather than using physical cash. So you can start taking the cash out and start putting in these valuable models, which then actually generate revenue, because that's really a true way of securing the ledger itself. You know, just putting money in the bank is our old financial system. But you can have a combination of the two, where you have value coming from different places.
I mean, you could still put cash in, in terms of cryptocurrency, but you can also add more value because it's a revenue-generating model. So the machine learning model that sits in there, doing the useful proof of work, is actually what's building value in the ledger itself. So that ledger in itself is highly valuable if it's bringing all these machine learning models together.
That's a wonderful example. I think it really brings it home. Is there some sort of reputation attached? Because otherwise I could just train the model on made-up data.
I could even ask ChatGPT, can you make me a gigabyte of data that simulates footfall in this area? And it would look like realistic data, but obviously it would be of no use whatsoever. So do you have any way of having the person or the agent that asks for the prediction signal back whether this was actually good information or not?
That's absolutely the model that we have built in, because you have to have that level of trust. And that's why we want the people who are training to stake first. So if their predictions are incorrect, and if we realize that the data is not correct, and it's an automatic system (we've written about it, and there's a blog post about it, which we can perhaps share with your viewers as well), that's the model that we are building. And that results in slashing the stake which people put in when they train the model.
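The stake-and-slash mechanism can be sketched as follows. The shape is standard for staking systems, but the class, the 50% penalty, and the audit hook are hypothetical illustrations, not the parameters from Fetch's blog post: trainers stake before contributing, and an automated check slashes the stake of anyone whose data turns out to be bad.

```python
# Sketch of staking before training, with automatic slashing for bad data.
# The slash fraction and audit logic are illustrative assumptions.
class TrainingPool:
    SLASH_FRACTION = 0.5  # portion of stake lost on a bad contribution

    def __init__(self):
        self.stakes = {}

    def stake(self, trainer: str, amount: float):
        self.stakes[trainer] = self.stakes.get(trainer, 0.0) + amount

    def audit(self, trainer: str, data_ok: bool) -> float:
        # An automated check decides whether the contributed data was honest;
        # dishonest trainers lose part of their stake.
        if not data_ok:
            penalty = self.stakes[trainer] * self.SLASH_FRACTION
            self.stakes[trainer] -= penalty
            return penalty
        return 0.0

pool = TrainingPool()
pool.stake("shop_a", 100.0)
slashed = pool.audit("shop_a", data_ok=False)  # 50.0 slashed, 50.0 remains
```

Because the stake is at risk, fabricating a gigabyte of plausible-looking footfall data stops being free: a bad contribution costs real value.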
So the one thing that's always difficult
with blockchain-based microservices like this
is scalability and transaction speed, right?
How do you think about this
and how do you address these challenges?
It's less relevant to us,
but that answers your question which you asked before,
which is why did you not choose Ethereum?
For that reason: the speed is not right, and the way we need to deal with it is not right, in the sense that it's costly. So all of those things are why we chose to move across. Now, we have a scalability solution, but we are chain-agnostic.
So we will look for the best chain, which can do the job. And we can split the transactions
on one chain, but we can split the machine learning on the other. We can do all those things.
That's why we built it in such a componentized manner that you can actually take the best
of the world, wherever it comes from.
We're agnostic about chains, agnostic about user interface, agnostic about what machine learning model you want to build. What our core technology, which brings all of this together, provides is the multi-agent system. So it's the framework, the platform, which enables you to connect agents to all of these things and then bring them onto the Fetch chain.
I understand.
I think that will kind of make me reframe my question a little bit.
So then how do you think about interoperability?
So interoperability in our case is an easy solution.
So we have agents.
These agents can be grounded into any chain and they can observe the rules of that chain.
But when they interact with each other, they interact on the Fetch chain. So we have actually already showcased models where we have taken a Polygon-based agent which is reading the chain and doing a transaction on Ethereum, based on what happens on Polygon, or based on what happens on Polkadot. We have shown integration with the Polkadot ecosystem. We're currently about to showcase it with Solana. So we don't really care, because the agent is picking up its feed from that particular chain, and its roots are in that particular chain,
and if you trigger that agent to do something in a particular other chain,
you just need to have another agent there which communicates and transacts on your behalf.
So, for example, sending a token from chain A to chain B does not require these complex bridges.
It can be a very simple solution.
One agent sends it, the other agent interacts with the other side, holding it in escrow, agent by agent.
So you don't have to have these big bridges which get hacked.
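As a rough illustration of that bridge-free pattern, here is a hypothetical two-agent escrow transfer, with simple in-memory ledgers standing in for the two chains. None of the class, account, or chain names come from Fetch.ai's actual tooling; this is only a sketch of the idea that one agent locks funds on chain A and a counterpart releases the equivalent on chain B.

```python
class Chain:
    """Minimal in-memory ledger standing in for a real blockchain."""
    def __init__(self, name, balances):
        self.name = name
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient funds"
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class EscrowAgent:
    """Agent rooted in the source chain; locks the user's funds in escrow."""
    def __init__(self, chain, escrow_account):
        self.chain = chain
        self.escrow = escrow_account

    def lock(self, user, amount):
        self.chain.transfer(user, self.escrow, amount)
        # The receipt is relayed agent-to-agent, with no bridge contract.
        return {"from_chain": self.chain.name, "user": user, "amount": amount}

class ReleaseAgent:
    """Counterpart agent on the destination chain; acts on the lock receipt."""
    def __init__(self, chain, reserve_account):
        self.chain = chain
        self.reserve = reserve_account

    def release(self, lock_receipt, dest_user):
        self.chain.transfer(self.reserve, dest_user, lock_receipt["amount"])

chain_a = Chain("A", {"alice": 100, "escrow": 0})
chain_b = Chain("B", {"reserve": 1000, "alice_b": 0})

receipt = EscrowAgent(chain_a, "escrow").lock("alice", 40)
ReleaseAgent(chain_b, "reserve").release(receipt, "alice_b")

print(chain_a.balances["alice"], chain_b.balances["alice_b"])  # 60 40
```

A real deployment would of course need the release agent to verify the lock on chain A before releasing, which is the part each individual agent, rather than one central bridge, is responsible for.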
But then the agents could get hacked, right?
The agents could get hacked, but then each and every agent would have to be hacked.
It's like hacking each and every single wallet that does the transaction.
Okay, that's fair.
Can you share some practical applications and use cases that are already in operation or development?
We have a travel use case which is in development where you can actually do exactly what you said.
We have been coordinating with the automotive sector, because
as I'm sure you've seen our partnership announcements with the likes of Bosch, BMW, Mercedes.
We're building solutions built into the cars, which people can interact with.
People can actually transact via agents and record their transactions or interactions on the chain.
We have a DePIN integration with a company called PEEC, which I think there's an announcement about that is happening, or has happened.
We've showcased how a decentralized public infrastructure network can interact with agents
and how it can actually transact on behalf of the equipment.
So we have plenty of use cases; they're all either talked about on Twitter,
or we have them on the website or on our GitHub.
Can I also use Fetch to kind of interact with the DeFi ecosystem?
So say I have a portfolio of 50 different tokens
and I want to know how to best yield farm with them given my risk appetite.
Because basically there, all the information is inherently on-chain, right?
So that should be kind of like a super low-hanging use case.
And it gets increasingly difficult to keep track of all the relevant pools and so on.
So does Fetch also contribute in that sense to the DeFi ecosystem?
Yes.
And we released a showcase agent called the block agent, which monitors multiple blockchains
to see what transactions are happening on each chain. And based on that, it can actually transact for you.
Now, that's a very basic solution. You can make it a lot more complicated. So you can monitor a Uniswap contract,
make sure you're constantly monitoring what's happening, and based on that, you can ask your agent to do something.
Yes, that is very easy, low-hanging fruit, as you say.
It's definitely available, and we are asking our community to build these solutions,
as just us building all of these solutions is not even possible.
We want the community to think like you're thinking: I could build that solution.
And yes, absolutely, you can build that kind of solution here.
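As a sketch of what such a monitoring agent might look like, here is a hypothetical polling loop. The price feed, threshold, and trade action are all mocked stand-ins invented for illustration, not a real Uniswap or Fetch.ai interface.

```python
def monitor(feed, condition, action, ticks):
    """Poll `feed` for `ticks` readings; call `action` whenever `condition` holds."""
    fired = []
    for _ in range(ticks):
        reading = feed()          # e.g. a pool price read from a chain
        if condition(reading):    # the user's rule
            fired.append(action(reading))
    return fired

# Mocked on-chain price feed (a Uniswap-style pool price, here just a sequence).
prices = iter([1950, 2010, 1980, 2105])
feed = lambda: next(prices)
condition = lambda p: p > 2000            # act whenever price exceeds 2000
action = lambda p: f"swap triggered at {p}"

print(monitor(feed, condition, action, ticks=4))
# ['swap triggered at 2010', 'swap triggered at 2105']
```

In a production agent the feed would be an RPC read of the chain and the action a signed transaction, but the observe-decide-act loop is the same.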
Super cool.
Now there are a lot of AI companies
in the blockchain space, right?
So the likes of
Autonolas and Gensyn and
OriginTrail, and all of these companies.
Do you have a mental model of how to group them?
Or are they all one kind of thing?
No, of course not. I mean, forgetting a little bit about the crypto space
and the decentralized ledger technology space,
let's just look at what's happening in AI. So what's happening in AI is that you have
this bottom layer, the silicon layer, which is the GPUs. So in that GPU space,
you're going to get a lot of companies who are building data farms or building GPU farms,
but also building software to distribute work to those farms, right? And actually giving you
the option and the ability to deploy on those. So you have companies like Anchor, for example;
they're providing a cloud space,
and that's the decentralized cloud space.
So you can actually run whatever you want on GPUs,
and there's no restriction as such.
So that's one.
And then the second layer comes in,
and there you have the foundational LLMs.
But that's a commodity as well,
because there are going to be some big companies
who will build foundational LLMs,
and then that space will get commoditized.
So in the crypto space, perhaps, you'll see a lot of people using these open-source models
and deploying these foundational LLMs as a commodity for this space.
And that's the second layer.
So you'll be able to group some in that layer.
Then you have the one after that.
So, I mean, I can carry on with the whole stack, but I think it's quite interesting to just see it.
So then you have these specialist LLMs,
or the RAGs, and there are companies which are doing something like that.
So you have agents who can just give you some X, Y, and Z service individually.
And after that comes this application layer, where people will be putting these LLMs, Web2, and Web3 together to build that application.
That's where Fetch kind of sits.
So we have a platform where you can actually build those applications.
You can launch those applications.
And what we also have is a search and discovery layer
where you can actually find these applications.
You could go on it.
You can say, hey, I am building this application,
but I don't have this microservice.
Can somebody provide me that microservice?
Your agent goes and finds the right microservice,
brings it to you,
and then actually you can just easily connect it
without even writing a piece of code.
So that's where we sit.
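That search-and-discovery flow, where a builder's agent asks for a missing microservice and gets back the best matching provider, could look something like the sketch below. The registry, listing fields, capability names, and agent addresses are all invented stand-ins for illustration, not Fetch.ai's actual discovery protocol.

```python
from dataclasses import dataclass

@dataclass
class ServiceListing:
    agent_address: str   # hypothetical agent identifier
    capability: str      # e.g. "flight-search", "fx-quote"
    price: float         # fee per call, in some token

class Registry:
    """Toy search-and-discovery layer: agents register, builders query."""
    def __init__(self):
        self.listings = []

    def register(self, listing):
        self.listings.append(listing)

    def find(self, capability):
        """Return all listings matching a capability, cheapest first."""
        matches = [l for l in self.listings if l.capability == capability]
        return sorted(matches, key=lambda l: l.price)

registry = Registry()
registry.register(ServiceListing("agent1...", "fx-quote", 0.05))
registry.register(ServiceListing("agent2...", "flight-search", 0.10))
registry.register(ServiceListing("agent3...", "fx-quote", 0.02))

# A builder's agent looks for the missing microservice and takes the best match.
best = registry.find("fx-quote")[0]
print(best.agent_address, best.price)  # agent3... 0.02
```

The "connect it without writing code" step would then amount to wiring the returned agent address into the application's message flow.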
And then after that,
you're just going to see more and more applications
being built, which would be in the AI space but not truly doing the AI side of things themselves,
rather building applications on an AI marketplace.
So, in my head, those are the categories you see.
Yeah, I think that's a really good breakdown of kind of the potentials of the space.
So if you kind of look at Fetch's domain, how do you incentivize
developers and enthusiasts to contribute microservices to the ecosystem?
Because it's kind of like a chicken-and-egg problem that any marketplace has, right?
So no one will check out a marketplace if there's almost no offers on there.
It kind of needs already to be lively for people to actually go there and offer their services there.
How do you overcome this chicken and egg problem?
So we're building quite a lot of the small applications ourselves.
But we have a community fund.
We're giving grants for the community to build these applications.
And I think that's quite a well-proven method to kind of kick-start the ecosystem.
If you give people the incentive, financial incentive to come and build it, that's great.
But here's quite an interesting observation for us: if you look at Hugging Face, for example, right?
So Hugging Face has a lot of people building these machine learning models.
But these machine learning models sit there.
And you can see they've now started doing inference.
The problem with that is there is no way to monetize those at the moment.
So if you ask those same builders: it takes literally five to ten minutes
to convert a Hugging Face model into our marketplace.
For somebody who knows what they're doing, it will not take long.
So you now deploy that agent or micro-agent on our platform.
And we have application builders who are looking for those models. So what we are
incentivizing is the application builders to come and build, and we give grants for that. And we're
building some of them ourselves. But what is quite interesting is that there's a lot of interest.
That's something which OpenAI has done for all of us:
there are a lot of interested people in the legacy systems who want to onboard into this new
AI economy. So we are seeing a lot of traction from small businesses who don't have the full
capability, the technical ability, to come and onboard, because they don't have machine learning engineers,
they don't have AI specialists. But they want a simple interface, which we are providing,
because we built this application which can take your legacy system and, with just one person
and two days, onboard it into this new way of doing things. And then you connect these together,
and suddenly, I mean, we have seen a huge uptake of all of this.
We had something like 25,000 developers and tech pioneers join our platform, and they're building.
So I think we'll see that, yes, there is always the proverbial chicken-and-egg situation,
but we're seeing a lot of traction.
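The Hugging Face-to-marketplace conversion described above essentially amounts to wrapping an existing model callable behind an agent that charges per call. Here is a rough sketch under that assumption; the wrapper class, fee logic, and the stand-in sentiment function are all invented for illustration, not a real Fetch.ai or Hugging Face API.

```python
class ModelAgent:
    """Wraps any existing model callable as a pay-per-call microservice."""
    def __init__(self, model_fn, fee):
        self.model_fn = model_fn   # the existing model, used unchanged
        self.fee = fee             # price per inference, in some token
        self.earned = 0.0

    def handle(self, request, payment):
        """Serve one inference request if the attached payment covers the fee."""
        if payment < self.fee:
            return {"error": "payment below fee"}
        self.earned += payment
        return {"result": self.model_fn(request)}

# Stand-in for a Hugging Face sentiment model (a real one would be a pipeline).
def sentiment(text):
    return "positive" if "good" in text.lower() else "negative"

agent = ModelAgent(sentiment, fee=0.01)
print(agent.handle("This is good news", payment=0.01))  # {'result': 'positive'}
print(agent.handle("meh", payment=0.001))               # {'error': 'payment below fee'}
print(agent.earned)                                     # 0.01
```

The point of the five-to-ten-minute claim is that the model itself is untouched; only the thin agent wrapper and a fee are added before registering it on the marketplace.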
I have no doubt that very soon in this space you're going to see a lot more applications coming through.
That sounds super encouraging.
There has been a lot of talk about AI alignment, right, and AI safety.
Now kind of crossing AI with blockchain technology,
where kind of the inherent characteristic of blockchain is that you can't just turn it off.
Does that give you pause in any way?
Again, my objective has always been to build a modular system,
because this is a new space; you can't just say, here's the definitive solution,
ours is the definitive solution.
So if you keep that in mind, and I think just looking at what you just said,
which is, you know, AI safety, alignment, all of those things,
there is no one solution which is going to fit everything.
What I do think, though, is this.
You can't stop people from training machine learning models.
That would be the wrong approach.
And the governments are focusing on that at the moment, which I don't feel is the right approach.
Because you will have somebody training something somewhere.
You can't stop it.
I've seen some governments doing it.
Some are thinking about it.
Some are trying to impose regulation.
But both those things, they're quite interesting.
Some governments are trying to control it.
Some governments are asking some big corporations to try.
I think both approaches are wrong because, I mean, we have seen what happens when a big corporation like Google gets it wrong.
You saw that recent case, you know, what happened with all the pictures and the videos; bigger corporations can get it wrong too.
And if they do and it's out of control, you can't fix it that quickly.
They still haven't fixed it; the service was still not back up, the last I saw.
So that's not always the right approach.
What is the right approach is to enable inclusion,
but when you come to the deployment of this AI,
that's when you start monitoring it.
That's where you start putting policing into effect,
so that you can't just take this AI and apply it to any application.
And I think the answer is to have a platform which is, one, auditable, second, open, which enables inclusion and is not under one party's control.
Because you don't know what's happening internally under one party's control.
You can't see it.
So having a system where people can actually go and see what's happening, how the training of these models is done,
how the agents are doing transactions,
why an agent did a transaction,
what the logic behind it is, and having that whole auditability, is perhaps the right way.
I'm not saying it's the only way.
I'm just saying that's one of the right ways that I can think of.
Yeah, I think that makes a lot of sense.
And I think kind of having things out in the open
is always better than kind of not knowing about them.
I don't know whether you know this Joscha Bach person.
He also argues for encouraging everyone
to build the most advanced models that they can,
because if you try to stop people from building them,
it will invariably be the bad actors
who will still do it, and they will have much better models
compared to everyone else.
So is there any way you can see this going wrong, though?
Yeah, I think privacy is still a concern,
but I think a combination of the two,
open, but making sure the privacy is there,
is the best shot we can get. But don't forget,
I think we're still early. We do not want to
stifle innovation by creating some regulation
when we don't even know what's coming, right?
So it's too early to start
putting restrictions on things. I think
as we evolve, it's going to become
clearer what we need to do. And just like anything else, you know, when the financial sector started,
we had pink sheets, shares, you know, there was corruption, there were problems, there were wrongdoings,
but the regulators came in, the governments came in, we solved it. Yes, I understand the risk
could be higher, but then we went through the Cold War. So it's like saying, you know, we're sitting on
nuclear bombs, right? AI is a bit like that, but we managed it. I'm sure, I have faith
in humanity, I think we'll manage it.
From your mouth to God's ear, yeah. I just
fear that basically,
even if 99.999% of
humanity comes in with the
best of intentions, sometimes
a very small group of people can
still disrupt or destroy
things very significantly.
But I hope you turn out to be right.
Yeah, I think we'll always
have that risk. And I
don't disagree with you. There is always
going to be a risk. The only thing we can do is continue to improve our processes
as we go along, because giving people the ability to improve the processes is more interesting
than just trying to restrict it, which kind of results in some other problems. So one country
might restrict it, but other people might not follow the same, so then what? So, you know,
these are all the geopolitical questions.
I guess I'm less involved in those, and I don't want to be too involved.
I'll leave that to the likes of Elon Musk and OpenAI and their court case.
Yeah, so things you are involved in then.
What's next for Fetch AI?
Well, this year we are focused on bringing all the components together.
I don't know if you saw, but we announced the availability of GPUs for our ecosystem.
We did that because it's a very, you know, scarce commodity.
You can't find GPU space these days.
Everybody wants it.
So we have built our own supply.
We're deploying our own supply.
That's very important for the ecosystem to grow.
We are providing all the tools, enabling not just developers
but also individuals and small businesses to onboard into this system,
so they can actually use and benefit from this new paradigm shift that is coming.
And we're also encouraging developers to come and build unique and very interesting
solutions, to showcase them and then monetize them. We don't believe in just open source without
a monetization strategy; ultimately, people don't
update it or keep up with it,
and then it doesn't get used the same way.
So we believe monetization has to be there,
or at least a structure has to be there,
and people can then choose what to do with it.
So that's the focus this year.
We're going to bring in more developers,
more people trying to break things,
building new things,
and trying to commercialize.
That sounds like a lot happening.
So if you think about this space in five years, where do you think we're at?
In five years, we will have a lot better LLMs, a lot faster LLMs,
and we will have gotten rid of hallucinations, so we will be able to deliver deterministic solutions.
And I feel the biggest change is going to come in the search arena.
People will be searching in a very different way. They will be finding, they will be discovering,
and they will be transacting in a very different way. So I feel that change is going to come very fast, and it's
going to be quite dramatic. It's not an evolution; it's probably more a revolution,
where things completely change how we do things. Now, the challenges, for sure, are:
is the market going to take it, and how are we going to interact with it?
That, again, is going to unlock some new challenges, which I feel we're already starting to see.
We can see how fake videos are making an impact on politics, and how it's very, very difficult now to determine who is saying the right thing.
Are they saying it, or are they not even saying it?
And, you know, we're going to see a lot of industries being destroyed.
We're already seeing a sign of that, like the movie industry, Hollywood,
all that kind of industry, starting to take a little bit of a step back and seeing what they need
to do.
So I think applications are what's coming next.
And I feel in the next five years we're going to see dramatic change in new applications
coming in, how we deal and interact with these applications, user interfaces being different.
And that's what I see.
And for Fetch, I'm very optimistic that all these applications will use an agent-based infrastructure,
and people will build and deploy them a lot quicker than we are doing now. Rather than trying to fit into this old paradigm of the web,
we're going to move into this new paradigm of agents.
It's not going to be web pages, it's going to be agents.
I think those are beautiful closing words.
Humayun, thank you so much for coming on.
If people want to stay in touch with Fetch,
or be updated on the latest developments,
do you have a newsletter they should subscribe to,
or should they just follow you on Twitter? What do you recommend?
We have all of those.
So we have a newsletter you can subscribe to.
We don't just discuss Fetch;
we discuss in general what's happening,
so you can keep up to date.
Our website is always changing
because we are adding more and more information.
We do blog posts, so please come and visit the website.
We have a GitHub where you can see not just documentation, but all the code that we've released.
Please follow us on Twitter.
We have a very active social media team.
We tell people what we're doing.
We want to engage with the community.
We bring them in.
We do a lot of hackathons.
We're doing them worldwide.
So if you're a developer, we want to know you.
We want to hear from you.
We want you to build on us.
We even give grants for that.
So if you want to come and build a community proposal for a project, we are open for that.
Download our wallet and interact with us through the wallet.
We have a messaging system in the wallet.
You can find out more about that.
You can interact with any of the applications we have built, like the block agent, which is an agent that monitors blockchains.
If you want to interact with that, we have that.
We have another agent-based trading platform called Metalex.
So interact with that.
So we have a huge array of things you can interact with.
But if you want to just start, follow us on Twitter, come and see us on the website, and just drop us a message.
Perfect.
Thank you. Thank you. Thank you. Really enjoyed the questioning. It was great. Thank you.
Thank you for joining us on this week's episode. We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe
for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter,
so you get new episodes in your inbox as they're released.
If you want to interact with us,
guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show,
and we're always happy to read them.
So thanks so much, and we look forward to being back next week.