Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Humayun Sheikh & Toby Simpson: Fetch.ai – an intelligent learning blockchain network
Episode Date: May 2, 2018

We are joined by Humayun Sheikh and Toby Simpson, founders of the Fetch.ai project. Humayun Sheikh is well known as the first investor in DeepMind, one of the leading AI companies in the world. This ambitious project seeks to create a self-learning blockchain network that fosters economic activity and combinations between off-chain AI agents. The Fetch blockchain network will allow an AI agent, such as a delivery robot, to autonomously discover economic partners that would find its services and data valuable. Towards this goal, Fetch.ai claims to have found solutions to designing a useful proof-of-work system and building a scalable blockchain.

Topics covered in this episode:
- Humayun and Toby's background at DeepMind
- Toby's background in the video games industry building virtual worlds
- The vision behind the Fetch.ai project
- Solutions to useful PoW, scalability and the complexity of the project
- Timelines and what to expect from Fetch

Episode links:
- Fetch.ai white paper
- Fetch.ai video
- Documentary from DeepMind on AlphaGo
- Mike Hearn's talk on autonomous agents and Bitcoin

This episode is hosted by Brian Fabian Crain and Meher Roy. Show notes and listening options: epicenter.tv/233
Transcript
This is Epicenter, Episode 233, with guests Humayun Sheikh and Toby Simpson.
This episode of Epicenter is brought to you by ShapeShift.io, the easiest, fastest, and most secure way to swap your digital assets.
Don't run the risk of leaving your funds on a centralized exchange.
Visit Shapeshift.io to get started.
Hello and welcome to Epicenter, the show which talks about the technology projects and startups driving decentralization and the global blockchain revolution.
My name is Brian Fabian Crain.
And I'm Meher Roy. Today we have on our show Humayun Sheikh and Toby Simpson, who are CEO and CTO of fetch.AI respectively.
Fetch.AI is an ambitious project that seeks to merge machine learning and blockchain technology in order to build a collective superintelligence.
Gentlemen, welcome to the show.
Thank you.
Hi. Thank you. Nice. Great to be here.
So let's start with Humayun.
Tell us a bit about your background and how you got to be involved at this project at the intersection of machine learning and blockchain.
Sure.
My background is in computing, but I've spent the last 10 years of my life in commodity trading, building algorithms for the commodities market.
And seven years ago, I was one of the early investors in DeepMind, because we were working on the gaming side with
Demis as well. And so the team, myself and Toby, met roughly 15 years ago. And we've
been working, on and off, at the intersection of commodity trading and price prediction models
and generating algorithms to predict the commodity pricing. So that's been my background for the
last six or seven years. In terms of how we got to Fetch, what was quite interesting was
that when you look at predicting commodity prices, a very basic way to look at it would be what's
happening in different markets. But what is more interesting is if you start bringing context
and various different information bases into the price prediction model and you can start
building correlations, then you realize very quickly that the price prediction is effectively,
you know, five or six times better very easily when you start building these correlations.
So what was quite interesting then is that if you extrapolate this whole thinking on commodity
pricing and actual physical commodities, and then you start bringing in ways to form correlations
in a distributed kind of environment, the results improve considerably.
So we did some trials over the last couple of years on and off with our third co-founder,
which is Thomas Hain, who was a professor of machine learning and AI at Sheffield University.
We did some trials and we realized that to improve prediction models,
we need to have a system which brings distributed information,
to improve the correlations.
So that's really how I got involved in starting Fetch.
And just a brief interjection.
So regarding DeepMind, many have probably heard of DeepMind,
which was started a few years ago and became a very big machine learning company
and then got bought by Google for lots and lots of money.
But my own kind of exposure to DeepMind is this:
Meher recommended this documentary to me
called AlphaGo, about something that DeepMind did to build a program
that's very good at Go and got better than the best Go player.
And so there's a documentary about that which is absolutely fascinating.
So I highly recommend that.
So you were one of the early investors in DeepMind, right?
So as far as I'm aware, the company started out pretty small and then was bought
by Google for 400 million.
Could you get into the story of DeepMind and what they set out to do in the beginning
and what they ended up doing and how they discovered that path?
Well, I just want to make it clear that my investment and interest came in
because I've known Demis for roughly 15 years now.
We worked together.
He was one of our advisors on a kind of small game,
which was bringing real-life products
into virtual products, and that's exactly how I met Demis, which is 15 years ago.
What was quite interesting was that Demis had a great, well, he's a great mind of our times,
and I think he has done a great job in building the company.
His concept was that through neuroscience, which is the field of his PhD,
he could bring general intelligence to life, effectively.
and we were having lunch over in Cambridge at Browns
and when he proposed it and he said,
well, do you want to put some money in?
And I thought, well, there are three passions that I have.
One of them is machine learning and AI,
and the other one is the virtual worlds and the decentralization.
So, you know, I couldn't have missed that opportunity.
So I invested in that.
And the concept was to bring the artificial general intelligence and the best way to show the benefits of it was in the gaming area, which is where, again, Demis and his background led him to.
So that's how it all started.
But the ambition was to build something great, which is what they did.
And to be fair, I mean, you know, DeepMind is,
not just in the UK, but probably worldwide,
one of the best AI companies which have come about.
And the journey has been, obviously, very ambitious,
and Demis set the goal very high,
which, you know, although having said that, you know,
artificial general intelligence is not an easy one to crack,
but they're making some great inroads into it.
I mean, I wouldn't say that
they've cracked it, but in doing so, they've achieved quite wonderful, great
things, of which Go is an example.
Now, moving on to Toby. You've also worked at DeepMind, but you've
founded quite a few different companies over the years, so
walk us through your history.
Well, I started programming computer games in the early 90s, back in the Amiga days,
and then had the great privilege of being involved in a product called Creatures.
I was the director and producer of the Creatures series.
And what really interested me about creatures was that the guy who invented the technology for that,
believed that if you modelled all of the biological building blocks of life
and you put them all together, you might actually get digital life.
And that's precisely what he did.
And we had a genetically specified little creature that was made up of
chemical reactions, emitters, receptors and neural dynamics, and that would learn by itself how to survive
and live in the environment that it was in. And what was really exciting about that was that all of
those components were specified in a genetic code. So you could get a mummy creature and a daddy
creature and then you would have a baby creature that was made of a combination of the genetics from
both parents. And if that creature was better suited to surviving in its environment and more
likely to make it to breeding age, then it would be the one that was most likely to provide
its genetic code to subsequent generations. So effectively, what the computer was doing
was automatically fine-tuning itself to better work in the environment that it was in.
Now, I found this really exciting because it meant that the computer was finally the bit that
was doing all of the hard work. And so long as we created a rich dynamic environment
and had a big enough population of little bits in that world, we would have these virtual
animals, effectively learning how to live in that environment without the need for human intervention.
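The breeding loop Toby describes, where fitter creatures are more likely to reach breeding age and pass a mix of both parents' genes on to the next generation, is essentially a genetic algorithm. A minimal Python sketch of that loop, with an invented bitstring genome and a stand-in fitness function (nothing here is how Creatures actually encoded its biochemistry), might look like:

```python
import random

GENOME_LEN = 16

def fitness(genome):
    # Stand-in for "suited to its environment": here, just the number of 1-bits.
    return sum(genome)

def breed(mum, dad):
    # Child takes each gene from one parent at random, with a small mutation chance.
    child = [random.choice(pair) for pair in zip(mum, dad)]
    if random.random() < 0.1:
        i = random.randrange(GENOME_LEN)
        child[i] = 1 - child[i]
    return child

def generation(pop):
    # Fitter creatures are more likely to reach "breeding age" and reproduce:
    # only the top half of the population gets to breed.
    pop = sorted(pop, key=fitness, reverse=True)
    breeders = pop[: len(pop) // 2]
    return [breed(random.choice(breeders), random.choice(breeders))
            for _ in range(len(pop))]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
start = max(fitness(g) for g in population)
for _ in range(40):
    population = generation(population)
best = max(fitness(g) for g in population)
print(start, best)
```

Run over a few dozen generations, the population's best fitness drifts upward without any rule ever telling a genome what a "good" gene is, which is the bottom-up property Toby highlights.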
And I was kind of curious as to whether or not that thoroughly bottom-up development
and design philosophy could be applied to something grander.
And in particular, the creation of very large-scale virtual worlds, which has been a passion of
mine now for more than a quarter of a century, the idea that we could, yes?
Toby, may I ask a question on this creature's thing?
because that sounds really fascinating.
So I'm just curious if, you know,
you said these creatures would be able to learn on their own
and they have this DNA and adjust.
So where does the player come in here?
Like, would you make decisions that then the creature executes?
Well, you can ask the creature to do something,
but of course there was no guarantee that it would choose to listen to you.
It kind of depended on what it had learned to associate with you.
And you could interact with these creatures.
So you could give them a little tickle
and you could give them a little slap
if you thought that they were misbehaving.
So of course, in fact, if you did that quite a lot,
then eventually they'd be quite frightened
of your hand in the world
and they would learn to associate bad things
with your presence.
And that was kind of fun to watch happen.
I guess to a certain extent when human beings get involved in this,
it becomes more unnatural selection than natural selection,
but it was still quite fun. In fact,
one thing that's probably worth mentioning
is people got so
attached to these creatures that in the end we had to develop effectively a funeral kit
add-on to allow them to write a few words to remember the creatures by when they passed away.
And this came about because even back then in the mid-90s, people were setting up websites
and writing poetry and stories about the creatures that they'd had and the existence
that they led and drawing whole family trees of how they went.
And of course, looking back now, you sort of think, well goodness, if we were to do something like
that today, we'd get to collect the complete family tree of all of these things. But back then,
that wasn't technically practical. Those that were on the internet tended to be on modems.
But yeah, so creatures effectively was a general purpose problem solver. We didn't have any rules in that
system. We didn't specify to the creature how to eat. We just gave it an instinct to pick things up
and stick it in its mouth. And if the first 10 things that it ate turned out to be rocks,
then eventually it would learn that sticking things in its mouth was a bad idea,
and that's not necessarily a good thing for the creature.
And the idea that we could apply that bottom-up philosophy to creating virtual worlds
was extremely fascinating.
And those worlds, well, one of the things that was always my favourite was my world in a box,
where you started with one agent, then you had two, four, eight, sixteen,
and before you knew it, you had a world that consisted of trees, animals, plants.
But the wonderful thing is, all of those things were real.
They weren't painted on, a bit like the Simpsons episode where the fire exit was painted on the wall.
And they asked whether they could have a real one in the future.
These things were all real.
So if you reversed a truck into a tree and knocked it over, you could potentially build a bridge out of it or a log cabin out of it.
And we didn't need to know about bridges or log cabins or program any rules in advance for that to be possible.
And this is great in virtual worlds because you can't predict what one person is going to do in a world, let alone tens of thousands.
And the idea that you can do anything that you perceive as being practical is really interesting,
particularly when the complexity, the really fun stuff, the grey areas that are the difference
between believable and not believable, is entirely an emergent property.
And this is effectively the field of artificial life where you have a very large population of
simple things that combine to produce more complicated behaviour.
And Humayun and I had often talked about the idea that potentially it might be possible
one day to build one of these worlds that was grand enough and big enough and could have a large
enough population of objects that we could actually do useful economic work in it.
And we sort of toyed with all these various ideas as to how that might work.
But the technologies that I've been using tended to involve a great number of servers acting
as a client server type technology.
And then of course we bump into decentralized ledger technologies and it's just like click.
Well, this is it, isn't it?
Now suddenly we can construct a world of extraordinary
proportions and we can fill it with an amazing population, a population of things that represent
humans, that represent hardware, that represent data or sensors or services, and they can all interact
on this network, decentralized across the entire globe. And we aren't restricted in the way that you
are with the real world, where if we're all looking at the same table, we're all seeing the same
table. In a digital world, there's no such restriction. Each individual could have that world
tailored specifically for them.
And that gets really exciting because, of course, you can ensure that the things that you
want or the things that you might want are the things that you see.
Okay.
So when you were talking about creatures, it felt a bit like CryptoKitties.
Have you followed that project?
Which one, sorry?
Crypto Kitties.
Oh, CryptoKitties.
I'll tell you what, I'll bet the creators of Ethereum never saw that one coming.
Yeah.
I think it's a wonderful thing.
And it's a great idea.
And it introduced a whole generation of people who suddenly wanted something that was unique to them, to this whole crypto space.
But yeah, that was a very cool thing.
And the idea that potentially all of these little entities that work together could be more than just visual representations,
but could have personalities and existences and memories and behaviours,
and could live on a digital space that was optimized just for them,
is another layer of excitement on top of that, I think.
But it's a great space.
I mean, don't you find it's just like, for me,
I sort of see it as a bit like the internet in the mid-90s.
Everybody knew that it was going to fundamentally change the way that we live our lives.
We're all experimenting with all these different ideas,
but nobody was quite sure how.
I mean, we didn't see social networking coming back in the mid-90s,
but look where we are today.
And I look at the decentralized ledger technology,
the blockchain space, the crypto space,
And it does feel that level of excitement that we all believe that the space that we're in is extraordinary.
And it's going to change the way that we live our lives.
And we're all trying out these amazing new ideas and exploring this space to see how that might work.
One thing that came to my mind when you talk about these virtual worlds is something like Second Life, right,
where you also had this whole world where at some point, you know, people would sell digital real estate or build businesses to create these virtual
things and sell them. And for a short time, it seemed like it could become a big thing. And then, of course,
it sort of disappeared. Is that also something that inspires you or that you think about when it
comes to the design of what you guys are working on now? Well, that kind of thing. I know we were
building our artificial life-inspired virtual worlds at the same time that Second Life was out there.
And I always take this approach of digital Lego that if you provide the low-level building blocks,
then physics takes care of your syntax errors.
So if you give people a pile of bricks
and they build an upside down pyramid,
sooner or later physics is going to knock it over.
If you provide people with a whole load of aeroplane parts
and they stick 100 engines on both wings,
then the physics of the environment is going to ensure
that that thing isn't going to leave the ground.
And even if it could, the fuel consumption would be outrageous.
So that ability to ensure that you don't have to worry about consistency
errors in a world is extremely important.
And you do need that scalability.
For me anyway, I always thought,
well, it's not a true virtual space
if I existed in a shard with 20 people.
I want to exist in a space with everybody.
I want to be able to look out on these worlds
and be able to imagine that in front of me,
there are millions of people in that space,
all doing things that they choose to do.
But these are all pioneers, you know,
Second Life and Creatures
and all these other virtual world
technologies, in ways of presenting these environments that human beings can go in and, well,
for want of a better phrase, live a second life. In the case of Fetch, of course, we're not
restricted in the sense that we're not building a world specifically for human beings. We're
building a world that supports the machine-to-machine economy as well. And that actually a great
number of the entities that are living in that world and working in that world and getting stuff
done are actually digital entities that are responsible for themselves going out there, trying to
work out how to get what they need and deliver what they've got.
Let's segue into what Fetch is.
So I get the sense that you're building another virtual world
and you want to use blockchains in some way,
and sort of the inhabitants of this virtual world are...
who exactly are the inhabitants of this Fetch system?
What are sort of the end network points?
Well, we see Fetch as a decentralized digital world
in which useful economic activity can take place.
Now, that activity is performed by digital entities
that we call autonomous economic agents.
Now, these autonomous economic agents can act on your behalf.
They can act on their own behalf.
They can represent data or services
or any other number of things.
and we provide them effectively eyes, ears and touch into a digital world that is tailored
specifically for them.
So these agents connect to the fetch world through something that we call the open economic
framework which provides them with their visuals on the world and allows that world to
be tailored specifically to them.
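As a rough mental model of that "tailored view" idea, the sketch below shows agents registering what they offer and want, with the framework computing a per-agent view of relevant counterparties. All class names, fields and the matching rule are invented for illustration; this is not Fetch's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    offers: set = field(default_factory=set)   # services/data the agent provides
    wants: set = field(default_factory=set)    # services/data the agent seeks

class OpenEconomicFramework:
    """Toy matchmaker: each agent 'sees' only counterparties relevant to it."""
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def view_for(self, agent):
        # Tailored world: only agents offering something this agent wants.
        return [other.name for other in self.agents
                if other is not agent and other.offers & agent.wants]

oef = OpenEconomicFramework()
drone = Agent("drone", offers={"roof-photos"}, wants={"weather-data"})
sensor = Agent("weather-station", offers={"weather-data"})
surveyor = Agent("surveyor", wants={"roof-photos"})
for a in (drone, sensor, surveyor):
    oef.register(a)
print(oef.view_for(drone))     # the drone only sees the weather station
print(oef.view_for(surveyor))  # the surveyor only sees the drone
```

The point of the toy is the asymmetry: each agent gets a different "world", filtered down to the value providers it might actually want to meet.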
I liken it a lot to the ultimate dating agency for value providers.
We ensure that when you connect, what you see is
precisely what you need to see, or what perhaps you'd like to see, depending on what it is
that you're doing. Underpinning all of that and making sure that we can run this in a decentralized
global way and achieve the scalability that we want is our smart ledger. And the smart ledger
is doing a number of things. It's ensuring that we have integrity on the global state.
It's, and that includes, of course, all of the transactions, the interactions that are taking
place between those agents, but also it's providing the ability for the network to learn how
and under what conditions agents interact with each other so that it is better placed to ensure
that the ones that are most likely to want to interact with each other are placed together.
We also have a whole layer that is provided by that, which is the trust layer, which lets you
look at any transaction as it goes by and establish how likely it is to make it to the global
state, which means that when you're looking at agents that
are involved in potentially hundreds or thousands of transactions very quickly in order to
construct a solution for someone, you can look at all of these transactions going by and you can
make a call very, very quickly in a matter of almost no time at all as to whether or not
those transactions are going to make it through.
It's a big vision, right?
And at some levels,
I've been trying to understand this vision.
I had a call with Toby and Humayun prior.
I read the white paper.
I've been, like,
thinking about this vision for quite a while.
And it's a vision in which, like,
there are, like, lots of components and, like,
you have to think about all of them
and then how they connect in order to get to a level of detail
that one is comfortable with.
So maybe we could start with,
so you have this idea of these,
autonomous economic agents, right? So these agents could basically be machines, right? They could be
like devices, right? You could think of, I don't know, a camera as an autonomous economic agent.
A camera is a sensor. You could think of a delivery
robot. You could think of a car. You could think of a human.
These are all off-chain.
They have some kind of processing logic of their own,
and they can sense the external world.
And you're building the virtual world for these entities, right?
So what kind of objects do you think are going to populate your virtual world?
Is it meant to be like Internet of Things devices?
There's a number of things going on there, I guess.
One of it is that a lot of the problems that we solve in our day-to-day lives
are involving an increasingly large number of moving parts,
and it's becoming very difficult for us to manage them.
Transport is one of the many areas,
where the number of bits and pieces that you have to juggle,
it does feel like pushing hot water uphill sometimes,
trying to organise all of these moving parts
in a way that makes sense to the individual.
And actually, a lot of these problems
that are highly complex involving so many things
are actually best not solved from the top down,
because there's only so many things an individual central controller can hold in their mind in order to be able to manage all of this.
And one of the things that we thought was something great about Fetch was that actually these problems could solve themselves in the bottom up.
And again, this comes back to my creature's background and the idea that a large population of things can work together out of which a complex solution can emerge.
And Fetch, as you say, Fetch is enabling
the agents rather than actually providing them, by giving them a world that
tailors itself to them, to ensure that they're able to do what it is that they want to do.
With the minimum of friction involved, it really is about clearing all the junk away
from between Party A and Party B so that they can get on and do what they need to do.
And actually, agents, autonomous economic agents, whilst they might represent their data or
services or sensors or people. Quite often they actually operate as populations as well.
I mean, if you think about it as you walk around with your phone in your pocket, there's an
extraordinary amount of sensor information there. And why shouldn't that sensor information actually
be represented by an AEA and actually out there on the fetch network attempting to generate
value from the data that it has quietly in your pocket, without you being aware of it, whilst
you're strolling along and going about your daily lives? So actually, in a lot
of cases, we're looking at populations of agents that exist together.
Flight hardware or a car is another example where if you took a drone, you've got the actual
bit that flies around, which is an agent, but also the sensors on there might be another
agent and the camera that's on there might be yet another agent.
And it may well be that the camera agent can convince the drone to take a small diversion
in order for it to take roof survey pictures of somebody's house en route.
So together, they're working and communicating with each other
independently of human intervention, to allow better utilization of those things, because that's one of the things that's most extraordinary about our lives right now.
It's not necessarily the data that we do use. It's the data that we don't use, either because we don't know it's there or the cost to deliver it exceeds its actual value.
And when you can come up with an endless population of digital representatives to represent that data, then the cost to actually put something like that into the Fetch network, and for it
to actually generate value,
goes right down.
And under those circumstances, it benefits everybody
because suddenly all that wasted information comes into play,
whereas previously it would have just gone.
Some people in the audience may remember this talk.
There was a talk by Mike Hearn, from maybe 2011.
I don't know if you guys have seen this,
but he was back then the first person to talk about,
you know, machine to machine payment on a blockchain
and these micropayments.
So that was back in the context
of Bitcoin and you know how Bitcoin could be great with like payment channels and you can have like,
you know, a car driving on the road, paying another car to pass them and, you know, that kind of vision.
Now, Bitcoin doesn't seem like it's the right blockchain for that, although, you know, who knows,
maybe with Lightning Network and other things on top, some point it can get to that.
But at least conceptually, I understand that vision, right?
Because you say, okay, you have all these individual entities, they need to have some way of
transacting. I mean, they can, of course, have some sort of wallet in their device, and then they
can send transaction, and then we have economic fabric where then all of this activity and,
in a way, applications can emerge and utility can emerge. But, you know, from a sort of blockchain
perspective, it's just a user, right? Is it a person? Is it a car? You don't care, right? It's not part
of the core protocol. But it feels to me what you guys are talking about here is something
quite different where this, in a way, this agent isn't just a user, but it's somehow
it's intelligent, decision making is also part of this protocol. Do I understand this correctly?
Yes, yes, you do. This decision-making process is actually really important, in that to a certain
extent we're putting geography onto the network. A lot of the work that goes on between people
or entities is related to what's near you, what's around you, and what's in a particular direction.
And being able to structure a network to put that dimension onto it is something that is key
to providing this highly tailored digital world to the users of Fetch.
I'll probably just add a little bit to that in terms of if you compared it, you know, the transactional system, let's say the blockchains and everything.
And as you mentioned correctly, Brian, we're talking about more than that.
What you have to work backwards from this is, although we have a transactional system, we also need to set a framework for the economy.
There has to be rules, the economic rules, which these agents work under and obviously learn from.
All these rules are dynamic and they could be changing and they could be evolving as well, just like humans did.
But there is a framework which governs the interactions, the economic value exchanges.
It's not just a transactional system we're building.
We're building the second layer up as well.
Okay, great.
So maybe we can narrow in on that.
Because when we speak about the economic context, of course, you know, that's right.
That's a core part of any blockchain.
Right?
In Bitcoin, you could say, okay, block size is an economic parameter.
The block reward.
Another kind of economic framework is, you know, who gets to decide the transaction fees.
And of course, that is really the miners by choosing what transactions would they put in a block,
you know, the block time.
You know, there's all of those things that in the end and give rise to a particular
economic dynamics, you know, and those looked one way when blocks were not full. And then once they
were full, they started looking another way. So can you speak a little bit about what are those core,
you know, economic parameters in Fetch.AI, and how do you think they will drive
the, you know, sort of the behavior of agents on this system? Sure. So if you take the economic
framework, that's also split in two sections. One section is more the ledger section,
which is what you're talking about in terms of the block sizes, the rewards for nodes, miners.
That's more a transactional layer. But if you then move above that, that is only for the transactional
settlements. That's the settlement layer. You move above that, then you have, I mean, just to give
an example, you know, you have marketplaces. Now, that's a separate economic layer, because in the
marketplaces, you have marketplace dynamics, you have price discovery, you have product discovery,
you have negotiations. At the moment, whatever you, all the projects you look in the blockchain,
they mainly worry about the economics of the ledger. We are more focused, you know, I guess it's
a balance, but we're more focusing on the layer which is above that, which actually defines
the marketplaces itself, which defines, which gives you an opportunity to do price discovery.
But don't forget, this is all happening in a very high-speed autonomous system.
So it needs to be able to adapt.
It needs to have its own principles.
It also needs to learn from those principles and evolve into a better system.
So these are the two.
There is a split.
Even in the economic framework, there is a split.
And when I talk about the open economic framework, we are talking about the marketplaces.
We're talking about how the economic exchange between the agents happen.
We're not talking about the economics of how they settle those economic values.
Settlement of those economic values is what the blockchain and the technologies like that do.
We are also talking about something above that.
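The two-layer split Humayun describes, with a marketplace layer doing price discovery and matching above a settlement layer that only records agreed value transfers, can be caricatured in a few lines. Everything here (the class names, the midpoint pricing rule) is a made-up illustration of the layering, not Fetch's design:

```python
class SettlementLedger:
    """Bottom layer: only records agreed value transfers."""
    def __init__(self):
        self.transactions = []

    def settle(self, buyer, seller, price):
        self.transactions.append((buyer, seller, price))

class Marketplace:
    """Upper layer: price discovery and matching happen here, not on the ledger."""
    def __init__(self, ledger):
        self.ledger = ledger
        self.asks = []  # (seller, min_price)
        self.bids = []  # (buyer, max_price)

    def offer(self, seller, min_price):
        self.asks.append((seller, min_price))

    def bid(self, buyer, max_price):
        self.bids.append((buyer, max_price))

    def match(self):
        # Naive price discovery: pair any bid that meets any ask,
        # settle at the midpoint, and hand only the outcome to the ledger.
        for buyer, max_price in list(self.bids):
            for seller, min_price in list(self.asks):
                if max_price >= min_price:
                    self.ledger.settle(buyer, seller, (max_price + min_price) / 2)
                    self.bids.remove((buyer, max_price))
                    self.asks.remove((seller, min_price))
                    break

ledger = SettlementLedger()
market = Marketplace(ledger)
market.offer("weather-station", min_price=4)
market.bid("drone", max_price=6)
market.match()
print(ledger.transactions)  # [('drone', 'weather-station', 5.0)]
```

The design point the toy captures is the one Humayun makes: negotiation and discovery live in the marketplace layer, and the ledger below it sees only the settled result.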
Okay, great. So maybe I should have used Ethereum before as an example, because I think that would have been more analogous, right? So Ethereum, of course, again, we have block size, gas fees and similar dynamics there, you know, somewhat different parameters. And then if you talk about a marketplace, then I think what that would be in the sort of an Ethereum analogy would be, okay, well, anybody can write some sort of smart contract on it. You know, you could have an auction contract or other contracts that then allow
emergent behavior or different types of economic interactions.
You just have a basically a fundamental platform.
People can build applications on top.
To the extent that there's learning, it's just, well, person one can write a particular
smart contract and maybe people don't use that and they're going to go to another one
that's better.
And then over time, there was going to be evolution and innovation and all of that.
But here you guys are saying that you are directly designing also those applications that in the case of an Ethereum would be, you know, user deployed smart contracts.
Am I getting this right?
Yes.
So let me take you through the whole.
So why are we different to Ethereum and yet comparable to Ethereum?
You spotted it absolutely correctly.
So you're looking at the smart contracts,
which is the framework for value exchange on top of the settlement layer.
So if you look at the smart contracts,
well, they're not really that smart,
because you have to put the smartness into it.
So you have to come from the outside to put the smartness into it.
Where we are different is that we are adding intelligence into both the layers. We're not stopping you from bringing machine learning and AI from the agent's side; you can actually build as much intelligence as you like into the agent. But what the agent inherently needs is some intelligence on, okay, what kind of transactions do I want to do? Now, that information can't come from outside when your ledger is doing and recording the transactions. So the ledger provides a toolkit to the economic parties who are transacting on the OEF. It provides them a kind of prediction model which says, okay, well, these are the kind of people you want to be transacting with, these are the people who could be interested in your product. Then comes the price discovery as well. So the ledger provides
the trust element, which is, okay, what is the likelihood of this transaction going ahead?
Because if you were thinking about millions of agents and an agent comes into the market,
how do you do search and discovery?
How do you do price discovery?
So inherently, the ledger and the economic framework have to provide a search and discovery facility
because you can't bring that from outside.
You can make a decision on what you are looking to choose, but you still need a search.
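What Humayun is describing — an economic framework that itself provides search, discovery, and a prediction of likely counterparties — can be sketched in miniature. Everything below (class names, the tag-based matching, the trust-weighted ranking heuristic) is an illustrative assumption, not Fetch.ai's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    offers: set = field(default_factory=set)  # tags for services/data sold
    wants: set = field(default_factory=set)   # tags for services/data sought
    trust: float = 0.5  # network-maintained likelihood a deal completes

class OpenEconomicFramework:
    """Toy registry: agents register, and the framework searches,
    discovers, and ranks likely counterparties for them."""

    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def discover(self, buyer, top_n=3):
        # Search: agents offering something the buyer wants.
        candidates = [a for a in self.agents
                      if a is not buyer and a.offers & buyer.wants]
        # Rank by overlap weighted by the trust estimate, a stand-in
        # for the prediction model described above.
        return sorted(candidates,
                      key=lambda a: len(a.offers & buyer.wants) * a.trust,
                      reverse=True)[:top_n]

oef = OpenEconomicFramework()
oef.register(Agent("parking-sensor", offers={"parking-data"}, trust=0.9))
oef.register(Agent("weather-node", offers={"weather-data"}, trust=0.7))
car = Agent("car", wants={"parking-data"})
oef.register(car)
matches = oef.discover(car)
print([a.name for a in matches])  # ['parking-sensor']
```

The point of the sketch is only that matching and ranking live inside the framework rather than in any one agent.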
No, I don't think I agree with that, right?
Because let's say on Ethereum, somebody can build a decentralized exchange,
a DEX, and then people build a relayer on top, and then they aggregate all those different,
you know, offers and trades.
And then, of course, I can build sort of an agent, right, that has my
smartness and logic, and I'm going to, you know, run a node, get all the
transactions, maybe ping different services, and then do some
decision-making, right, and bring that intelligence into it. So I don't understand what it means
for intelligence to be part of the ledger. You're absolutely right. You can make that decision,
but some framework has to provide you the tools to make that decision. You need
that information to make that decision. Yes. And actually,
fundamentally, it's about the difference between stuff being around the ledger and actually on it
and inside the network, as opposed to around the periphery of it.
And one of the very cool things I made a note to mention about the prediction model
is that, for any given prediction, we know how much value was exchanged on the network as a result of it.
This is an evolutionary thing, because what we can actually do is reinforce those connections that are working and negatively reinforce the ones that are not.
So that means that those drop off and new ones can form speculatively.
So all sorts of interesting new ideas as to insights that could be delivered to the network are created all the time and on a constant basis.
And all of that really is about ensuring that the users on the outside of the network get a bang-up-to-date dynamic impression of what it is that works for them
and what it is that does not work for them.
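The reinforcement loop Toby describes — strengthen the predicted pairings that produced value, decay the ones that did not, and let dead connections drop off so new speculative ones can form — can be sketched as a toy model. The weights, decay factor, and drop threshold below are illustrative assumptions, not Fetch.ai's actual parameters:

```python
class PredictionGraph:
    """Toy model of the reinforcement loop: each edge is a predicted
    pairing of agents, weighted by how much value it has produced."""

    def __init__(self, drop_below=0.05):
        self.weights = {}  # (seller, buyer) -> usefulness weight
        self.drop_below = drop_below

    def feedback(self, edge, value_exchanged):
        # Positive reinforcement when the prediction led to real value
        # being exchanged; multiplicative decay when it led to none.
        w = self.weights.get(edge, 0.5)
        if value_exchanged > 0:
            w = min(1.0, w + 0.1 * value_exchanged)
        else:
            w *= 0.8
        self.weights[edge] = w

    def prune(self):
        # Connections that stopped working drop off, freeing room
        # for new speculative connections to form later.
        self.weights = {e: w for e, w in self.weights.items()
                        if w >= self.drop_below}

g = PredictionGraph()
g.feedback(("parking-sensor", "car"), value_exchanged=2.0)    # worked: reinforce
for _ in range(20):
    g.feedback(("weather-node", "car"), value_exchanged=0.0)  # never works: decays
g.prune()
print(g.weights)  # only the productive connection survives
```

This is essentially a bandit-style exploit/forget cycle; the interesting part in the discussion is that the reward signal is measured value exchanged on the network itself.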
This episode is brought to you by ShapeShift,
the world's leading trustless digital asset exchange,
quickly swap between dozens of leading cryptocurrencies,
including Bitcoin, Ether, Zcash, Gnosis, Monero, Golem, Auger,
and so many more.
When you go to Shapeshift.io,
you simply select your currency pair,
give them your receiving address,
send the coins, and boom.
ShapeShift is not your traditional cryptocurrency exchange.
You don't need to create an account.
You don't need to give them your personal information
and they don't hold your coins.
So you are never at risk from a hacker or other malicious actor.
ShapeShift has competitive rates
and has even integrated into some of your favorite wallet apps like Jaxx.
So you can swap your digital assets directly within your wallet
just as easily as putting on your slippers.
Whenever you see that good looking fox,
you know that's where Shapeshift is.
So to get started, visit Shapeshift.io and start trading.
and we'd like to thank ShapeShift for their support of Epicenter.
So you mentioned that this system, it can assess the probability of a transaction going into the final state.
Now, you know, we've done, of course, countless episodes about different blockchains and consensus protocols.
I haven't actually heard of any consensus mechanism that does something like that.
So have you guys invented like an entirely new consensus algorithm or how does that work?
Yes.
Well, we've had to.
We started from the perspective of, well, goodness, you know, if we had 10 million or 100 million agents and they were all working together to produce solutions to problems, then we realized very quickly that we were going to end up with a whole new level of requirement for performance and scalability.
But also, we couldn't afford to lose certain information about individual transactions,
because a lot of the learning that our machine learning scientists were proposing on all these systems
relied on having individual transaction information going through the network and not compressing that up or losing it in any way.
And that required us to take a different approach to how we did this. Because, particularly
when it comes to organising these transactions,
it's very beneficial to have a blockchain-type structure
where you get the well-defined ordering of all of the events.
But we needed a different structure to perform the consensus mechanism.
We've chosen a structure that's a little bit similar
to a directed acyclic graph, because we don't need the strict ordering
on the proof-of-work tasks,
and we can scale that to record any number of results that we want.
And we sort of look at it as the blockchain type structure,
which we've got scaling through multiple transaction lanes
so that transactions that don't affect each other
can be executed in parallel.
That forms effectively the knowledge
and the proof of work system and the storage of results there
performs or generates the understanding of that knowledge.
And these two things together
give us the ability to create the predictions that we need, but also to create the trust
predictions so that you can look at any given transaction or any given node and see how likely
anything involving them is to be in your benefit and working the way you expect it to,
but also to get the performance that we require from a transaction capability, because we need
that to be able to scale and scale and scale as the number of agents in the system increases.
So that did require a different approach.
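The "transaction lanes" idea described above — transactions touching disjoint sets of resources can be executed in parallel, while those sharing a resource must be ordered together — is, at its core, a partitioning of transactions into connected components. A minimal sketch of that general technique (Fetch.ai's actual lane-assignment scheme is not specified at this level of detail; the union-find grouping below is an illustrative stand-in):

```python
def assign_lanes(transactions):
    """Group transactions into lanes via union-find on the resources
    they touch. transactions: list of (tx_id, set_of_resources).
    Transactions sharing any resource land in the same lane and must be
    ordered relative to each other; disjoint lanes can run in parallel."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Merge all resources touched by one transaction into one component.
    for _, resources in transactions:
        rs = list(resources)
        for r in rs[1:]:
            union(rs[0], r)

    # A lane is all transactions whose resources share a component.
    lanes = {}
    for tx_id, resources in transactions:
        root = find(next(iter(resources)))
        lanes.setdefault(root, []).append(tx_id)
    return list(lanes.values())

txs = [("t1", {"alice", "bob"}),   # t1 and t2 share bob: same lane
       ("t2", {"bob", "carol"}),
       ("t3", {"dave", "erin"})]   # disjoint: its own lane
print(assign_lanes(txs))
```

Within a lane, ordering still matters; across lanes, execution can proceed in parallel, which is where the scaling claim comes from.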
There's also the whole idea that we wanted to be able to use the computing power that was on that network to do useful work.
Be that actually part of the consensus mechanism of the network itself, the construction of intelligence information about the network itself, but also potentially general purpose computation that users of the network might require.
And we've looked at a number of things that are intensely parallelizable, if parallelizable is an actual word.
Particularly in biotechnology and research into diseases and genetic conditions,
you end up with things that can be divided up into lots of little chunks and potentially can be packaged up and executed by the whole network for the benefit of the people who want that.
And in cases like that, sometimes the information that is there also provides an interesting new understanding and,
insights that can be explored by the network to better connect people in the future.
So I just wanted to jump in on that point you said, you know, useful proof of work.
I saw that in the white paper as well.
And, you know, you mentioned the idea of some sort of biological computation stuff.
So, I mean, of course, the idea of useful proof of work has been around for a while.
Primecoin was the first useful proof of work.
Yeah, Primecoin's 2011, I think.
Yeah, exactly.
So I would say that approach so far has basically failed.
Because of course, the great benefit of something like Bitcoin is that you can very easily verify
the work, but it's hard to do.
Yes, absolutely.
Most other things, you don't have that differential, right?
So maybe it takes as much or a lot of time to verify it as to produce the work.
And then that defeats the entire purpose.
So how are you guys able to do a useful proof of work?
Yeah.
And of course, that is, as you quite rightly point out,
the idea that it takes a long time to do,
but it's trivially easy to verify is one of the key points
that allows such things to work.
And we've cracked it.
And we've come up with a mechanism whereby, for the computational packages
that are put through the system, we can create a verification method,
but also one where none of the work done by the people who weren't involved in eventually choosing the block is actually wasted.
And that's one of the things that we thought was very important about, useful proof of work,
that everybody who does some processing, no matter how capable their machine was,
is actually generating results for the network and is actually receiving some form of reward for that.
And we've got a mechanism by which the people completing that proof of work effectively get to vote on who it is that makes the decision as to which block next goes into the global state.
So we're very proud of the results.
We've created our own virtual machine specifically for this purpose that is biased towards solving machine learning tasks,
but is also capable of general purpose computing
and a mechanism for packaging up those computational tasks
so that it is possible to verify that the work
that was meant to be taking place has in fact taken place.
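For reference, the asymmetry Brian raised — expensive to produce, trivial to verify — is the defining property of hashcash-style proof of work. This sketch illustrates that classic property only; it is not Fetch.ai's useful-PoW construction, which is not specified here:

```python
import hashlib

DIFFICULTY = 4  # leading hex zeros required; each extra zero is 16x more work

def do_work(payload: bytes) -> int:
    """Expensive: roughly 16**DIFFICULTY hash attempts on average."""
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int) -> bool:
    """Cheap: a single hash, no matter how long the work took."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = do_work(b"block-candidate")   # many hashes to find
print(verify(b"block-candidate", nonce))  # True, after one hash to check
```

The challenge for useful proof of work, as discussed, is keeping this produce/verify gap while making the expensive step compute something worth having.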
So let's sort of summarize.
So the way I understand it is
ultimately the customer of your system is an agent.
So imagine like the agent is me,
I'm a delivery robot of some kind, right?
And I have my own internal logic, like I can make autonomous economic decisions, like parties
to transact with, tasks to take, etc.
Basically, an agent like me can use the Fetch system to, A, get work, like, okay, deliver
XYZ here and you'll get this much money.
So I will get like work proposals.
But I might also get work proposals that I might not be expecting.
So if I'm a delivery robot, normally I'm expecting delivery work.
But suddenly it might be the case that I get a request.
So I have a camera and when I'm doing my work of delivery,
I observe that there are some parking spots that are empty.
And then I get a request for that data,
and I supply that data,
that this particular parking spot is empty,
which is valuable to a different agent
that might want to park there in five minutes.
And so the ledger and the Fetch system recommend me
to give this data
and do a transaction on, like, parking lot data.
And so the way I imagine the Fetch system
is that it is recommending me these economic transactions,
these pairings with other agents that I could do.
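Meher's delivery-robot picture — an agent with its own logic, receiving both expected and unexpected proposals from the network and deciding which to act on — might look like this in toy form. All names, fields, and the profit-margin rule are illustrative assumptions, not any actual Fetch.ai agent API:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    task: str      # e.g. "deliver-parcel" or "sell-parking-observation"
    reward: float  # payment offered by the network recommendation
    cost: float    # the agent's own estimate of effort/risk

class DeliveryRobot:
    """Autonomous agent: the network recommends; the agent decides."""

    def __init__(self, capabilities):
        self.capabilities = set(capabilities)

    def evaluate(self, proposal: Proposal) -> bool:
        # Accept anything we are capable of that clears a profit margin,
        # including "unexpected" side deals like selling sensor data.
        return (proposal.task in self.capabilities
                and proposal.reward > 1.2 * proposal.cost)

robot = DeliveryRobot({"deliver-parcel", "sell-parking-observation"})
proposals = [
    Proposal("deliver-parcel", reward=10.0, cost=6.0),            # core business
    Proposal("sell-parking-observation", reward=0.5, cost=0.01),  # surprise side deal
    Proposal("stream-video", reward=50.0, cost=1.0),              # not capable
]
accepted = [p.task for p in proposals if robot.evaluate(p)]
print(accepted)  # ['deliver-parcel', 'sell-parking-observation']
```

The key split, matching the summary above: discovery and recommendation come from the network, while acceptance stays with the agent's own logic.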
But in order to build this system,
what you're doing on the technical end is,
A, you are building a blockchain, a ledger.
B, you are building the marketplaces and the logic
by which these agents can match and they can transact.
C, you are building like a useful proof of work for the base ledger itself.
And D, you are building a new kind of scalability technology
for the base ledger itself.
Don't you think that's just too much?
Like, for example, if you had like useful proof of work,
why don't you use your useful proof of work
and just build a standard cryptocurrency?
That will itself be such a big invention.
It's often been said that if we achieve even a small proportion of some of these things...
But actually, we set out to build this digital world,
we set out to build this extraordinarily large decentralized world
where huge populations of things could do useful stuff.
And yes, it's a convergence of a great number of very, very exciting technologies.
And actually, you touched on something with the robot there, saying,
well, you get some job to do that you hadn't thought of.
And we love that idea, the idea that some of the opportunities that are presented are non-obvious.
And this is one of the things that human beings are so bad at, which is predicting the future.
We tend to extrapolate from the position we're in now and imagine that, well, in the future everything will just be faster, bigger and better, but essentially the same stuff.
And we're trying to build fetch, or in fact, we are building fetch, to be an environment where it can adapt to all of those things.
It can look at novel new combinations of markets, and it can adapt to present opportunities that were never understood before.
We're building this prediction model, but also a mechanism for delivering those predictions,
and delivering them effectively to the users of the network who have specifically said they want them,
but also the ones who might want them.
And it's that might factor that's really interesting because there's all sorts of intersections of different marketplaces that we haven't even considered at this point.
And these are the kind of things that Fetch will learn and figure out and start delivering to users just as a matter of people performing work on that network.
I'll just add a little bit to it.
I think you asked a very pertinent question.
Why are we trying to do too much?
The reason is not that we are trying to do too much.
The reason is: what is our objective?
And our objective is to bring this decentralization
to the real world.
Now, building a cryptocurrency is great because it kind of kickstarts that process,
but it doesn't effectively deliver the end result.
Because our end result is that we want to connect economic agents which generate economic value,
and not just generate it but unlock it, because there is plenty of economic value which is unlockable.
There is no framework which allows you to do that.
Now, our objective is not to build a currency.
Our objective is not to just build a ledger.
Our objective is to bring these economic values to life.
So that's our starting point.
So anything we do in the middle, whether it's a new type of ledger, is a step towards the objective.
It's not the objective.
So that's our approach at the moment.
Sure.
It's just, I guess, to us, it seems like many of those steps are, like, gigantic.
Even, like you said, you want to use a directed acyclic graph.
I mean, I think that's a nice idea.
We've done an episode before with the Spectre guys in Israel that wrote a very good white paper on that.
But it's also entirely unproven.
And so far, there's no... I mean, there's a sort of quasi-cryptocurrency, IOTA, right?
But they have like a central coordinator.
So it's not a real functioning cryptocurrency that does
that, right? So it's, I think, an interesting direction. But even on its own, like getting that
to work, like truly work and decentralized will be like, I think a massive thing. So it feels like,
okay, there's so many of these steps along the way that seems a big challenge that you guys have
ahead, at least. We've been thinking about this for a very, very long time, trying to figure
out how to get all of these component parts together in a way that allows us to deliver the thing
that we were imagining.
And it's just that wonderful position
where we are able to solve these problems
by drawing from all of these bits and pieces
and then combining them with a bunch of innovations
of our own to make something very, very different
that's there to do something very, very different.
And also, this framework is a framework
where we can plug and play as well.
So we're building it in a modular fashion.
So if new ways of cryptography come about which are better than the rest, you should be able to plug those in.
Now, at this point in time, we believe our ledger delivers all the tasks we want it to deliver.
It's not about which ledger is the best, which ledger is the fastest.
Can it deliver the tasks which we need it to deliver?
And unless it does that, that ledger is not
useful for us. So we have to build our own technology.
Abstractly, like, overall we have understood the division, at least. The division itself is clear to me. There's, like, lots of economic agents, and you're
building this ledger plus marketplaces and recommendation systems that allow completely new
pairings of economic activities to emerge between these agents, right? Like, your
ledger plus economic system is going to recommend things to the agents, like, do this and
you'll get paid that much, and then there'll be another recommendation, and then these agents can
choose to act on them and perform some task and get paid. And when millions of
agents start to work on a system like that and they do their small bits of actions, the whole
system is greater than the sum of its parts. It appears more intelligent than the intelligence
of the individual pieces. So you're trying to build something like that. Yes. I think, like, in general,
you come from a virtual worlds, machine learning background. We come from a very
blockchain background, where probably, like, we and our listeners got into this space because we found
decentralized money
interesting.
Right?
And we are
like really the money nerds.
Like we want new forms of money.
That's what got us into this.
So I think the main skepticism
you're going to get from the blockchain
community is that
the sum of all inventions that you're
proposing,
it seems like a very ambitious
target.
And somehow like even reading your white paper,
like the personal feedback
I had was like,
useful proof of work.
So that useful proof of work is a white paper in itself.
Scalable ledger, scalable ledger is a white paper in itself.
The marketplace, that's a white paper in itself.
And it's funny you should say that.
We recognize that.
And we are actually creating white papers on all of those subjects individually
and a bunch more as well because they deserve the individual attention.
And there's simply not enough space to talk about them in the detail
that people want to see in the technical white paper as it is.
So we are indeed working on white papers describing all of those things in more detail.
But also, I don't want to be apologetic about it being ambitious.
I think we are going to be ambitious.
It is ambitious and we expect to deliver the ambition.
What I'd like to just point out is, although we are ambitious, all of this is done in a methodical way.
So, for example, the innovation in the ledger will in itself be deployable, and that's our first stage of deployment.
On top of that, when we have the open economic framework, that will be deployable as a stage.
So we're not trying to deploy the whole thing in one go.
And for people who like money, what this would do for you is that it will give you a place to spend it as well, because, well, with everything that is going on, you need this currency to deliver some real value as well.
We're not just coming in to do crypto trading; we're saying there is actually a utility for this currency, and not just this currency, any currency. But you need
to have that framework where cryptocurrency actually becomes usable. This is a structure where you
cannot deploy fiat currency. And the reason is there's millions of transactions, very low value,
happening at a very high frequency, building in an economic framework which is not governed by a
centralized organization. How do you deploy such a system? What we want this project to do is to bring
the cryptocurrency to the real world effectively.
So you just touched on exactly what we wanted to come to now, which is, okay, so what is
the sequence and the timelines that we have here?
So can you talk about, okay, so you mentioned a whole bunch of new white papers coming, and I think
it would be very interesting to see a more detailed explanation of this.
So, you know, when are those released?
When is there going to be some kind of alpha, something to use?
and then these different parts of the systems,
what time frames do you think there will be available?
Well, we have a bunch of the key innovations up and running
on a private test network in our office right now.
That includes the scalable ledger.
We have the virtual machine related to that, which is going to be used
for the useful proof of work, and which we're using to test
a variety of different things
and get all of those systems up and working as we imagine.
We have a basic open economic framework that provides an environment to agents and agents that are able to transact and explore and interact on that.
From a white paper perspective, we're looking at, give or take a week or two, about a white paper a month from now until the early summer, where we're going to be scaling that up a bit, because we've got a bunch of things that we want to talk about relating to the actual economics of the space, but also security issues, and how to develop proper documentation on developing
agents and all of the opportunities that exist there.
And we're planning on having a public test network that will be available in the summer.
And that'll be when absolutely anybody will be able to grab that code and they'll be able
to build agents and see things actually work on the network.
The quality of the digital environment that's presented and the kind of predictions
of machine learning stuff that will be available will increase gradually in the months after
that leading up towards the back end of the year.
And we're looking at a main network release in the first half of 2019.
So that's our current development and rollout plan.
So what functionality do you expect from this test network?
I think it will be there in like June or July, right?
Yeah, give or take.
And what's important is for it effectively to be a complete vertical slice
where the basic operations that you would expect on such a network are up and running.
And that includes all of the key innovations working as one would expect.
But some of the detail won't be there, particularly relating to some of the prediction stuff,
because the data won't exist on the network,
and we'll be looking to work with a number of partners to bootstrap that initial data
to get the learning models working.
And that's an opportunity for people then to be able to get hold of the software
and for them to be able to build agents and experiment
with what might be possible and actually see those agents working on the network.
That we're particularly excited about.
Of course, we can imagine all sorts of things that people can do with the network.
But I would guess that that's absolutely nothing in comparison to what other people will imagine.
Yeah, so that was my sort of next question, which is when you look at a vision like that,
you can see that, yes, if the technology were to exist, the application space might
be huge, because this is about agents: they can reside on all devices, they can represent
humans, etc. And this is about delivering a set of interesting actions for agents that they
can do and generate economic value and the generation of combinations that the agents themselves
might not come across. So abstractly, this kind of system is applicable to a lot of things.
Right, but what do you think is the practical application in the short term? If you have a mainnet
release next year, what do you think are the kinds of practical things you can build
with this system, even when there's not lots of data and lots of learning in the ledger itself?
Yeah, you're absolutely right. Bootstrapping a system like this is something that we have to
pay an enormous amount of attention to.
And I liken the thing to virtual worlds.
Nobody wants to be the first person to walk into a shared massively multiplayer online game.
And likewise, you don't want to be told that, well, what can you do with this?
Well, you can do anything.
Because actually, if you stand someone in the middle of a world and say you can do anything,
they tend to stand there, not really knowing quite where to start.
And we're acutely aware that there are a number of marketplaces and use cases where,
Fetch is particularly applicable. I mean, I'll touch on transport as an example, because I've
already mentioned it. I'm sure Humayun's got a couple that he'd like to talk about as well.
Transport is one of those things as I mentioned earlier. It's got a huge number of moving parts.
They don't really play very well together. It's operated by a number of centralised entities,
trying to solve complex problems from the top down. And utilization is nowhere near as good
as it potentially could be. And coming up with an environment where actually those complex problems
and all those moving parts could work together to solve themselves from the bottom up is an amazing thing
in that it could really transform the way in which transport works.
For the benefit of everyone as well, not just the users of transport,
but also the suppliers of all of those moving parts,
because they can get better utilisation,
but also better abilities to manage these things and smooth some of the peaks and troughs.
And bootstrapping something like that, there's an enormous number of corporate partners
that we're talking to and going to be working with in the coming months
to ensure that the kind of data that's in the network is the kind of data that we'd need
for usable predictions to be able to be delivered in that subject area, but also access to
pre-existing data sources and sensors that are out there.
And we've got a concept of an agent class called an API AEA, which is effectively an agent
that bridges the old-world systems and computers to the new world and allows them to
exist inside the Fetch space as an agent.
And those are all the kind of things that we can do to ensure that the kind of data
and the richness that you need to be able to solve these problems in a way that's genuinely
useful to people is possible in the transport area right from the very beginning.
I will add to that.
So my background has been in supply chain and predictive maintenance because it connects
us all to the commodities sector. So what is quite interesting is that whereas we are building this
dream, we are also very conscious of the commercial returns on such a system because we want to
deliver commercial returns as soon as possible. So as Toby mentioned, there is a lot of traction
in the transport sector, but that leads into something bigger than that, which is a supply chain
sector. Now, if you combine logistics, shipping, and predictive maintenance together, I mean,
just to give you an example, one of the biggest returns which is expected in the short term
from predictive analytics is based on predictive maintenance, because we're starting to get
towards condition-based maintenance anyway. So it would be a very easy win or a very, you know,
quick area to get the returns from.
So we're targeting some low-hanging fruits,
but then we want to build on top of that.
But they have to be done in the right way,
where we can build the right prediction models.
And there is a lot of information,
which is very low value,
which is sitting with the academics,
which is sitting in the academic institutions,
which is not being utilized.
What this system will enable you to do
is to bootstrap with cheap data and then build some very cool predictions on top.
Cool. Awesome. Well, thanks so much for sharing this and for joining us today.
Now, we're going to have links, of course, to your website, white paper.
But if people want to learn more about it, get involved, get in touch,
what's the best way? Should they wait for the test net release?
Where do you want to send people to?
We have a telegram channel and our website where people can interact with Fetch directly.
And of course, all the, well, we're there on Twitter and we're going to be publishing an increasingly large amount of stuff on our website in more detail.
And we're always interested to hear from people about how they might be able to work with us with Fetch.
Well, Humayun and Toby, thanks so much for joining us today.
Thank you very much.
Thank you guys for giving us the opportunity.
Appreciate it.
And of course, thanks so much to our guests for once again joining us.
So we're putting out new episodes at Epicenter every week.
You can either get the audio podcast on any podcast application
or you can get the videos on YouTube.com slash Epicenter Bitcoin.
And if you want to support the show, you can do so by leaving us an iTunes review
that helps new people find the show.
And yeah, thanks so much.
We look forward to being back next week.
Thank you.
