Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Gilles Fedak: iExec – Blockchain-Based Fully Distributed Cloud Computing Infrastructure
Episode Date: April 11, 2017
For decades, science and academia have leveraged distributed computing to solve massive computational problems. Distributed grid computing schemes allow donors to volunteer their desktop computer's idle resources toward scientific projects in physics, biology, and chemistry, where large amounts of parallel computing resources are necessary. Relying on software like BOINC, these networks provide features such as built-in fault tolerance and result verification. And with the proliferation of mobile and IoT, the potential for massively distributed grid networks has never been greater. Gilles Fedak, a researcher at the French computer science research body Inria, joins us to discuss a new project which aims to build a high-performance distributed cloud infrastructure marketplace. Relying on mature grid computing technologies, iExec utilizes Ethereum to organize a peer-to-peer marketplace of computing resources, allowing anyone to rent their idle resources to grid networks. If it succeeds in executing on its vision, this game-changing project could revolutionize distributed computing through cost reduction and the commoditization of resources.

Topics covered in this episode:
- Gilles' background as a distributed computing researcher
- Distributed computing and its applications in science and industry
- The problems we see in distributed computing networks
- The iExec project and its vision for a distributed computing resource marketplace
- How iExec works as an Ethereum smart contract
- The different components and participants of iExec
- The iExec token and upcoming crowdsale
- The project's business model and roadmap

Episode links:
- iEx.ec Website
- Gilles Fedak Home Page at ENS Lyon
- iEx.ec White Paper
- iEx.ec Crowdsale

This episode is hosted by Meher Roy and Sébastien Couture. Show notes and listening options: epicenter.tv/178
Transcript
Discussion (0)
This is Epicenter, episode 178 with guest Gilles Fedak.
This episode of Epicenter is brought to you by Jaxx.
Jaxx is the user-friendly wallet that works across all your devices and handles both Bitcoin and Ether.
Go to J-A-X-X dot I-O and embrace the future of cryptocurrency wallets.
And by the Ledger NanoS, the hardware wallet which sets the new standard in security and usability.
Get it today at ledgerWallet.com and use the offer code Epicenter to get 10% off your order.
Hi, welcome to Epicenter, the show which talks about the technologies, projects, and startups driving decentralization and the global blockchain revolution.
My name is Sébastien Couture.
And I'm Meher Roy.
In the last episode, we interviewed Truebit.
And this episode is also focused on distributed computing, the idea of blockchains and distributed computing.
And we are focusing this episode on a project called iExec.
So with us as our guest is Gilles Fedak, who is the co-founder of iExec.
And he is a permanent research scientist at Inria,
which is a French public body for research in computer science.
Gilles, welcome to the show, and we are glad to have you here.
Thank you.
So before we start, tell us a bit about your background.
What have you been doing over at Inria?
So yeah, I'm a researcher at Inria.
My research background is in parallel and distributed computing, and more specifically my research topic, the area where I've done a lot of work, is what we call desktop grid computing, which is basically the idea of using a very large number of machines on the internet, typically desktops, but it could also be data centers, to execute very large parallel applications.
So I've been doing this since the 2000s. I developed a lot of software in this area, and also algorithms.
And I addressed many, not all, but many of the research challenges around this topic: everything around data management, scheduling, fault tolerance, resilience, result certification, interoperability with existing e-science infrastructures, standardization, working to establish actual infrastructure based on this paradigm, the application of this computing paradigm to other forms of distributed computing such as cloud computing, and so forth, and quality of service.
So what is distributed computing used for? I mean, I think a lot of people have heard about distributed computing. It's a term that, you know, sort of gets thrown around, I think, now in this age of cloud and cloud computing, all this stuff.
What is it used for, in what industry sectors, for what purposes?
So distributed computing is really a broad term, and it's a kind of very large family.
In this family you have different categories, if you want, depending on the infrastructure.
So for instance, cluster computing: this is when you have a data center with many machines that are usually very homogeneous and, you know, very well connected.
And the further you go, the more you have infrastructures that are, you know, more loosely connected.
So it goes, for instance, from cluster computing to grid computing.
So grid computing is a network of clusters, typically.
Then to cloud computing, and then, at the very end, it's desktop grid computing, or volunteer computing; it's the same idea.
But here you have an infrastructure which is very loosely connected, and the nodes can leave and join the network at any time.
So the characteristics are very different.
So for each one of these infrastructures, you must find the correct algorithms and the correct software that give you the maximum performance or the maximum usability.
And this is why there are some differences between one and the other, and we don't build the software the same way for one or the other infrastructure.
And why is this an interesting research topic for you?
What brought you into this space of distributed computing?
So that's an excellent question.
So if you want, it sounds a little bit exotic.
And when we started this 15 years ago, it was a little bit exotic.
If you are a little bit old, you remember that at that time, 15 years ago, even doing computing on a PC was something crazy.
People were using Unix workstations.
They were super expensive.
It was really the beginning of Linux.
And at that time, in the lab, when you were saying, okay, I'm going to use a PC instead of a Unix workstation, people were looking at you strangely.
And if you were saying, I'm going to use a lot of PCs instead of a supercomputer, people were thinking that you were just stupid.
It was basically like this.
And in fact, that was very important, because at that time, you know, we wanted to do this computing over the internet; that was the notion of large scale. We were designing systems for infrastructures of thousands of nodes, hundreds of thousands of nodes. And at that time...
Was there a cost, sorry, was there like a cost parameter in there as well? Like, was it cheaper to use distributed computing than renting a mainframe?
Yeah, absolutely. Because you don't have to buy the machine. That was the main motivation. You don't have to buy the machine, you don't have to pay for the electricity, you don't have to maintain the machine, and you don't have to upgrade the machine.
And for doing computation where you have a huge number of tasks, this is very interesting.
And from a research perspective, what was very interesting, in fact, is that because we were addressing large-scale systems, we were able to raise some issues well before they happened in traditional supercomputers.
So for instance, at that time, fault tolerance wasn't an issue at all for supercomputing, because those supercomputers had like tens of nodes.
Now the largest supercomputers have like 10 million nodes.
So failure is a normal event, just like on the Internet.
And so as a researcher, it enabled me to address, you know, challenges that became totally mainstream, but well before.
And for instance, technically speaking, the kind of algorithms we had for desktop grid computing are very similar to the way we are designing systems like Hadoop now.
Hadoop works very similarly to a desktop grid system, because Hadoop, following Google's MapReduce design, was built in such a way that it can cope with an infrastructure where faults are a normal event.
So what kind of desktop grid computing systems have you personally worked on, and what did these systems do?
Yes, so in my PhD thesis, I proposed a software called XtremWeb.
XtremWeb: this is the idea of building a peer-to-peer network for doing computing.
So at that time I was a young student, I was very influenced by Napster, and I was downloading MP3s from other people.
And my student mates, we were doing a lot of simulations, and they were always asking for CPU time, CPU time.
And so my idea was there: why don't we build this sort of Napster, but for CPUs?
And that was basically the idea of XtremWeb.
So after that, you know, this is a project that allowed me to, I used it as a sort of research platform, you know, to investigate different ways of doing computation.
And after that I did a lot of things around quality of service; for instance, I developed SpeQuloS, I developed BitDew.
BitDew is a very interesting piece of software, in the sense that it allows you to do effective data management for this kind of infrastructure. And this is something difficult.
You must find new paradigms to do this, you know, because locality is important.
You don't know the machines you are going to work with.
That's something really important.
You know, when you have a cluster, you know the set of machines that you are working with.
So you can keep track of things.
Here, on such a kind of network, machines can pop up, they can leave.
And so the way you're going to do the data management is very important.
There are issues with locality: you don't want to move the data unnecessarily; you want to minimize data movement, this kind of thing.
And then we did MapReduce over the internet; we were the first ones to do that. Many, many different things.
And one of the most important results that we had during this period was that we participated in several European projects that were aiming at establishing a real infrastructure based on this paradigm, usable by regular scientists.
And so what we did, it was something like four different EU projects, FP7 projects.
So we were at least nine partners from everywhere in Europe. And we glued together several of these infrastructures, and we bridged them to the regular European e-science infrastructure.
So at the end, that was quite interesting, because a regular scientist, you know, could launch their jobs on the grid, so we call that grid computing, on the European grid, and at the end it could end up being processed by your desktop PCs on the internet.
And this totally transparently, with the same level of security, traceability, and accountability that is required by the European grid infrastructure. You cannot do things without being authorized to do so, and so forth.
So here I learned a lot, because it was really a practical thing, you know. We really established a real infrastructure.
This was very important in this work.
I would not have done iExec without this experience.
Never.
When I listen to this, I'm kind of struck by how it seems that there are these desktop grid computing infrastructures for various kinds of research, right?
So there's this European infrastructure, and then there's, I think, this BOINC that also has some kind of grid computing, with volunteers providing computing power.
So I think in various kinds of research you do have distributed computing, but this hasn't really percolated down to the mass market, where a normal developer building a web application would use that kind of computing infrastructure.
Why do you think this technology as such hasn't jumped from a research use case to a wide commercial, everybody-is-using-it scenario?
Yes, because in fact it's very effective, but for, I would say, very specific applications.
So typically you mentioned BOINC. So BOINC is the middleware on which many projects of this kind are built, like SETI@home, Folding@home, and this kind of thing.
And typically with BOINC, I think there is a kind of misunderstanding about it.
It's a very good, an excellent software, and there's no problem about that, but it's effective for only one kind of application.
It's the applications that are, so first, embarrassingly parallel, with very few I/O, no communication; all the tasks should be independent, but there should be a huge number of tasks, really a huge number of tasks.
And typically, SETI@home is this kind of application.
So you have the radio telescope and it streams data, and all those data are being processed, but it has existed for years now, like, I don't know, 15 years, something like this.
Okay?
So for this kind of application, yes, it's super effective.
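The workload shape described here (many independent tasks, minimal I/O, no inter-task communication) can be sketched in miniature; the task function and inputs below are invented for illustration, and a thread pool stands in for a grid of volunteer machines:

```python
# A minimal sketch of the "embarrassingly parallel" workload shape that
# desktop grids like BOINC handle well: a huge number of independent tasks,
# no communication between tasks, very little I/O per task.
# The task function and inputs here are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk_id: int) -> int:
    """Stand-in for one independent work unit (e.g. one slice of
    radio-telescope data). No task depends on any other task."""
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000)) % 97

# Because every task is independent, each chunk can be scheduled on any
# worker, in any order, and rescheduled elsewhere if a node disappears.
chunks = range(100)  # a real grid would have millions of work units
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(analyze_chunk, chunks))
print(len(results))
```

The key property is that the results are identical regardless of which worker runs which chunk, or in what order, which is exactly what makes volunteer machines that join and leave at will usable.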
But most of the time, people don't have applications of this kind.
If you are, I don't know, an engineer and you're working on drug design and you have to, you know, do a simulation for your medicine or something like this, usually you want your result today, tomorrow, by the end of the week.
But you don't want to recruit people on a forum, launch your application, and have the result like one year later.
So that's the reason why it has not been so effective.
And actually we wrote a paper about that, comparing the cost of doing a computation on the cloud and paying for the resources, versus doing a computation on the Internet and not paying for the resources, but paying for people to set up the infrastructure.
And what happens is that you really have a threshold, and the threshold is quite high, actually, before it becomes worthwhile.
Okay, that's pretty clear.
So the use cases then are just not mass market use cases or even industrial use cases.
They're very specific, very targeted use cases.
So you need a large amount of inputs, or sorry, a small amount of inputs, but a lot of parallel threads happening at the same time.
Is that right?
Exactly.
Okay. And tell us about, you mentioned in, I think it was your DevCon talk, this thing, fog and edge computing, which I think takes distributed computing to the mobile phone network. Is that right?
So yeah, that's a very important evolution that is going to come.
It basically comes from the fact that we know already that, you know, the cloud computing, as it exists now,
is becoming a bottleneck for many applications.
In the future, it's going to be really a bottleneck.
You know, in the future, we're going to have more and more data generated by more and more devices
that are really distributed.
So think about, you know, for instance, cars capturing video, and you want to do deep learning on this video, because you know you can do some security based on that, you know, autonomous cars and this kind of thing.
So the more you're going to have those data generated in a very distributed fashion, the more the centralized cloud, as it exists now, is going to be a bottleneck.
And so what people are thinking of is to kind of, you know, move a part of the processing that happens now in the cloud along the network, up to those devices.
So fog and edge; okay, I'm always a little bit confused between what is fog and what is edge, but that's basically this idea.
So some, for instance, are relying on the broadband network that is, you know, operated by the telco providers.
So Deutsche Telekom, AT&T, Orange, etc.
And for them, for instance, some colleagues of mine did a study a couple of months ago that compared the cost of having a centralized cloud with the cost of operating this cloud in a hybrid way, centralized and also distributed along the network.
And that's really a cost saver.
It's really cheaper, you know, to distribute your cloud.
What people often don't know is that at the moment the costs are mostly for electricity, data, and network, and not necessarily, you know, CPU time.
That's very important.
So the more you can decrease this, the lower the TCO is going to be.
I see.
So with centralized computing, you're saying we're not necessarily paying for CPU time.
What's more expensive in any type of computing is the data storage, the network bandwidth, and the electricity to run the machines.
Exactly.
And there is also some competition.
I mean, it's also a shift, or it could be a shift, in who is making the money.
So at the moment, for instance, in France, just to take an example, but I guess it's similar in other countries.
Telco providers don't make so much money with respect to what's going through the tube.
So for instance, in France there was an argument between one of our internet providers, called Free (free.fr), and Google, with YouTube.
And they were not super happy to kind of give the money for free to YouTube, you know, to Google.
I remember this.
Yeah, you remember that.
I remember because every time I wanted to go on YouTube, it would be super slow when I was using Free.
Exactly.
And so with this shift, you know, with this shift and having the data centers along the network, it means that the money, you know, it could be Google paying to have access to this infrastructure at some point.
So, you know, there could be a huge industry fight underneath this shift of paradigm between the centralized and the decentralized cloud, because it's not going to be the same operators.
And at the moment, fog and edge is pushed by Huawei, it's pushed by Cisco.
It's not the same guys doing that.
So the future will tell.
Let's take a short break to talk about Jaxx.
Jaxx is your wallet, your complete user interface to cover all your blockchain needs.
I've been using it and I've been loving it.
And Jaxx supports a lot of different cryptocurrencies.
It supports Bitcoin, Ether, Litecoin, Ethereum Classic, Zcash, Augur REP, and they're adding many more, keeping up with users' needs.
Now with Jaxx, the nice thing is that you can manage all of those coins within a single wallet, and you are in control of your own private keys; they're not on their server.
And there's a single 12-word seed that you can use to back up your wallet and all your coins, and sync them across different devices.
Talking about devices, they're on pretty much any device that you can think of.
You can get it on PC, Mac, Linux; you can get it on mobile devices like Android and iPhone.
You can get it on tablets, or even as browser extensions for Chrome and Firefox.
And on top of that, in Jaxx, you can actually exchange different cryptocurrencies for each other, because they've integrated ShapeShift.
And more partnerships and integrations are coming down the line in 2017 that are going to make Jaxx even better.
So Jaxx is really making blockchain and cryptocurrency accessible and easy to use for the masses.
Make sure to get your own Jaxx wallet at jaxx.io, or you can get it from any of the app stores you are using.
We'd like to thank Jaxx for their support of Epicenter.
I think we've got a pretty good understanding now of, you know,
the distributed computing ecosystem, you know, where that came from
and what it's used for and the use cases there.
And it seems like it's a very well-established form of computing today,
as far as you've explained it.
So now take us to the next step.
What does blockchain technology bring to distributed computing that we didn't have before
and maybe lead us into iExec, and how iExec is addressing specific pain points or specific problems
that we see today with distributed computing?
Yeah, so that's a very important point.
So the blockchain, I think, is changing everything.
I mean, I've been a long observer for all these kinds of systems.
I went through all of them, but blockchain, it changes a lot of things.
It changes a lot of things because now you can decentralize the business,
you can automate the payment, you can have, you know,
very close interaction between several businesses,
and the more I discover about this, you know, the more ideas I have.
Before that, doing something like a decentralized market network was just impossible.
Now you can do that, and it's not so difficult to do.
Honestly, there are some challenges, but it's not that big.
And so what we did so far with iExec is kind of merge, you know, those two technologies together, blockchain and distributed computing.
Tell us what iExec is seeking to do. What's the big vision?
So the goal of iExec is really to decentralize cloud computing.
So what we want to do is to establish a market network, and this market network is managed by the blockchain.
So it's, I mean, we are not managing the market network.
It will run autonomously on the blockchain.
In this market network, you're going to have providers for servers, for application, for data.
So typically, you know, in a nutshell: servers are like infrastructure as a service, data is like data as a service, and applications are like software as a service, as they exist today.
So we call that a cloud because, you know, that's what people know at the moment.
But of course, it's going to be totally different, because it's going to be much more decentralized, it's going to be much more, yes, distributed, I mean in terms of infrastructure, and much more open.
And with security, transparency, resiliency, all, you know, this blockchain, it really changes
the way you design distributed application.
And there are really a lot of new notions and new concepts that are still not there
in distributed computing.
For instance, the idea of consensus is not something that really exists in cloud computing as such.
Typically in cloud computing you have SLAs.
So you give a kind of probabilistic guarantee
that your contract is going to be met,
but you're not 100%, you know, it's an SLA.
You give some numbers for availability.
So here, I mean, I don't claim that all of the concepts coming from blockchain are going to be adopted massively by, you know, every software in distributed computing, but there are going to be some synergies between the two, this is for sure.
So there are a couple of other projects that are trying to somehow merge distributed computing and the blockchain,
and two of them are Golem and Truebit.
And from the outside, their problem statement seems similar to yours.
But can you tell us what's the difference between these three projects, and what is unique about iExec?
So what is unique to iExec compared with the other two is really the fact that we come with a background in cloud and distributed computing.
Okay, so we know a certain number of things, we know how to do them.
And we also, I think it's even more important, we know that there are some specific things that we don't really want to do.
We know the limit, if you want, and that's really important.
What is really important is that when dealing with such kind of infrastructure, you must give your users the right paradigm for using this infrastructure.
That's something really important.
So with respect to Golem, I think the main difference,
so of course there are similarities.
And I really like Golem, I really like what they are doing.
And I think we are going to, I hope we're going to work together, because there are some commonalities, and it would be stupid to work totally separately.
With respect to Golem, I think the big difference is the vision.
But Golem's claim is that they want to do a supercomputer based on this paradigm.
This is not a story that I buy personally.
A supercomputer, so first, the definition is very simple for a supercomputer.
Every six months a list is established, it's called the Top 500, and in this list you have the list of machines that are the supercomputers.
Okay, to be in this list, you must run a benchmark; it's Linpack, a linear-algebra benchmark.
You run this benchmark on a machine, you look at the number of floating-point operations per second you can achieve, and you're ranked; and if you rank among the top 500, you have a supercomputer.
So I mean, that's the definition.
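The ranking criterion just described (measured floating-point operations per second) can be illustrated with a toy sketch. This is not the real Linpack/HPL benchmark; it only shows the principle of timing floating-point work and dividing the operation count by the elapsed time:

```python
# Toy illustration of the idea behind a FLOPS benchmark: time a stream of
# floating-point work and divide the number of operations by the elapsed
# time. Real Top 500 runs use the HPL (Linpack) dense linear-algebra code;
# this sketch only demonstrates the measurement principle.
import time

def measure_flops(n_ops: int = 1_000_000) -> float:
    """Time n_ops multiply-add pairs and return floating-point ops/second."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        acc = acc * x + 1.0  # one multiply + one add = 2 floating-point ops
    elapsed = time.perf_counter() - start
    return (2 * n_ops) / elapsed

flops = measure_flops()
print(f"~{flops / 1e6:.1f} MFLOPS (interpreted Python, far below hardware peak)")
```

A supercomputer's Rmax score is the same ratio, just obtained from a highly tuned parallel linear-algebra run across the whole machine.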
So having a supercomputer based on that, I mean, doesn't make sense.
And people don't want that.
So iExec has a very different vision for that.
Our vision is really the distributed cloud.
So it's really, we don't, I don't believe in one big application
that would use thousands and thousands of machines.
What I believe in is thousands of DApps doing off-chain computation and accessing a huge computing infrastructure, totally distributed.
That's very different.
The way you design the software for that
is totally different.
Incentives must be different,
the performance are different, et cetera, et cetera.
From what I understand from what you're saying, and the feeling that I'm getting, is that perhaps using the term supercomputer is somewhat misleading, because that's not really what this is.
Could we expect something like Golem to really compete with this Top 500 list of supercomputers?
I don't know. That's their problem, not mine.
No, but I think that maybe the difference between, so I don't know very much about Golem, and I'm hoping we'll have them on at some point, but between Truebit and iExec, it appears to me that the difference is that Truebit is trying to bring complex computations to smart contracts, while iExec is trying to bring the blockchain to distributed computing?
No, I think that we are, in some way, closer to Truebit than to Golem.
Because our goal now, I mean, in the first iteration of iExec, our goal is really to be able to do off-chain computation from the smart contract.
And for instance, at EDCON, this is what we demoed.
And we demoed this, okay, it was a toy example: vanity addresses.
So we demoed Bitcoin vanity address generation.
So typically this is a smart contract. The smart contract describes the task that is going to be executed off-chain.
The first goal of iExec is to allow those DApps, so what we call DApps, the distributed applications running on the blockchain in the form of smart contracts, to perform a part of the computation off-chain.
So this is really our primary goal now. This is really what we want to do first.
And the way we do this is to execute that off-chain using, I mean, using this distributed computing technology.
Okay. So it's like you're bringing the tools of distributed computing, over which you have worked 10 years or even more, to solve the problem of smart contracts not having enough gas to do many things.
So the smart contract, there's a gas limit, it cannot do more than so many steps.
So you're creating a method by which the smart contract can delegate part of the computation to your distributed infrastructure, get back the results, and then continue processing.
Exactly.
This is exactly it.
And I think that's super important.
So we are going to do this first for computation, then for data, and then for machines.
Okay.
So at first, it will be basically this: we give the smart contract the ability to do off-chain computation on a restricted set of machines.
So this is for the first six months of the project. So here it's not just anyone who can provide their machine.
We are basically managing those machines, or it's partners who are providing these machines.
the market network first to other server providers, data center providers,
and then later to data providers.
And at the end, you're going to describe your computation, you know, mixing providers: some providers for data, another for an application, another for a server.
And then iExec does all the magic, which is, you know, deploying the data, deploying the application, doing the computation and the verification, doing the payment, etc., etc.
So that's going to change.
I think, in my opinion, it's going to change a lot, you know, with respect to the kind of applications you will be able to execute on Ethereum.
So there's a great potential for that.
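The Bitcoin vanity-address demo mentioned earlier is itself a classic embarrassingly parallel search: keep generating candidate keys until the derived address starts with the desired prefix. A simplified sketch of the idea follows; a plain SHA-256 digest stands in for real Bitcoin address derivation (which needs secp256k1 and Base58Check, i.e. external libraries), so only the brute-force structure is faithful:

```python
# Simplified sketch of a vanity-address search. Real Bitcoin addresses come
# from secp256k1 public keys hashed with SHA-256 + RIPEMD-160 and encoded in
# Base58Check; here a plain SHA-256 hex digest stands in for the address so
# the example stays dependency-free. The brute-force structure is the same:
# independent trials, trivially split across many machines.
import hashlib
import itertools

def fake_address(private_key: int) -> str:
    """Stand-in address derivation: hash the key and take a hex digest."""
    return hashlib.sha256(private_key.to_bytes(32, "big")).hexdigest()

def find_vanity(prefix: str) -> tuple[int, str]:
    """Try keys 1, 2, 3, ... until the derived 'address' starts with prefix."""
    for key in itertools.count(1):
        addr = fake_address(key)
        if addr.startswith(prefix):
            return key, addr

key, addr = find_vanity("ab")  # a short hex prefix is found quickly
print(key, addr)
```

Because each candidate key is checked independently, the search range can be sliced across any number of untrusted workers, which is what makes it a natural first demo for off-chain computation.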
So today we have this whole generation of DApps coming, right? And whenever you see a builder of a DApp, and whenever I myself think of a smart contract, I'm always kind of limited by the gas and the things that I can do, and I will build it in a particular way.
And today all of these ICOs are happening, and their architectures are all kind of built around this constraint.
Now tomorrow, something fundamental like iExec or Truebit, I think Truebit's problem statement is somewhat similar, comes along, and suddenly a new generation of architectures opens up.
And maybe that new generation of architectures will also bring about new projects that do exactly the same as current projects are trying to do, but they just do it better, because they're using better infrastructure, and they end up killing these projects.
Exactly.
Exactly.
It's going to change a lot of things.
And, you know, for instance, at EDCON, there was a really funny idea.
If you remember, there was a project, I think it was called Etherisc.
It was kind of an insurance, okay?
Insurance.
So you can describe something that could fail, and you can insure this.
And if it fails, you get, you know, reimbursed, like an insurance.
So that was quite interesting, because all these kinds of projects are all described as smart contracts on the blockchain, and you can interoperate with them.
So, you know, up to now, I've been doing fault tolerance.
So if a machine fails, okay, I try to find another one and restart the computation on this new machine.
But, you know, with Ethereum, it's kind of funny.
You could let the user decide that maybe he doesn't care about that; he just wants to insure his computation.
So that's funny.
You know, instead of trying to find a new machine, the guy can simply, you know, take out an insurance, estimate the risk, and maybe be reimbursed by another third party, you know, a financial service, if the computation fails.
And all this kind of business, you know,
I mean, you know, cooperation between businesses.
I think it's going to be much, much easier
to put in place with solutions like blockchain and Ethereum than it was before.
You know, before we had web services, but web services and payment don't come together very well.
So we had huge standards about web services.
It's huge.
You know, I mean, people have been working on that for years and years and years, and I don't even think that there's a single solution that allows you to do payment between various web services when you are using a series of them.
So, in the future, I think it's going to be really much easier to do that with the blockchain.
So what the blockchain is missing at the moment,
it's really an infrastructure for any kind of application.
That's really what's missing now.
Sometimes, in many places, you hear the quotation, which is something like: the blockchain is not the killer technology, but the blockchain is the technology that will enable the killer technology, right?
So what we have today is something, but it's not the real deal; using the blockchain, we will build these other things on the side, and all of that combined is going to power the real tech revolution, right?
And from that perspective, being able to outsource computation, outsource data storage, all of these features seem to be quite important.
Yes, I think so.
So with that background, perhaps we could move into the question of how you're building iExec and what are the main components you're using.
So give us an overview of the components.
Okay, so let me describe the way it works at the moment and the way it's going to evolve. What we did so far is really build a POC, a proof of concept. A proof of concept means that we are confident we can have something up and ready in a couple of months. So it's working as a POC now, but what is interesting is that we can do it really end to end. At the moment, we built a bridge, an oracle, that observes some smart contracts on the blockchain. We are using the smart contracts as a way to provision resources and do the payment. So you have smart contracts that describe tasks and resource providers, and when you do a transaction on those smart contracts, we can observe it using the bridge, or oracle, provision the computing resources, deploy the application and the data, do the computation, and bring the result back to the smart contract. So this is how it works now.
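The bridge/oracle loop just described can be sketched in a few lines. Everything here is a stand-in: the `Chain` class simulates the smart contract's task and result storage, and the squaring step stands in for the real deployed application; none of it is the actual iExec interface.

```python
from dataclasses import dataclass, field

@dataclass
class Chain:
    """Toy stand-in for the blockchain: tasks go in, results come back."""
    pending_tasks: list = field(default_factory=list)
    results: dict = field(default_factory=dict)

    def submit_task(self, task_id: str, payload: int) -> None:
        self.pending_tasks.append((task_id, payload))

    def post_result(self, task_id: str, result: int) -> None:
        self.results[task_id] = result

def run_bridge(chain: Chain) -> None:
    """Observe pending tasks, execute them off-chain, bring results back."""
    while chain.pending_tasks:
        task_id, payload = chain.pending_tasks.pop(0)
        result = payload * payload  # placeholder for the real computation
        chain.post_result(task_id, result)
```

A real bridge would subscribe to contract events and provision worker machines, but the observe/compute/post-back cycle is the same.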
In the future, and this is something people may not have understood, at the moment we are of course the only ones deploying this oracle and those computing resources. But it's totally open, and in the future we're going to have many more businesses, or communities, running this kind of infrastructure. You can think of it a bit like the way miners are organized now, in mining pools: you can run your own miner, being a full node and also a miner, but most of the time people organize in mining pools. I guess the future is going to be a little bit like this. We call them workers, which is a good name, because a miner only does mining, while a worker can do very different tasks; so we call them worker pools.

At the moment, they are kind of isolated, if you want. They are all connected to the blockchain, they can all observe what's going on on the blockchain, and you can reserve all their resources through the blockchain. But in the future, they're going to be much more connected, and this is going to become a sort of side chain, we hope.

What is important to understand is that on the Ethereum blockchain at the moment, and I think it's going to stay like this for some time, at least for many years, you are quite limited in the logic you can run. There are parts of the algorithms you need for distributed computing that are going to be very difficult to execute on the Ethereum blockchain. So anyhow, you must have some components running side by side with Ethereum. This is important.

Just to give you an example: three years ago, with Mircea Moca, one of my colleagues, we proposed an algorithm for scheduling. Scheduling is the algorithm which decides which task is going to be executed on which node. With Mircea, we designed a multi-criteria algorithm. This kind of algorithm is really important because it allows the user to express requirements such as: I'm ready to pay a lot, but I want my computation to go as fast as possible. Or conversely: I want to pay a very cheap price, even if it takes longer. And you can combine several criteria: it can be energy, trust, location, etc.
This kind of algorithm is memory-intensive and compute-intensive. The one we proposed was based on the PROMETHEE method: basically, you build a big matrix, you evaluate the different criteria, and then you do pairwise comparisons. You can't run that in a Solidity smart contract. Never.
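To make the matrix-and-pairwise-comparison idea concrete, here is a toy PROMETHEE-style ranking. It uses the simplest "usual" preference function and invented numbers; the real algorithm is more elaborate, so treat this purely as an illustration of why the computation is matrix-heavy:

```python
def net_flows(alternatives, weights, maximize):
    """Rank alternatives (e.g. worker offers) over several criteria.

    alternatives: list of tuples of criterion scores, e.g. (price, speed).
    weights: one weight per criterion, summing to 1.
    maximize: per criterion, True if higher is better (speed),
    False if lower is better (price).
    Returns PROMETHEE-II-style net flows; the highest flow wins.
    """
    n = len(alternatives)

    def preference(a, b):
        # "Usual" preference function: full weight if a is strictly
        # better than b on a criterion, nothing otherwise.
        total = 0.0
        for j, w in enumerate(weights):
            better = a[j] > b[j] if maximize[j] else a[j] < b[j]
            if better:
                total += w
        return total

    flows = []
    for i, a in enumerate(alternatives):
        leaving = sum(preference(a, b) for k, b in enumerate(alternatives) if k != i)
        entering = sum(preference(b, a) for k, b in enumerate(alternatives) if k != i)
        flows.append((leaving - entering) / (n - 1))
    return flows
```

The pairwise table grows quadratically with the number of offers, which is exactly what makes this kind of scheduling unsuitable for on-chain execution.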
So you must have different components running side by side with Ethereum, connected to Ethereum, to do part of the resource management, and so on.

Another thing that is important is that the notion of consensus as it exists in the blockchain is not always very relevant to off-chain computation. At the moment, it clearly comes from Bitcoin, and Bitcoin is about transferring money, so the consensus must be really, really strong: you don't want a money transfer to be only probably processed, maybe it happened, maybe not. But if you inherit this for off-chain computation, the notion of consensus has to be more flexible, because there are computations where you can totally afford to have a fraction of the results wrong. It really depends on the application, and you have different ways of certifying and verifying the results. You can have a situation where verifying a computation costs the same as running the computation once more, or it can be much cheaper: verifying the result can cost only a fraction of the original computation. Think, for instance, of rendering an image, doing a 3D rendering: you can verify the result just by spot-checking some pixels in the image, and of course that costs less. If you do some cryptography, sometimes you just have to verify the keys, things like that.
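The rendering example can be turned into a small sketch of probabilistic spot-checking. `render_pixel` stands in for a trusted reference renderer, and the sample size is arbitrary; the point is only that verification touches a fraction of the pixels:

```python
import random

def spot_check(claimed_pixels, render_pixel, sample_size=16, seed=None):
    """Probabilistic verification of a rendered image.

    Instead of re-rendering every pixel, recompute a random sample and
    compare with the claimed result. `render_pixel(i)` is assumed to be a
    trusted reference renderer for pixel i. Returns True only if every
    sampled pixel matches the claim.
    """
    rng = random.Random(seed)
    k = min(sample_size, len(claimed_pixels))
    indices = rng.sample(range(len(claimed_pixels)), k)
    return all(claimed_pixels[i] == render_pixel(i) for i in indices)
```

A cheat who corrupts only a few pixels may slip through one check; the sample size tunes the detection probability against the verification cost, which is the flexibility the consensus needs.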
So it must be a framework, and you must have the consensus on this side chain. This is work for the future; it's not ready yet, but this is where we're going in the next years. You must have a system where you still have the trustability that exists on the blockchain. You must be able to understand who did what: if a provider claims that he delivered 99.99% availability, you must be able to find this in the side chain; you must be able to find all the tasks that have been processed. But of course you don't need proof of work, and sometimes some errors can be totally okay. So it must be tunable.
And the performance requirements are also going to be very different. At the moment, the performance you get with Ethereum doesn't allow for distributed computing at all. You can have millions of tasks executed, millions of files downloaded, and if you need one transaction per action on Ethereum, it's just impossible. So in the future, I think part of the logic will stay on Ethereum: the important things, resource provisioning, payment, description of the tasks, description of the resources, all these kinds of things. And part of the consensus will really be done off-chain, with a dedicated off-chain protocol that can accommodate those performance requirements: low latency, high throughput, etc., even if you have some mechanisms where, for instance, you can go backwards.
On a blockchain like Ethereum, it would be bizarre to have a situation where you say: okay, I give you some money, but wait a little, maybe the money has not been sent; we have to wait for two days, and if nobody reclaims the money, you're going to have the money. But for executing computation, that's totally okay. You can say: let's assume all those tasks went right, for, let's say, 24 hours, and if that's not the case, we'll figure out what went wrong. But at least you have a result immediately.
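That "assume it went right, dispute later" pattern is essentially optimistic acceptance with a challenge window. A minimal sketch, with invented names and plain numbers for time:

```python
from dataclasses import dataclass

@dataclass
class OptimisticResult:
    """A result usable immediately, final only after an undisputed window.

    Times are plain seconds for illustration; 24 h is the window Gilles
    mentions as an example.
    """
    value: int
    submitted_at: float
    window: float = 24 * 3600.0
    disputed: bool = False

    def challenge(self, now: float) -> bool:
        if now < self.submitted_at + self.window:
            self.disputed = True  # someone claims the task went wrong
            return True
        return False  # too late: the result is already final

    def is_final(self, now: float) -> bool:
        return not self.disputed and now >= self.submitted_at + self.window
```

The requester gets the value right away; only a dispute within the window forces the system to figure out what went wrong.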
So all these kinds of algorithms, I think, are still to be designed in the future.
I want to talk about this verification part for a bit, because this is something we talked about with TrueBit last week, and they have a very unique approach, using game theory and the prover-verifier model to do verification of the computation within, I think, the Solidity smart contract. What you're saying, and correct me if my understanding is wrong, is that in distributed computing, the consensus on the results already exists. There are already protocols to come to consensus on results, and there are different consensus models based on what type of consensus requirements we might have. You mentioned one example where there is a threshold of faulty results that can be tolerated. These consensus mechanisms already exist, and they would operate off-chain. It's not up to the miners to figure that out; it's not up to the Solidity smart contract to determine whether or not computations were properly executed or the results can be trusted. That's happening in the distributed computing network, and then I presume the network can send proofs to the smart contract so that transactions can then be executed on the Ethereum chain. Is that right?
I think so, yeah. I mean, it's really a case-by-case basis. My assumption is that we are going to have a kind of consensus that fits most of the cases. But for instance, the way we're doing it at the moment is by comparing the results. First, one caveat: I don't know the details about TrueBit. I looked at it very quickly; unfortunately, I don't have much time at the moment. It's very, very interesting. I love it.

Yeah, and we're only using it as a comparison because it's so fresh. We just released the episode a few days ago, and it's so similar.

Yes.
So there's one thing that is really important. Up to now, with desktop grid computing, you're using machines that you don't trust, so we have mechanisms to do what we call result certification. Result certification is making sure that the result is correct; it doesn't guarantee anything at 100%. The first thing that is really new with this blockchain, and it's something TrueBit does very well, is the fact that now you can punish people if they do wrong. In desktop grid computing, you can't do that: if someone decides to send you a fake result, all you can do is discover that the result was fake, and then maybe you can blacklist him, but that's it.
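Result certification by replication, as used in desktop grids like BOINC, can be sketched as a majority vote over redundant executions. The interface here (a dict of worker results, a quorum) is a simplification for illustration:

```python
from collections import Counter

def certify_by_replication(results, quorum=2):
    """Send the same task to several untrusted workers; accept a result
    only if at least `quorum` of them agree.

    results: maps worker id -> claimed result (a hypothetical interface).
    Returns (accepted_result, suspects): suspects are workers whose answer
    disagrees with the majority, candidates for blacklisting (or, with a
    blockchain, for losing a security deposit).
    """
    counts = Counter(results.values())
    value, votes = counts.most_common(1)[0]
    if votes < quorum:
        return None, []  # no agreement: re-schedule the task
    suspects = [w for w, r in results.items() if r != value]
    return value, suspects
```

Without a blockchain, the suspects list only feeds a blacklist; with deposits at stake, the same mechanism can actually punish.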
Yeah, and I think that's the difference with TrueBit. With TrueBit, you're relying on one person or one cloud computer to do the computation, and then someone verifies it. Whereas here, it's a totally different use case: we're talking about specific types of computations that are distributed and where those fault tolerances already exist. We're not trying to say: this is an AWS server that you can use to do one specific type of computation on one CPU. iExec is for a totally different use case, so the verification logic obviously also needs to be different.

Exactly. But here, the good thing is that we can punish people. So we can have a court, I think they call that a court: a system where, typically, people agree that they're going to work together, things happen, and if someone claims that it happened the wrong way, you can have another step in the verification, where you trigger some sort of verification mechanism. Here you can, for instance, replicate the task a huge number of times, you can have an anti-collusion mechanism, this kind of thing. And once you've decided who cheated, you can really hurt him. So this is really something new. It opens a lot of new perspectives.
In my opinion, it's not more difficult than it was before. On the contrary, I think it's easier, because before, you could not punish people. Now you can punish people, and more than that, you can reward people for having behaved correctly. That's something really new. With iExec, for instance, our token supply is fixed, but another way of doing it would have been to issue tokens whenever people behave correctly. That could have been a very good way of incentivizing people. In my opinion, we now have many more ways of designing new algorithms for this. And it's true that from a research perspective, that's very interesting, because it didn't exist before. This idea of bringing together game theory and scheduling, for instance: those algorithms have not been invented yet. So that's a very good topic for a PhD student. If someone wants to start a new PhD with me, it can be a topic.
Let's take a break to talk about the Ledger Nano S, the new flagship hardware wallet by Ledger. I'll pass it over to Ledger's CTO, Nicolas Bacca, who can tell you all about Ledger's security features and SDK.

The Ledger Nano S is a personal security device based on a secure element, a screen, and buttons, so that you can verify everything that is done on the device and make sure that you are really doing what you wanted to do. Compared to our previous solutions, this device is based on the latest-generation secure element, the ST31 from STMicroelectronics. The ST31 uses a secure ARM core, which means that you can have the same ease of development that you would have on a generic microcontroller, but benefit from the security features of a secure element. Security features include an application firewall at the lowest level that lets you protect applications from each other, which means that you can load multiple applications on the hardware wallet, even post-issuance, and you as a developer will be able to leverage these features to load your own application without any kind of authorization from the vendor. We will be providing this device with an open SDK that lets you do anything you want with it. We provide sample applications for different cryptocurrencies, so Bitcoin, Ethereum. We will also provide a FIDO authenticator, and you will be free to add anything you like. For example, you could have some secure messaging, some encrypted chat, and you'll see that the solution is quite powerful and very easy to develop with.

The Nano S sets the new standard in hardware wallet security and usability. You can get yours today at ledgerwallet.com, and when you do, be sure to use the offer code Epicenter to get 10% off your first order. We'd like to thank Ledger for their support of Epicenter.
The key thing I understand from your explanation is this. With TrueBit, and I'm taking that as a reference because it's very fresh in our minds, TrueBit takes the approach that there's one particular verification methodology that's going to work in all cases: the smart contract delegates something to a solver, the solver gives the result, somebody can challenge the result, and then the smart contract becomes the judge and decides whether to award the money to the challenger or the solver. By making the smart contract into a judge that says whether the solver did it correctly or not, you incentivize good behavior from the solver, and that is how it works. So that's one verification model.
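The verification-game flow just summarized can be caricatured in a few lines. This is not TrueBit's actual protocol (which avoids full re-execution through an interactive bisection game); it only shows the deposit-and-judge incentive structure, with all names invented:

```python
def judge(task, claimed, reference_execute, solver_deposit, challenger_deposit):
    """Toy judge for a solver/challenger dispute.

    When a challenge is raised, the judge re-executes the task with a
    trusted reference (a simplification) and awards both deposits to
    whoever turns out to be right.
    """
    truth = reference_execute(task)
    if claimed == truth:
        return {"solver": solver_deposit + challenger_deposit, "challenger": 0}
    return {"solver": 0, "challenger": solver_deposit + challenger_deposit}
```

Because a wrong answer costs the solver his deposit and a frivolous challenge costs the challenger hers, honest behavior is the profitable strategy for both.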
And what you're saying is that iExec is taking the approach that there isn't going to be one winning verification model. For each different type of application, there might be multiple different verification models: for some tasks, it might be easy to verify computations just by observing the result; for some other tasks, you need a verification game like that; and for some other tasks, certain faults can be tolerated. So there isn't going to be one winning verification model; there are going to be multiple of them, and you are building your system in a modular way that allows all of these mechanisms to be implemented.
Yes. I don't know if you know Gridcoin, but Gridcoin works a little bit this way, somehow: the consensus is done using BOINC, and according to what happened on BOINC, you issue the token, the Gridcoin token. So to me, it's going to be half and half. For instance, pushing the court onto Ethereum, I think, makes sense. That's a very good way; there's no problem with that. Writing a smart contract that implements the court, that punishes people if things went wrong, that is able to relaunch computations, to know who was wrong: it's totally okay to do this in a smart contract, definitely.

So why do you need a side chain? We need a side chain because you don't want all the consensus operations to happen on Ethereum, for reasons of performance, and because not all of the consensus belongs there. For instance, imagine a situation like this: let's say I submit one thousand tasks. You could have on Ethereum the fact that you sent 1,000 tasks, and the fact that everybody agreed that the 1,000 tasks were done correctly. And then you can have on the side chain each one of these individual tasks: who computed them, when they were finished, whether they were launched again on a new machine, whether a result was wrong and the task was launched again on a new machine, and so on. You can do that on the side chain, where the consensus is not so strong, because executing a task, or executing a task again, is honestly not as important as sending money.

And you can really address the performance issue, because at the moment with Ethereum it's one transaction per action. Here you can dedicate a system that goes really fast: no proof of work, of course. On the side chain, you're not going to manage money; the token, or whatever you use inside the distributed infrastructure, is not going to have any value, so you're certainly not going to secure it the way you secure a blockchain that uses proof of work. Maybe you can use Paxos, or some kind of Byzantine fault-tolerant protocol; I don't know which one, but honestly, there's a large variety of protocols you can use to do that.
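One standard way to get the "summary on Ethereum, details on the side chain" split described here is to commit a batch of task records as a single Merkle root. This is a generic illustration, not iExec's protocol:

```python
import hashlib

def merkle_root(leaves):
    """Merkle root of a list of strings (e.g. 1,000 task records).

    Only this one hash would go on-chain; the records themselves (who
    computed what, when, re-executions...) stay on the side chain, and
    anyone holding them can later prove any single record against the root.
    """
    level = [hashlib.sha256(leaf.encode()).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

A thousand task records thus cost one on-chain write instead of a thousand, which is exactly the performance argument for keeping per-task bookkeeping off Ethereum.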
So we can think of this as a three-tiered system. One tier is the smart contract. The smart contract has the highest-level functions, which are task definition and payment.

Task definition, solution, payment. Yeah, maybe.

Then on top of it, there is a side chain, which has faster consensus, still some trustlessness, but not like that of a smart contract. It does lower-value things like scheduling, deciding what to do first and what to do next, matchmaking, deciding which machines should do which particular thing, verification of whether something was done right, etc. The side chain performs this role, but the actual computations happen on layer three on top, which is the actual machines that are computing.
Yeah, exactly. That's it.

And then basically the machines do the computation at layer three, that ripples down into the side chain, the side chain verifies the result, and from the side chain that result ripples down into the smart contract. The smart contract gets the result, and maybe it will trigger a transaction to another smart contract or something, and then other loops are triggered.

Yes, in the end I think it's going to be like this, most certainly. At the moment, we don't have this side chain, so we're working with everything either on smart contracts or on a set of distributed servers. But at least it allows us to start and to be useful now.
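The three-tier flow just summarized can be wired together in a toy end-to-end example: tier 1 holds payment, tier 3 executes, and tier 2 plays a trivial "all workers agree" consensus. Every name here is invented for illustration:

```python
def run_task(payload, workers, price, compute):
    """Toy end-to-end flow across the three tiers.

    compute(worker, payload) is the tier-3 execution step, supplied by the
    caller; in reality each worker is a separate machine.
    """
    escrow = {"locked": price, "paid_to": None}          # tier 1: payment held
    results = {w: compute(w, payload) for w in workers}  # tier 3: execution
    first = next(iter(results.values()))
    if all(r == first for r in results.values()):        # tier 2: naive consensus
        escrow["paid_to"], escrow["locked"] = workers[0], 0  # settle on tier 1
        return first, escrow
    return None, escrow  # disagreement: funds stay locked, task re-scheduled
```

Payment only settles once the middle tier has agreed on the result, which is the "ripples down" sequence the hosts describe.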
I think you mentioned earlier, I don't know if it was before the show, that there'll be a token. Could you explain what the token will serve for? And maybe before we wrap up, we can talk about how you plan on releasing this token.
Yeah, so this token is the one that is going to allow your smart contract to do off-chain computation. This token will circulate between dapps, application providers, server providers, and data providers. That's really important, because it's really the token that is going to fuel this network. So we're going to release this token on April 12 using a crowdsale. You can participate in this crowdsale using Ether or Bitcoin. It's managed by a smart contract, of course. And there are 60 million tokens on sale at the base price, so 5,000 tokens per Bitcoin.
So how are you going to do this crowdsale? Tell us about the business model of iExec: is iExec a company, a foundation, a research body, and who's going to be managing the crowdsale? I'm asking because there have been a lot of crowdsales going on recently, and some of them have turned out to be fraudulent or, for lack of a better word, a scam. You can kind of tell when a project is not really serious, or when the founders are making dubious claims, and you can sort of filter out which projects are good or not based on their structure. We were talking about this earlier with regard to specific projects being funded right now. Reputation is very important, and you obviously have a reputation within your field, a reputation you want to preserve. So convince us, convince our listeners, and this shouldn't be taken as any type of investment advice, but convince our listeners as to how you're building this crowdsale and how they can trust that you will deliver on what you're laying out here as something you want to build.
Yeah, so iExec is a company at the moment. We are incorporated in France, and we are supported by our research institute: we are an official spinoff of Inria. We are also with the Chinese Academy of Sciences; at the moment we are incubated in Beijing by the Tsinghua University incubator. So it's something totally official. It's true that in France there have not been many projects like this starting, so I guess we are among the first, at least with this visibility. Of course, I can't hide myself; Google knows everything about me. So this reputation is really important for us.
In terms of funding, of course, it's a significant amount of money, but it's really important because we can't start a project like this without building the market first. We have to build this market; we have to issue the token first, before we can do anything with the technology. If we didn't do this token... I mean, I work at Inria, I often meet VCs; this kind of innovation happens very often. In my team, it's the third startup in less than five years, something like that, so it's really common. There is one that was created yesterday, I think, or two days ago, about security. So we could have gotten the money from VCs, I would say.
How are you going to use the money? And will the funds go to iExec, or are they being held by a foundation? Are you doing that model, the foundation model, where the foundation does the crowdsale and contracts the company?
No. That's very important for us: we want to be a company, because we really do things. We have this infrastructure view of things, and because it's infrastructure, it's not a protocol like Ethereum, for instance. We really want to deal with machines, the owners of the machines, the editors of applications, etc. So we really want to have this enterprise-to-enterprise relationship, and if we did a foundation, I have the feeling that we would not be going the right way to establish that kind of relationship. It's a market network, so that's really important.
And this token sale is really important, because without it, for me it's just impossible to imagine that we could build a real market network. This token sale is a very important step: first, because it gives us a lot of visibility; it would be much, much more difficult to have such visibility without issuing a token. And it forces us to go straight into this business. I come from academia, so I'm not a businessman, and if I didn't jump right into the business, it could take months before I really understood things. Here I have no choice: we are establishing this business at the moment of the crowdsale. That's really important.
As for the usage of the money: it's basically for salaries, let's be clear about that. In the token distribution, we are also keeping a part for bounties, and this is one aspect I would like to mention. For a system like Ethereum, for instance, there was proof of work, which meant you had a way to enter the network, to learn about Ethereum and its technology, by mining. This is actually what happened to me: I had a couple of GPUs, and I started to learn about Ethereum just by making them work, gaining some tokens, writing some smart contracts, et cetera. Unfortunately for us, our token supply is fixed. We are issuing at most 87 million tokens, in the case where the crowdsale sells out, and there will be no further token issuance. That's a difficulty, somehow, because we have to grow the network. So we kept a number of tokens for bounties, and those tokens will later be distributed to developers, to resource providers, and so on, to help people grow this network. This is a bet we are making, and we hope it's going to be successful.
And what will the business model of iExec be once you've built this technology?

Yeah, it's really important to mention this. Of course, it's a distributed market network, so for tokens there's no fee: tokens go from consumers to providers, and we don't take any money for that. So that's a real question: how are we going to make money out of it? My claim is that, okay, the blockchain part is really innovative, a breakthrough technology, and the ICO is something totally new. But for everything else, okay, it's a different way of doing cloud computing, but the business for a company is not that different from Docker, for instance. Docker has good tools and a good way of shipping applications to infrastructure. We're going to be like Docker: we're going to have the tools, the software, the documentation, and the support to ship applications to this new kind of infrastructure, even though we are not managing the infrastructure. But Docker also does not manage the infrastructure, I guess. So we're certainly going to have some sort of premium features, maybe based on the infrastructure, maybe based on quality of service or trust. Or like GitHub: on GitHub, things are public, but if you want a private account, you have to pay for it. This kind of freemium feature. So this part, the way we're going to make money, is going to be very classic. But if my assumption is correct, there are going to be a lot of dapps using our technology, using iExec, using machines provided by everyone, and all of them will need very good enterprise features, enterprise support, this kind of thing. So honestly, it's not going to be exotic; it's going to be very classic.

All right. Well, we're at the end of the show. Thank you so much, Gilles, for coming on. We've been wanting to have you on for a while. Gilles and I were both in France, so we crossed paths once in a while in Paris, and finally we were able to get you on. I'm really happy that we were able to connect and finally record the podcast. So the crowdsale is, I guess, starting the day after this episode is released. Where can people go to learn more about that, how do people get involved in iExec, and how can people find you?
Yeah, so go to our website: it's iex.ec. The website for the crowdsale is crowdsale.iex.ec.
Join us on Slack; I'm available. You can go on our Slack and ask me questions; usually I answer. The next couple of days are going to be pretty intense, so I might not be as responsive as usual. I also often go to conferences, and I like that a lot: meet me at a conference, don't hesitate, come to me, ask me questions. I'll be in Amsterdam for a Bitcoin Wednesday talk, and later at a blockchain conference in Berlin in June. So don't hesitate to reach us by email. And yes, we're accepting Ether and Bitcoin, and hopefully we'll succeed in our crowdsale.
Thank you very much for the invitation. It was a really great pleasure. Thanks for giving me this opportunity to introduce iExec; I hope it was clear enough and that you understood what iExec is about.

You're welcome. And of course, anybody who's interested in participating in the crowdsale should do their own due diligence. We're not encouraging anything; although we think it's a very interesting project, you should always do due diligence when investing any money in a crowdsale.
Yeah, we are not investment advisors.

That's right. And usually we ourselves study the project for only two or three hours and interview our guest, so we are not the experts here.
So once again, thank you, Gilles, for joining us, and thank you to our listeners for tuning in. We are part of the Let's Talk Bitcoin network. You can find this show and lots of other great shows at letstalkbitcoin.com. Of course, if you're interested in supporting the show, there are lots of different ways you can do that. You can follow us on YouTube, on Twitter, on Facebook, just about everywhere we are. You can also leave a review on iTunes, subscribe to the show, or leave us a tip; the tipping addresses in Bitcoin and Ether will be in the show description. We look forward to being back next week. Thank you.
