Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Amrit Kumar & Dr. Ilya Sergey: Scilla – A Formal Verification Oriented Contract Language
Episode Date: June 6, 2018. With the rise of smart contract technology, we've become acutely aware of the need for smart contract code to accurately reflect the intentions of its author, and for the code to have certain (safe) behaviors in all circumstances. Creating the languages and software tools to enable ordinary developers to write safe contracts has become an intense research endeavor in the cryptocurrency space. Scilla is a Turing-incomplete, intermediate-level language, inspired by the paradigms of functional programming and formal verification, that makes it easy for smart contract developers to automatically prove statements about smart contract behavior. For example, Scilla could allow a future multi-signature smart contract author to mathematically prove that funds in that contract would always be retrievable by certain addresses (and never get stuck, as in the Parity incident). The ability to mathematically prove such safety properties of a smart contract has the potential to be an enabling invention for wide-scale use of this technology. In this episode, we are joined by Dr. Amrit Kumar and Dr. Ilya Sergey to discuss Scilla, the smart contract language of the upcoming Zilliqa blockchain. In a previous episode, we covered the vision and technical approach of Zilliqa to solve the transaction scalability problem of permissionless blockchains. This episode focuses specifically on their smart contract language development efforts. Topics covered in this episode: Update on Zilliqa's progress since our last episode; The technology of mechanised proofs; Dr. Ilya Sergey's effort to mechanically prove safety properties of a blockchain consensus network; Aims of the Scilla language; Future capabilities enabled by the Scilla language; Developer experience and perspective using formal verification tools; How Scilla compares to Michelson, Tezos' approach to smart contract languages with a similar end goal; Current state of development of Scilla, and next milestones. Episode links: Our previous episode on the Zilliqa platform; Ilya Sergey's paper on mechanising blockchain consensus; Scilla whitepaper; Michelson, the Tezos platform's smart contract language; Zilliqa blog for updates on platform development; Coq, a formal proof management system. This episode is hosted by Meher Roy and Sunny Aggarwal. Show notes and listening options: epicenter.tv/238
Transcript
This episode of Epicenter is brought to you by Gnosis, an open platform for businesses to create their own prediction market applications on top of the Ethereum network.
They recently launched Gnosis X, a challenge inviting developers to build apps on top of the Gnosis platform.
To learn more, go to epicenter.tv/gnosisx.
Hi, welcome to Epicenter, the show which talks about the technologies, projects, and people driving the decentralized technology and cryptocurrency revolution.
My name is Meher Roy.
And my name is Sunny Aggarwal.
And today we have with us two guests from the Zilliqa project.
With us today are Amrit Kumar, who is the co-founder and head of research at Zilliqa,
and Ilya Sergey, an assistant professor at University College London.
So let's go ahead and start off with some backgrounds.
Amrit, can you tell us a little bit
about how you got involved with the blockchain space?
Before everything, you know, hello to everyone.
And it's great to be back here.
So, yeah, I'm working with Zilliqa right now.
And before that, I did a PhD in computer security and privacy at Inria.
And then I came over to Singapore, to the National University of Singapore,
where I did a postdoc for around a year,
And where I started to explore blockchains,
especially from the privacy perspective.
So I was interested in Monero,
which was one of the privacy-friendly and privacy-enhancing cryptocurrencies.
and we tried to analyze how much privacy it gives in practice.
So this is how I got introduced into blockchain,
and then one day my advisor at NUS asked me
if I was interested in building a scalable blockchain platform called Zilliqa.
And this is how I jumped into Zilliqa.
Great. And Ilya, how about you?
How did you get involved with the space?
All right. First of all, great pleasure to be here.
It's my first time, and I'm about to enjoy it here.
All right, so my background is in formal methods and programming language theories.
So I did my PhD at the Catholic University of Leuven, that's in Belgium, and then I was a postdoc at the IMDEA Software Institute,
and then I took this position as an assistant professor or lecturer, as they call it, at University College London.
So during most of my research career, I was looking at the properties of programs that, as we all know, can be buggy as hell and can go wrong.
So I was looking for ways to prove programs behaving correctly, being secure, and obeying certain properties.
Applications of blockchain technology, specifically around 2015, when Ethereum started to emerge,
seemed like a super exciting area where lots of research in PL theory and formal methods can be applied.
So since then, I started to look at this thinking about how can we possibly write safe and secure applications that run on
top of this distributed architecture, what are the properties that we are interested in, and
finally, what are the right programming language mechanisms to express those sorts of computations
and those sorts of properties. And this is how we ended up collaborating with Zilliqa, which I'm currently advising, and we are jointly designing and building Scilla as a programming language on top of the Zilliqa platform.
Awesome, cool. So, Amrit, you know, you guys were
on Epicenter in the past, I think November of last year. That last time, you told us a lot about the basics of the protocol, and I think this was when you guys were first getting started, or when you first went public, maybe. Could you tell us a little bit more about the progress and updates that have come to Zilliqa since the last time you were on?
Yeah, sure. So at that point of time we didn't have Scilla, of course. We were still working on designing and shaping the idea of
how we want our language to be. So we have progressed in two different directions. One is
of course on the Scilla side, where we, you know, released our white paper that explains the design rationale for Scilla in January this year. And in parallel we were working on implementing the Zilliqa core protocol. So we are at a stage where
I mean, our first test net has been released.
That test net kind of implements a Bitcoin-like infrastructure where you can do payments.
We have a block explorer and a web wallet that allows people to interact with the backend
blockchain infrastructure.
You cannot join the network for now, but we are going to open it up very soon, so you can become a miner.
We still have a lot of things to do.
For instance, we have to put in gas mechanics and an incentive layer; that's not there yet, but we are actively working on it.
In parallel, we are working on Scilla together with Ilya,
where we are working on designing and implementing
the interpreter and the language spec,
where we'll see what kind of primitives the language
will provide and support.
And recently we did a demo a couple of days ago
where we demonstrated the first contract that can run on Zilliqa,
that was a crowdfunding contract.
So yeah, things are moving pretty well and fast, but there are lots and lots of things to do.
Very cool. When I actually first read your paper, I was super excited, because a lot of people use the words proof of work and Nakamoto consensus interchangeably, and I always used to bring up to people that technically you could do proof-of-work BFT; I'd just never seen anyone do it. And it was really cool to see that you guys were actually trying to do proof-of-work BFT.
So in your current test net, have you implemented the sharding mechanism and stuff?
Is that like live?
Yeah, so we have sharding, which is live.
So in the back end, when you send transactions, those transactions get sharded.
The network is sharded.
So it's not one big network, as in Bitcoin or Ethereum; it's sharded.
So we have shards built up, and those shards each process only a subset of transactions.
So yes, sharding is live.
That's super cool.
I mean, I personally look forward to joining the test net when you go live with it.
So this episode is not focused on the sharding mechanism of Zilliqa,
but rather on the smart contract language, which is Scilla.
And like, Ilya, you've worked for a long time on programming language theory and software verification.
And like scrolling through your past papers, I see lots of work with this idea.
of mechanized proofs.
And mechanized proofs will of course play a role later on
in the conversation when we discuss Scilla.
But before we start on Scilla, could you give us an idea of what a mechanized proof is?
All right, thanks for the question here.
So before I actually start talking about mechanized proofs,
let me raise the question of what a proof is.
And this is rather philosophical.
So it turns out that when we talk about proving something,
We usually have a certain statement in mind,
like a sum of two even numbers is even, for example.
That already implies that we have a theory of natural numbers
in our head, and we know what an even number means,
and we know how the sum is defined.
So all these together are the components
about which we can make statements.
And then there is a notion of logic in which we make these statements.
Well, most humans have a certain sense of what is true and what is not true.
But when we actually try to establish
that certain statement is true, i.e., it is correct,
we start invoking the rules of the logic.
For example, we know that if "A and B" is correct, then both A and B are correct separately.
So this is like a simple rule that we all have in our heads.
Or we know that if A implies B, and we know A, then B holds too.
So all these rules are something that mathematicians started to formalize and spell out in a mathematical system around the end of the 19th century, and this is how the basics of logic have been laid out by many famous scientists, such as Gödel, such as Church, and many others.
And then there came the notion of constructive proof. So what are the statements that we can possibly prove? Well, this largely depends on what rules we use and what the axioms are.
So most of us are familiar with the axioms of planar geometry, such as that two parallel lines never intersect.
So this is something we take for granted.
But there are other things that we don't take for granted: we derive them from these initial assumptions, and we apply those rules.
Okay, so now what is the true statement?
Well, the true statement actually largely depends on your initial assumptions and on the rules that you use to prove this statement.
Different logics come with different sets of rules.
And most of us have a certain version of constructive logic in our heads: we know how to combine facts together so we can derive more composite statements, and mathematicians make this precise.
So if we can prove certain statement in a logic of choice,
then this statement is considered to be true.
And if we cannot prove this statement,
it's considered to be false.
So the notion of falsehood is actually
the whole class of statements that cannot be proved.
So now approaching the idea of mechanized proofs.
Obviously, most of us have heard of proofs that are wrong.
A famous example is Fermat's Last Theorem, which stood open for about 300 years and is only believed to have been finally proven by Andrew Wiles in the 1990s.
So why were all the previous attempts unsuccessful?
Well, in the sequences of rules that mathematicians applied to prove the ultimate statement about numbers in Fermat's theorem, there were some rules which were applied wrongly, some rules which were applied out of place.
For example, let me go back to the example I gave before: the sum of two even numbers is even. This is something that we sort of believe, because we know how numbers work.
What if I tell you that the sum of any two numbers is even? Well, you know that it's false, and you know it from experience, but if I ask you to prove it, you will probably be invoking the definition of natural numbers, the definition of addition, the notion of evenness, the notion of oddness, and at some point, when you try to combine the facts, you'll see that you have no rule to deliver this final statement.
So this means the proof isn't consistent; the proof is wrong.
And obviously, when we write such proofs on paper, the proofs might be about any arbitrary mathematical object. So I use natural numbers
as examples, but those might be programs
or those might be distributed protocols.
So it is often the case that the proofs contain
wrongly applied rules.
And this is where the idea of mechanized proofs comes to help.
So if we can write a proof as a sequence, or rather a tree, of applications of rules, building the notion up from the axioms by applying more and more rules, maybe reusing libraries of existing proofs the way programmers use software libraries, then we can delegate the task of checking the validity of such a proof to a computer.
And this is what the software known as proof assistants, or mechanized theorem provers, is doing.
They usually do not prove statements automatically.
That can be done, but usually only for very simple statements, or statements within certain theories that are known to have a decidable fragment.
But the world is very complex.
And most of the theorems, they actually require
a human in the loop to construct these proofs.
And this is why we still have mathematicians.
And they're not out of a job, because humans
are required to construct those proofs.
But quite frankly, humans are not required
to check the proofs.
This is something that the computer can do.
And if we give the computer the formal system, such as the system of natural numbers, which is where our objects live, we give a statement about certain objects from the system, and we give the proof of this statement, then the software can put all these three things together and check that every single rule in this derivation of the proof has been applied correctly.
And if it says yes, every single rule is applied correctly, and your axioms are good and consistent, then you have a mechanically checked proof of the statement that you have here.
The question is, what if your statement is not good enough? But that's a separate question.
A mechanized proof is this idea that the computer can actually check a certain derivation, and that this can be done automatically.
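As an illustration (our own, not from the episode), the "sum of two even numbers is even" statement discussed above can be stated and mechanically checked in a proof assistant. Here is a sketch in Lean 4; the names `IsEven` and `even_add_even` are invented for the example:

```lean
-- Evenness, defined from scratch: n is even if it is 2 * k for some k.
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- The statement "the sum of two even numbers is even", with a proof
-- the computer checks rule by rule.
theorem even_add_even {m n : Nat}
    (hm : IsEven m) (hn : IsEven n) : IsEven (m + n) := by
  obtain ⟨a, ha⟩ := hm          -- unpack the witness: m = 2 * a
  obtain ⟨b, hb⟩ := hn          -- unpack the witness: n = 2 * b
  exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩  -- m + n = 2 * (a + b)
```

If a rule were applied out of place, for instance if we tried to claim the sum of *any* two numbers is even, the checker would reject the derivation.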
So that's really fascinating. Essentially, the way I see it is: any programmer, at some level, has data and functions, and then they use functions to build other functions, and there is a hierarchy of small functionality that is used to compose something big.
And math is similar, because some complex statement like Fermat's theorem could depend on simpler proofs of other, simpler things, which depend on proofs of yet simpler things, which then depend on the axioms that we take for the logical system we are considering.
So in a sense, the way I understand a mechanized proof is: I could express how I'm proving Fermat's Last Theorem as this cascading set of smaller proof statements, give it to a computer, and then the computer will verify the whole derivation of the proof and give high assurance that the proof is correct.
This is very much a correct vision. Thank you. In fact, this analogy that you have just distilled, between the libraries of functions that programmers use and the textbooks of theorems that mathematicians refer to when they prove new theorems, actually has a name.
And this is called the Curry-Howard correspondence, or the Curry-Howard isomorphism.
So Haskell Curry and William Howard are two very famous mathematicians who lived in the 20th century.
And they noticed this very elegant correspondence between the models of certain classes of programming languages, such as functional programming languages (these are the models that now empower languages such as OCaml, Haskell, and some others), and the idea of mathematical proof in constructive logic.
So it turns out that the statements mathematicians write are very similar to the types that people give to programs written in functional languages.
So for example, if you have a function that takes an integer and returns a string,
you can think of it as an implication that takes something that is an integer and gives you something that is a string.
So even the symbol is similar: it's an arrow.
And that's why this analogy is so powerful, and this is why it has been very much used in the implementation of proof assistants, which now sort of combine proving and programming.
So writing a theorem is the very same as writing a type annotation for your function, and proving the theorem is very much similar to implementing this function.
It's like programming, but your types come before your implementations; in ordinary programming, it's usually the other way around: you first write the implementation and then you think of the types.
And in fact, it shouldn't come as a big surprise that programmers, especially those who program in languages such as Haskell and OCaml, which have very, very powerful type systems, already do some sort of mechanized theorem proving, even though they might not realize it.
So when they make sure that the program type-checks against the types they give, the compiler tries to construct the proof from the program and the types, because the theory there is very simple.
So you don't need to have a human in the loop to make this additional third object, which is the proof.
The compiler does that, and then the compiler itself checks it.
So the process of type checking in languages like Haskell or OCaml incorporates, to some extent, a very simple version of theorem proving.
And the statement of the theorem is that the program satisfies this type.
And the fact that the program satisfies this type means that the program has certain properties and it doesn't go wrong.
So that's the famous statement: well-typed programs don't go wrong.
If you can assign a type to the program, it doesn't crash.
So here we already have industrial-scale theorem proving.
Well, type systems only give you very simple properties, such as the basic shape of your computations. But in order to prove some more complex properties, such as safety and temporal properties, something that we'll probably discuss later, for that you need to have a human in the loop. But yes, to elaborate on what you've said:
theorems and types are actually two sides of the same coin, and proofs are to theorems as programs are to types. So that's Curry-Howard.
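To make the correspondence concrete, here is a small illustrative sketch in Lean 4 (our own example, not from the episode): a type annotation reads as a theorem statement, and the function body is its proof.

```lean
-- Proposition as type: the implication A → (B → A) is "proved"
-- simply by writing a program of that type.
theorem a_of_b_of_a {A B : Prop} : A → (B → A) :=
  fun (ha : A) (_ : B) => ha

-- The same arrow in ordinary programming: a function Int → String
-- is, under Curry-Howard, an "implication" from Int to String,
-- and implementing the function is exhibiting a proof of it.
def describe : Int → String :=
  fun n => "the number " ++ toString n
```

The type checker plays the role of the proof checker: if the body does not match the declared type, the "theorem" is rejected.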
Of course, I'd like to actually take an example in order to sort of drive home this point.
And I mean, I'd like to take an example that sort of derives, maybe not derives, but is related to one of the papers you published,
which is mechanizing blockchain consensus.
Essentially in this paper, as I understand it, you've implemented a distributed consensus algorithm and the blockchain
data structure, and you prove that certain properties of this blockchain network will always be met.
So I'll try to take a simpler example. When, for example, I'm a blockchain node, what I'm essentially doing is hearing of transactions coming into the node, and these transactions are collected in a mempool. Then, periodically, the software that is running on my node will compile some of these transactions into a block.
And it will try to add that block to the blockchain.
So there is some piece of software that is taking raw transactions
and then compiling them into a block.
And then this block has to be valid.
So we might imagine that we want a simple rule.
The simple rule is when we are doing this process of taking transactions
and compiling them into a block,
we don't want an older transaction, which has already been confirmed on the network, to be included in this block.
The transactions ought to be unique.
That is a property we desire out of the node software.
And so our node has a previous list of transactions, and it sees new transactions, which it will now compile.
But we, as users of the software, want to prove that this node running the software will never create a block which repeats an older transaction.
And the way I understand it is, you can prove that a particular implementation that you are using will never make such a mistake, using mechanized proofs.
Is that right?
Yes, this is very correct.
So let me just recast this development in the light of the previous analogy.
So what we did: we actually defined a new set of objects that represent the implementation of a very, very simplistic blockchain consensus protocol,
and the objects are essentially the execution instructions that each node can invoke locally.
So this is what we can write statements about.
So the statement as you phrased it, we actually didn't prove, because that was sort of embedded by design.
As I said, the model was simplified in multiple possible ways,
and because we were focusing on slightly more global properties.
But the statement would be,
here are the instructions that the node can execute.
Assuming that it has the freedom to execute them in any order, so the whole thing just doesn't crash, would it be an invariant that, at any point of time, the node doesn't propose a stale transaction, where a stale transaction is one that has already been proposed?
So this is what's called a safety property or an invariant.
And this is something that we can prove, and the proof will involve case analysis and
introspection on the way the semantics for these instructions is defined.
So again, your objects are these instructions and semantics, your statement is this invariant,
and the proof is something that you can write out of the semantics.
But you see, here's an interesting point.
So what we had to do is to build a whole new small theory of blockchain protocol implementations.
That's not a given; that's not something that we can just take from an existing implementation.
So what we needed to do is to embed it into the general mathematical framework
that allows to build these proofs and check them mechanically.
But yes, this is possible.
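The stale-transaction safety invariant discussed in this exchange can be sketched as a toy model in Python. This is purely illustrative, not Zilliqa's node logic or a verified artifact; all names here (`Node`, `propose_block`, and so on) are invented for the example:

```python
class Node:
    """Toy blockchain node: a mempool plus a set of confirmed transactions."""

    def __init__(self):
        self.confirmed = set()  # transactions already included in a block
        self.mempool = []       # pending transactions, in arrival order

    def receive(self, tx):
        # Ignore transactions we have already confirmed or queued.
        if tx not in self.confirmed and tx not in self.mempool:
            self.mempool.append(tx)

    def propose_block(self, max_txs=2):
        # Safety invariant by construction: never include a stale
        # (already-confirmed) transaction in a new block.
        block = [tx for tx in self.mempool if tx not in self.confirmed][:max_txs]
        self.confirmed.update(block)
        self.mempool = [tx for tx in self.mempool if tx not in block]
        return block


node = Node()
for tx in ["t1", "t2", "t1", "t3"]:   # "t1" arrives twice
    node.receive(tx)
b1 = node.propose_block()             # ["t1", "t2"]
node.receive("t1")                    # replayed after confirmation: dropped
b2 = node.propose_block()             # ["t3"]
assert set(b1).isdisjoint(b2)         # no transaction appears in two blocks
```

A mechanized proof would establish this disjointness for every possible execution; the assertion above only checks a single run.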
And in fact, we focused on somewhat more global properties.
So what we proved is an eventual consistency in the form of consensus,
meaning that if you are a local node and you interact with other nodes,
then if at some point there is no communication between you and the other nodes, then you know that all of you share the same knowledge of the ledger; in other words, you have reached consensus.
How did we prove it? Well, we took the rules of the system and formulated the property that implies this agreement notion. So that's the answer.
So are you guys planning on formally verifying
your actual blockchain implementation.
So like, I know that the smart contracting system that you guys are creating is designed to be formally verified. But, similar to how Tezos is writing their entire code base in OCaml so that they can formally verify the code base itself:
Is this something you guys are looking into doing in the future or even right now?
Okay, so let me answer some part of this question.
To verify the entire infrastructure is a very, very ambitious project.
Just to give you an estimate: projects of this scale, actually of a smaller scale, such as mechanically defining and verifying a small OS kernel or a small compiler for a subset of C, are something that humanity has accomplished, and that took literally tens of man-years, because nothing is taken for granted.
You need to build the whole system from the ground up in this framework, define the semantics of every single primitive, every single bit-vector operation and whatnot, then prove properties of those, and then compositionally build the proofs of the entire system.
So we are not sure if we are planning
to undertake such an effort right now.
And furthermore, the fact that something is written in OCaml doesn't make it immediately verifiable.
It makes it easier to translate into the language of a theorem prover.
So that's a big help, but it's not like OCaml lends you some foundational verification results.
So what we will probably start from is extending this work that Meher mentioned, which we have done on mechanized blockchain consensus, and formulating an abstract model of the Zilliqa protocol in these terms.
So we could at least be sure that, foundationally, the protocol has no flaws in it.
That doesn't mean the implementation has no bugs; that's a whole different level of complexity.
And typically it's done in several steps. So at the top level you have your abstract
specification, abstract model, and you prove some general
properties thereof, and then you show that your implementation, or at
least some parts of it, refine parts of the abstract
specification. And by means of proving this refinement, you
show that actually the same properties, they are preserved. But this
step is the complex part. So proving refinement is complex, and we might be able to do it in the long run for some parts of the protocol, but not for all of it. And that's a big thing.
So, as I said, nothing is taken for granted: all the crypto primitives, all the OS and system interaction primitives. If you want to have ultimate guarantees, this is all something that needs to be embedded into the proof assistant and established correct.
And proving cryptographic implementations correct, that's a huge endeavor by itself, and several labs, at Inria, MIT, and elsewhere, are currently taking on just verifying the libraries that implement SHA-256 and similar ones.
So it would be good in the ideal world
if we could compose all these verification results together,
but as always, there is some friction on the boundary.
We don't all speak the same interfaces,
and it's a big goal.
So to answer the question, I think what is feasible in the medium run is to verify the properties of a model of Zilliqa, and this is what we will probably be aiming for.
But Amrit might be able to elaborate on future plans.
I'll just add one point to it. Last time, I talked about Zilliqa.
So we have a few things, at an abstract level, that we're using, right?
One is proof of work.
We're using PBFT, and then we're using something called collective signing,
which is essentially if you have N people signing a message,
you could reduce that to just one signature.
So the signature size could be reduced from, you know, linear to constant size.
So you could put those pieces together.
I mean, you could either prove properties about those pieces individually,
and then you could try to combine at an abstract level,
at a very abstract level.
But, you know, as Ilya said, you know,
proving properties about the implementation is another thing.
So, yeah, we'll rather go with, you know, with baby steps
and see how far we can go.
This episode of Epicenter is brought to you by Gnosis.
Gnosis is an open platform for businesses
to create their own prediction markets on the Ethereum network.
Prediction markets are powerful tools for aggregating information
about the expected outcome of future events.
So this can be used for things like information gathering,
incentivizing behaviors, making governance decisions,
or even creating insurance products.
So in order to turn Gnosis into the most powerful forecasting tool in the world,
they recently launched Gnosis X.
It's a challenge that invites developers to build applications on top of the platform.
And the best applications per category
will be rewarded up to $100,000 in GNO tokens.
So throughout the year, Gnosis will announce different categories for the challenge,
and the current challenge has categories for science and R&D, token diligence, and blockchain
project integration.
Gnosis also provides the SDK, which allows you to easily get started with everything
you need to get coding.
And they also provide dedicated support channels throughout the challenge for teams and solo builders.
Are you up for the challenge?
Get started now.
To learn more and to sign up, go to epicenter.tv/gnosisx.
We'd like to thank Gnosis for their support of Epicenter.
The connection to smart contracts here is different.
So when you have an implementation of a blockchain node, that is extremely complex, and proving properties of the implementation is maybe tens of man-years of work.
But on the other side, for smart contracts, the programs might be simple enough to be amenable to proofs, and the proofs might be useful. So for example, the
instance I was thinking of was there are many people that have these days crypto wealth,
lots of crypto wealth. And presumably they might want to hand this crypto wealth down to the
next generation in the form of a will. And the will can take the form of a smart contract.
So there's a smart contract.
I put my Zilliqa tokens or Ether tokens in it.
And then, when I die, somebody puts in a message to that smart contract.
So while I'm alive, the smart contract allows my address to withdraw the tokens from it.
But when somebody puts in a message that I'm dead, that I'm no more, then a different address, controlled by, let's say, my child, is then able to extract the Ether or Zilliqa tokens out of the smart contract.
So a small program like that, a smart contract can be really useful.
And the place where mechanized proofs perhaps enter this space is: if I'm going to commit lots of tokens to this smart contract, well, I want to know that all of this money will never get stuck in the smart contract, that either me or my children will always be able to extract it, right?
And is it correct that you're focusing on mechanized proofs in the smart contract language with these sorts of applications in mind?
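The "digital will" contract Meher describes could be modeled, very roughly, as the following Python sketch. It is not Scilla code, and every name in it (`WillContract`, `report_death`, the oracle) is a hypothetical stand-in:

```python
class WillContract:
    """Toy model: the owner can withdraw while alive; the heir can
    withdraw only after a trusted oracle reports the owner's death."""

    def __init__(self, owner, heir, oracle, funds):
        self.owner, self.heir, self.oracle = owner, heir, oracle
        self.funds = funds
        self.deceased = False

    def report_death(self, sender):
        # Only the designated oracle may flip the flag.
        if sender == self.oracle:
            self.deceased = True

    def withdraw(self, sender):
        # Exactly one party is entitled in any state, so funds
        # are always retrievable by someone.
        entitled = self.heir if self.deceased else self.owner
        if sender == entitled and self.funds > 0:
            amount, self.funds = self.funds, 0
            return amount
        return 0


w = WillContract(owner="alice", heir="bob", oracle="notary", funds=100)
assert w.withdraw("bob") == 0      # heir cannot withdraw while owner lives
assert w.withdraw("alice") == 100  # owner always can

w2 = WillContract(owner="alice", heir="bob", oracle="notary", funds=100)
w2.report_death("notary")
assert w2.withdraw("alice") == 0   # after death, only the heir
assert w2.withdraw("bob") == 100
```

The safety property one would want to prove mechanically is that in every reachable state, either the owner or the heir can retrieve the funds, so they can never get permanently stuck.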
Yes, so let me answer that.
Yes, this is a fantastic summary of what we are trying to do.
So the properties of smart contracts which we are mostly interested in are these large-scale safety and liveness properties.
What you've given as an example is a perfect case of a liveness property, which informally means that something good eventually happens.
So "something good" in this context means that a person's successors, children or grandchildren, will be able to cash the check.
But that might not take place right now; certain conditions need to hold for this event to occur.
And you named these conditions: somebody needs to send a signed message that a person has passed away, and then the successor should contact the contract with another signed message, and whatnot.
So that is a great example of specifying a temporal property. What you just said in plain English can be written in terms of, let's say, a lifetime execution property of the smart contract. So if we have a smart contract and a way to make statements about how it behaves over its lifetime (what changes does it make under certain conditions, where does it transfer money, and under which conditions does that happen), then we should be able to write these proofs.
And this is what we tried to build into the design of Scilla.
So that's the language whose semantics give you this notion of lifetime executions of the
contract.
So you can make statements about them.
And you can phrase the statements in the context of a proof assistant that already comes with all these foundations of still a contract institution built in.
So actually knows the language that you're speaking.
And then you can use the machinery to write the proofs and the software will check that these proofs are great.
In fact, a property very similar to the one you mentioned we have proven for a model smart contract in Coq, which is the software for checking proofs that we'll talk about later. That's the property of a Kickstarter contract, which says that under certain conditions, if you have backed the campaign, the campaign fails because the goal has not been reached and the deadline has passed, and you happen to be a backer, then in the future you'll be able to withdraw your money and do it exactly once. So this is the kind of liveness property that you want to establish, and our language design helps you to do so.
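The liveness property just described can be sketched as a toy Python model (the contract shape and names are illustrative, not the actual Coq development): if the deadline has passed, the goal was not reached, and you backed the campaign, you can withdraw your contribution exactly once.

```python
class Crowdfunding:
    """Toy Kickstarter-style contract; block numbers stand in for time."""

    def __init__(self, goal, deadline):
        self.goal = goal
        self.deadline = deadline
        self.backers = {}  # backer address -> contribution

    def donate(self, sender, amount, now):
        # accept one donation per backer, only before the deadline
        if now <= self.deadline and sender not in self.backers:
            self.backers[sender] = amount

    def claim_back(self, sender, now):
        # refund only if the campaign failed and the sender backed it
        failed = now > self.deadline and sum(self.backers.values()) < self.goal
        if failed and sender in self.backers:
            return self.backers.pop(sender)  # pop() makes refunds one-shot
        raise RuntimeError("refund conditions not met")
```

Proving the liveness property then means showing that `claim_back` succeeds for every recorded backer in every reachable "failed" state, and that a second call for the same backer always fails.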
Cool. So could you tell us a little bit more about the broader design goals of Scilla? One, you know, was easy translation to Coq. Another one I've noticed was a very big focus on preventing reentrancy bugs. What are some of the other major design considerations that went into this?
Okay, so let me make a first step and then Amrit can add more. From a general perspective, I think it's very important to have a language that allows formal certificates to be published and made available to people. It's often the case in systems like Ethereum that the contract is compiled to bytecode and advertised as something that delivers a certain service, but essentially the clients interact with it at their own risk. So we want to reduce this risk by giving the possibility to put formal statements and specifications of what contracts can and cannot do, along with their proofs or some digest of the proofs, onto the blockchain. This is a concept that has been studied, but not in the context of blockchain. It was very popular in the late 90s. It was called proof-carrying code: a certain piece of code comes with a certain specification and the proof of that specification. So now we want to try to do it in practice.
And the big challenge here is actually to come up with useful specifications. This is something that cannot be done automatically. The contract might be flawed, but so might be the specification, so that it doesn't expose the flaw. So the big challenge will actually be to provide the right logical abstractions to make statements about the contracts. But so far, we are building the language that even makes it possible to specify properties like Meher suggested. So that is my part of the vision, and Amrit might be able to complement it with additional details.
Yeah, so when we started out, we felt that we were building a new blockchain platform, and we wanted the platform to have smart contract features. But then we felt that the way smart contracts exist today, they have certain issues, and you need a way to improve upon them. One way was to structure the language so that you can eliminate certain issues directly at the language level, and also to make the language proof-friendly to some extent, so it becomes easier for you to write, let's say, Coq proofs, or to reason about the correctness, safety, or liveness properties of the contract. And that's why we felt that there's a need to develop a new language, and this is how we started Scilla.
I see. And so you mentioned that one of the goals is to provide a platform for developers to submit formal proofs along with their code. So a couple of questions here. One is: does that mean that the code that goes on chain is Scilla code? In the paper it's presented as an intermediate language. So is the idea that it's this intermediate language that will be pushed to the blockchain, or is it a lower-level language?
Okay, so let me take that. We were discussing this particular design choice quite a lot, and right now the conclusion is that we want to put the intermediate-level language on the blockchain, because we value tractability, readability, and the possibility of formal verification more than whatever benefits the low-level representation brings. I think there is a certain bias in the community, since Ethereum was extremely successful as an implementation of a smart contract platform, that a very low-level representation should go there, because that's probably efficient to store and efficient to execute. But, well, EVM bytecode is not what is actually executed, per se, in most Ethereum clients. It's JIT-compiled down to some low-level architecture so it can run faster. So if we do that with the supposedly low-level EVM, why can't we just take a slightly higher-level language and compile it as we process the block that comes in? We are not particularly concerned with efficiency, because that's going to be compiled down to something low-level. Tractability is the main concern. That's why we are going to put the intermediate-level language, i.e. Scilla, on the blockchain.
If I may just add to this: if you look at certain auditing tools or services that exist right now, when the smart contract that goes on the blockchain is in bytecode, and you want to audit it, it becomes very difficult. You either need to decompile it or disassemble it or figure out some way to render it in human terms. But if you have an intermediate representation that actually expresses the logic the contract implements, it becomes much easier to understand what the contract is actually doing, rather than putting bytecode on the blockchain and then separately publishing the source code somewhere else. So I think it's better. I mean, of course, there are certain trade-offs that you will have to play around with. But because we are targeting security, we felt that it's a better idea to actually go with a Scilla-like representation directly on the blockchain.
Yeah, I mean, this is always one of the very weird things I found about Ethereum, where on the blockchain, all there is is the EVM bytecode, and when people want to look at the actual Solidity code, basically the default is just to go to Etherscan and look at the Solidity source there. And that's a very centralized thing that maybe we don't want; we want something that's more accessible in a decentralized way.
Yeah, right. So this actually brings an image into my head, and I'm going to lay it out to sketch what I think the vision you're building towards is. So today, if I'm a smart contract user, let's say I'm going to use a multi-signature contract, and I'm going to put money into it. Well, before putting money into the multi-signature contract, a power user like me wants to know if that contract is security audited, right? So I'll go find the audit, probably from Least Authority or Trail of Bits or something like that, and I'll read the audit report, and if the auditors say good things about the contract, then I'll end up committing money to it. But somehow, in this process, I'm trusting the judgment of the auditor: the programmer wrote the contract, the auditor audited it, and I'm trusting the combination of the judgments of these two people in deciding that this contract is safe. What you seem to be building towards is that, say, ten years later, when I see a multi-signature contract, I will go to the contract, and linked to the contract will be a set of mathematical proofs that my browser can verify mechanically. And these mathematical proofs will ensure that large chunks of the functionality of that multi-signature contract are correct from a mathematical proof perspective. I do not need to rely on the judgment of a human security auditor in order to be sure that this multi-sig contract is okay. And Scilla is sort of your way of building a language that allows this future to come about.
That is correct. So instead of trusting the human auditor, you will be trusting the cold-hearted, soulless machine that takes the specification of the contract and the proof and checks them. And I hope that didn't come out badly. It's not like we are trying to take away people's jobs. Rather, we will create a new generation of job opportunities, because someone will actually have to write those proofs. As I said, full automation is hardly possible, and that requires trained, qualified mathematicians and formal methods experts to conduct some of those proofs. But yeah, that essentially is the vision.
So the devil is in the details. When we say "correct", we usually don't specify correct in which sense. So the most critical part of this is actually to come up with the right specification. And this is something that we don't have the ultimate answer for, because it's very much domain-specific. You came up with a specification for this will-transfer contract, which was quite good. Was it the complete specification for this contract? Well, I'm not sure. I think we will still need human auditors who will come up with the right specifications, so the provers can prove them, and then all of it will be safely put onto the blockchain and the machines can check that the proofs are good. But I don't think we are at the stage where the specifications can be derived automatically. I think this is what we need humans for.
Yeah, if I just may add one point to this:
if we open this up at the community level, you could say, look, this is how the ERC20 standard came out, right? People came up with an idea as a draft, then people started vetting it, and then realized, okay, these are the functionalities or the specification desired from an ERC20 kind of contract. And then you had all these functions and interfaces that came out. So you could have something along those lines here as well. You could say: now that the ERC20 kind of contract has been established, the standard has been established, what are the properties that you're actually seeking? So you could say: when I make a transfer, I want to ensure that the right amount gets transferred, or, since the ERC20 contract maintains a mapping, that that mapping doesn't get manipulated or changed over time. So you could have another line of this vetting process where the community can jump in and say, here are the properties that we expect this contract to have, and then you could have other people who come in and say, look, here is a proof that I have come up with, can you vet this proof? So it goes to another level, where you are not just building standards for contracts, but you are building standards for proofs and properties.
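The kind of community-vetted transfer properties mentioned here could be phrased as executable checks. A hedged Python sketch (the function and the exact invariants are made up for illustration; this is not a proposed standard):

```python
def check_transfer_invariants(balances, sender, recipient, amount):
    """Illustrative property check for an ERC20-style transfer:
    the total supply in the balances mapping is conserved, and exactly
    `amount` moves from sender to recipient."""
    before = dict(balances)            # snapshot of the mapping
    total_before = sum(before.values())

    # the transfer itself
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

    # properties a community might standardize and eventually prove
    assert sum(balances.values()) == total_before
    assert balances[sender] == before[sender] - amount
    assert balances[recipient] == before.get(recipient, 0) + amount
```

Runtime assertions like these only check single executions; the point of the proof-oriented approach discussed in the episode is to establish them for all executions.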
So, like you said, at the moment the process of generating these proofs is a very human-intensive process, and you almost need trained mathematicians to do it. So do you think this would cause a lot of people to not even go through this process? For example, in Ethereum there are some decent tools for very preliminary checking, such as Oyente, by your colleague Loi, actually, right? But there are so many contracts on Ethereum that are vulnerable to things that Oyente would have easily picked up. Oyente is literally a one-click thing, but still many people aren't even taking the time to do that. So how do we encourage developers to go through this rigorous formal verification process?
This is actually a well-known trade-off, and not just in contract engineering. It's a well-known trade-off in software engineering as well, where the community has spent the last five decades building tools for testing, analysis, and verification. So by no means is verification the ultimate solution to all possible issues. And one issue is how difficult the approach is to adopt. So even though we designed this core language to be verification-friendly, we will for sure provide a suite for basic analysis and basic testing of contracts. This is sort of the lightweight way to validate your codebase, the same way developers validate the code they write in other domains. So, you can verify the compiler, but then we will spend ten years doing that. Or you can just implement the compiler, test it, and have 99% assurance that it's mostly bug-free. And if those are the chances you're willing to take, well, we'll give you the tools for that.
So one relatively easy-to-implement analysis that is on our roadmap is to make sure that the contract doesn't leak money. With colleagues from NUS, we have recently submitted, okay, so we made it public, it's not published at a conference yet, a paper that has an analysis for EVM that detects, with a high degree of assurance, that a certain contract might actually lock money the same way the Parity wallet did, or that it might give money to someone who is not supposed to receive it. And we are planning to build the same analyzers for Scilla as well.
And as a second part, we will probably make this analyzer certifying. So for a property as generic and as simple as "the contract doesn't leak money", we will not need trained proof engineers to prove it. There will be a tool that checks it and provides this certificate. But that's only because the property is very basic and very simple. Something like the property of this will-transfer contract is very domain-specific, and that will probably require a certain amount of human effort to deliver. But with higher assurance comes higher cost. So simple properties you can prove almost for free; complicated ones you will need to invest in.
The other point that I would like to add to this is: if you look at the applications or contracts that actually exist right now on popular blockchain platforms like Ethereum, they can be categorized into very popular ones and not-so-popular ones. And the very popular ones are essentially, let's see, ERC20 kinds of contracts, or exchanges, or auctions. And for those, you could have templates, the way you have templates for ERC20. You could say: here is how an ERC20 contract should look, and here are the possible properties that you may want to prove about this contract. And then you could have the associated proofs directly, either coming from, let's say, the open community that we have, or from anyone who has experience in developing Coq proofs. The hard way, which no one wants, is to force developers, so that you cannot submit a contract without actually having a proof. That would be really cruel, I would say, for anyone. Because maybe you are very familiar with JavaScript or Solidity or Scilla, but you may not be very familiar with Coq proofs. So even if you wanted to do this, you would really push away those developers who actually have very cool ideas but just aren't familiar with Coq proofs, so they can't build on your platform. So yeah, we will have helper libraries for standard contracts, and we'll have some basic properties that you can always prove for any contract, for instance. And then if you're a Coq specialist, feel free to write your own contract and your own properties and your own proofs.
Okay, let's first talk about what Coq is, because we have mentioned this Coq tool multiple times in the podcast. Ilya, give us an idea of what it can do.
Okay, so the question is about a very short overview of what the tool can do and what it was designed for. First of all, it's very old and very established; it started in the late 80s. And it has been a tremendous success in the past, let's say, 12 or 13 years. Initially, it was almost a verbatim implementation of one of those very powerful logics for conducting mathematical proofs; the logic is called the calculus of inductive constructions, and then it was extended a number of times. But because of this correspondence between proofs and programs, it turns out that Coq also comes with a very decent programming language. It's not Turing-complete; you cannot write non-terminating loops in it. But you can write surprisingly many interesting and useful programs, pretty much anything you can come up with, well, anything that provably terminates. And then it turned out to be the case that, in the very same environment, you can write programs, you can write mathematical statements about them, and you can write proofs. So all these three components that I mentioned before can be implemented in the same framework. And that was a very powerful insight.
And since then, people wrote compilers in Coq, which were also verified in Coq; that's the famous CompCert project by Inria and Xavier Leroy, who spearheaded it. And they wrote OS kernels, to the extent that people even wrote distributed systems in Coq. And it's interesting because, obviously, Coq doesn't come equipped with all the distributed socket machinery and all the backend. So there is a certain level of trust involved: what you write in Coq is the logic of the nodes, and then you trust that the network implementation is correct and corresponds to your mathematical model of it. But that is all just to witness that Coq is very, very powerful. And if you're a programmer and you look at it, the code looks very similar to code in languages such as OCaml. It's a functional programming language; you can recognize similar libraries, similar primitives. So if you actually want to write something in Coq that compiles and runs, you don't even need to know about its proof component. It's fairly straightforward. And, okay, I have quite a number of years of experience with Coq. I used it for proving distributed systems and concurrent systems correct. So it seemed like a natural choice to adopt. Essentially, we designed Scilla to incorporate a significant subset of Coq and add slightly more on top of it to account for the blockchain specifics and communication between contracts. But if you look at most Scilla code, it will look very similar to a subset of Coq code. And that's why it's so easy to translate Scilla to Coq and prove facts about it.
So we've been mentioning OCaml and Haskell and these more established functional languages that are also, well, not designed for it perhaps, but easier to translate to Coq, right? So what made you decide to create your own language rather than try to adapt existing languages? For example, Ethereum, when it was created, decided to create its own smart contract languages like Solidity, and now there seems to be a general sentiment that maybe it shouldn't have created its own languages, because now all the tooling has to be built around them. That's part of the hype around WebAssembly: oh, this is awesome, we can use the more established tooling around Rust and Go and C++. So what were some of the trade-offs you had to make between using more established languages versus building your own?
All right, so the process of designing a programming language is a series of very, very painful trade-offs. And the trade-off that we had to consider is expressivity versus tractability. So Haskell and OCaml, okay, let me focus on functional languages, because those are closer to mathematics, and that's the reason there is a lot of hype in the smart contract community about how smart contracts should be written in this paradigm. They are fantastic languages. They're very expressive, and you can express a lot of computation in very few lines of code. But that comes with the cost that the semantics of Haskell and the semantics of OCaml, which is what programmers need to know in order to write programs in them, are very big. And that makes it very costly to embed these languages in a formal reasoning framework such as Coq. So essentially, the rationale was to scale down to the bare set of features that are required to write reasonable smart contracts, keep it readable, but not add stuff on top, such as syntactic sugar, type classes, operator overloading, the kind polymorphism that Haskell has, and all these crazy goodies that dedicated Haskell hackers use. So that's the rationale: we wanted to have a minimalistic calculus. And this is essentially how basic research in programming languages works: if you want a formal system about which you can prove things, you try to remove all the goodies that actually make programmers' lives easier. And this is why we position this as an intermediate-level language. So we don't exclude that something like a very large subset of OCaml or Haskell can be translated down to Scilla when we come to that point. But we won't be proving things about those top-level programs; we will be proving things about the programs as represented in Scilla. So again, the trade-off is expressivity versus tractability, for the sake of formal reasoning.
So in a sense: the wider the range of basic constructs programmers can use in writing a program, the more expressive the language is, and the harder it would be to prove properties about the programs that result from it. And Scilla is meant to have very little expressivity but be amenable to proofs, and then in the future, presumably, somebody could write in OCaml, have it compiled down to Scilla, and then prove properties about the code in Scilla.
That is correct.
Okay, so we have around 15 minutes left, and one of the things we'd like to focus on and understand is: how does Scilla compare to other efforts that are going on in the formal verification and smart contract space? Of course, this idea that smart contracts ought to have proofs, and the need to design languages so that theorem proving is easy, has been taken up by other projects, and one of the most popular, at least in terms of funding levels, is the Tezos project, which has developed this language called Michelson, which seems to have very similar aims to Scilla. Could you give us an idea of how the Tezos approach and the Scilla approach might be similar, and how they might be different?
All right, so the question is about the Michelson language in Tezos versus Scilla. Yes, so the design goals are very similar, in the sense that both the Tezos community and us are very much interested in formal verification of rigorously stated properties of contracts. Furthermore, we share a lot of background, because Tezos comes out of OCamlPro, and the people there, specifically Benjamin Canou, and Arthur Breitman, who was one of the founders, come with a solid OCaml background and formal methods background, so it's quite unsurprising that they are after the same thing. And they are also functional programmers, so Michelson is very much a functional programming language. So the devil is in the details. For some reason, Michelson was designed as a stack-based language, something that mimics EVM or similar formalisms closely, but it is not yet clear what the trade-offs are with regard to formal verification. One of the claims is that the interpreter for this stack-based formalism is actually quite simple, and it can be fully embedded in Coq. Well, I wouldn't say that the interpreter for Scilla is much more difficult. But reasoning about the stack is not something very natural. And we haven't seen that many proofs about Tezos contracts, so it's hard to have an apples-to-apples comparison and see what the Coq embedding of, let's say, a crowdfunding, Kickstarter-style contract in Tezos would look like, and how the proofs compare in terms of length. This ultimately comes down to benchmarking proof sizes and seeing whose proofs are larger. They don't have them yet, but they might have them soon; that would be good.
So in fact, okay, the idea of having this low-level stack-based language, Michelson, led to a second development, and now they have Liquidity, which is a more high-level language, actually quite similar to Scilla, in the sense that it also looks like a subset of OCaml and has a very similar functional programming flavor. The thing which is missing from there, or maybe it was a design choice, is the explicit communication between contracts, in the sense that each contract is this sealed, autonomous actor that only receives messages from other contracts and sends messages to other contracts, and this is the only way for contracts to interact on the blockchain. This is something that Scilla, as a language, enforces syntactically. If you write a contract, it's going to be shaped as this actor that receives messages and sends messages. It's not enforced in Liquidity, and neither is it in Michelson. And that's why I can only speculate that stating properties like what we discussed before, about this will passing to successors in the future, is going to be a bit more complicated, should they have a formal model of Liquidity somewhere in Coq. So yeah, basically the main difference is that Liquidity and Michelson don't have this actor, message-passing model enforced at the language level. And the second is that Michelson is stack-based, which makes it slightly more low-level than Scilla.
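The actor shape Ilya describes, where a contract's only interface is receiving and emitting messages, can be mimicked in Python (a loose sketch under assumed names; real Scilla enforces this syntactically with transitions):

```python
class ActorContract:
    """Toy contract-as-actor: state is private, and the only external
    effect of handling a message is a list of outgoing messages."""

    def __init__(self, state):
        self.state = state
        self.transitions = {}  # message tag -> handler

    def transition(self, tag):
        # decorator for registering a named transition
        def register(handler):
            self.transitions[tag] = handler
            return handler
        return register

    def receive(self, msg):
        # dispatch on the message tag; the handler returns outgoing messages
        handler = self.transitions[msg["tag"]]
        return handler(self.state, msg)


wallet = ActorContract({"balance": 100})

@wallet.transition("withdraw")
def withdraw(state, msg):
    amount = min(msg["amount"], state["balance"])
    state["balance"] -= amount
    # no direct call into another contract: just an outgoing message
    return [{"to": msg["sender"], "tag": "payment", "amount": amount}]
```

Because every interaction is a message, there is no mid-transition re-entry into the contract, which is one reason this shape helps with reentrancy bugs.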
If I may just add one point: since we're talking about other languages, we should talk about this language from Ethereum, which is Bamboo. And again, the idea with Bamboo is this straight trade-off between expressivity and the correctness and safety properties that you can prove about the contract. So I would say that Bamboo is another language which is less expressive to some extent, but more structured, in the sense of what they call polymorphic contracts. The idea is that the contract gives you a rigid structure, and it tells you how the contract will change itself depending on how users are interacting with it. So instead of having, let's say, five functions or five transitions in your contract, you would have one contract, and then another contract, and then another contract, and the contract eventually changes depending on how users interact with it. So it has a very predefined kind of behavior, to some extent. So I feel that what we are doing in Scilla is pretty close to it. Ilya will be able to elaborate more on this, but I feel that Bamboo has some elements that Scilla has as well, to some extent.
This is correct.
Another kind of important difference I noticed was that Michelson seems to focus on being purely functional, while in Scilla there is this ability to have some sort of state that transitions are allowed to modify. What were some of the trade-offs behind that decision?
Okay, so the question is why Scilla is not purely functional, while Michelson and Liquidity seem to be purely functional. This is actually an interesting question, because it very much comes down to the terminology of what you count as a functional and what you count as an imperative language. Both Scilla and Michelson have state. But in Michelson, or Liquidity, which makes it even more explicit, this state is made explicit. So basically, you pass this state around, accumulating certain components into it; you can even ship it to another contract and get it back from another contract, and then you finally give it back to the blockchain backend, which will store it. So this is a level of verbosity that we wanted to exclude. I mean, I said that we opt for minimalistic choices, but having a state that the contract modifies seemed so paramount and so important that we decided, okay, we are going to have state, and we are going to have a very small set of operators that interact with this state. And mutable state is something that the formal methods community knows how to deal with. There are many verifiers, many program logics, that can automatically prove the effects of a program interacting with state. So we didn't really see the benefit of making it absolutely purely functional.
And to this extent, Scilla also has a bunch of other effects that are expressed in the semantics. In addition to changing state, we also have sending messages, and in the future we will have emitting events. Making these things part of the execution was important, and obviously you can model all of that through some state-passing mechanism, but we thought that, for writing certain specifications, it would be better to implement them the way we did. So, bottom line: most of the state manipulation you can do in Scilla, you can encode in Tezos's Michelson as well. It's going to be less straightforward, and at the moment it's not clear whether it's going to be exploitable in a certain way or not. As I said, since Michelson implements interaction between contracts using this state-passing mechanism rather than by sending messages, the implications are not clear.
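The contrast drawn here, Scilla-style mutable contract state versus Michelson/Liquidity-style explicit state passing, can be illustrated with a toy counter in Python (both are models of the idea, not of either real language):

```python
# Scilla-like: the contract owns mutable fields, and a small set of
# operations read or write them as an effect.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self, by):
        self.count += by  # in-place, effectful update


# Michelson/Liquidity-like: purely functional; the state is threaded
# explicitly through every call and handed back to the blockchain
# backend, which stores the new value.
def increment_pure(state, by):
    return state + by  # returns a fresh state instead of mutating
```

Both express the same computation; the design question is only where the state-threading verbosity lives, in the language's semantics or in every contract's code.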
So another question then, maybe somewhat related: we discussed earlier that one of the other main features of the Zilliqa platform is the whole sharding mechanism. So what were some of the design choices that went into the language, as well as the overall VM design? One of the big things that people often need with sharding is the ability to do asynchronous contract calls and such. So how did sharding factor into the thought process behind the design of this system?
I mean, for instance, to start with, we don't have a VM the way Ethereum has the EVM. We currently have an interpreter that takes a Scilla contract and interprets it. Now, the problem is, you have a sharded architecture and you need a way, as you said, to pass messages in an asynchronous manner.
So it's a work in progress where there are three
different angles that we're exploring right now.
One is: can you ensure that all transactions that may somehow be in conflict, because they are reading from or manipulating the same state, get assigned to the same shard?
So this is one idea.
If you can do that, then you can eliminate certain issues of the kind you're hinting at.
The other idea is: can you have some kind of language support, which Ilya would be able to elaborate on? Can you have support at the language level that allows you to merge certain transactions going to different shards?
And there could be other ideas; for instance, can you have some kind of conflict detector or resolver somewhere in the architecture that helps you solve those problems?
So we are currently actively exploring all three directions, and we have plans to write a paper around this, which we will be happy to make public very soon.
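The first idea, routing potentially conflicting transactions to the same shard, can be sketched as follows. This is purely an illustrative sketch, not Zilliqa's actual sharding algorithm: if two transactions touch the same piece of state, they are placed in the same group (here via a small union-find structure), so that each group can be assigned to a single shard.

```python
# Illustrative conflict grouping: transactions that touch the same state
# key end up in the same group, so each group can go to one shard.
# This is a sketch, not Zilliqa's actual transaction-sharding mechanism.

def group_conflicting(txs):
    """txs: list of (tx_id, set_of_state_keys). Returns tx_id -> group rep."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    key_owner = {}  # state key -> first transaction seen touching it
    for tx_id, keys in txs:
        parent[tx_id] = tx_id
        for k in keys:
            if k in key_owner:
                union(tx_id, key_owner[k])
            else:
                key_owner[k] = tx_id
    return {tx_id: find(tx_id) for tx_id, _ in txs}

groups = group_conflicting([
    ("t1", {"balances/A"}),
    ("t2", {"balances/A", "balances/B"}),  # conflicts with t1
    ("t3", {"balances/C"}),                # independent
])
```

Transactions in the same group touch overlapping state and would be scheduled on the same shard; independent groups can be processed in parallel.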
So what we'd like to get a sense of is: what is the current state of development of Scilla, and what can the community expect to see in the form of releases over the next six months?
So what we have right now is an interpreter. It's not complete, of course, it's still a work in progress, but it is at a stage where we can already write some interesting contracts.
So we had an example in the Scilla white paper initially, the Kickstarter contract.
That contract can be interpreted by our interpreter right now.
So we have that support already.
There are some other examples that we are currently working on.
We are working on examples like ERC-20 and ERC-721.
So the interpreter is at a stage where it can handle certain contracts. It cannot yet handle all the contracts you might imagine; for instance, it doesn't have support for certain data structures.
But we do have a working prototype where a user can deploy a contract to the Zilliqa network, and a node in the Zilliqa network can actually run that contract and commit the state to the blockchain.
And, you know, the entire chain is ready.
Our plan is to have the complete implementation of the interpreter ready by the end of June, when we are planning to have a second version of the testnet, where we plan to show people: here's how Scilla looks, and you can play around with contracts.
Let me add to that.
So on the verification side, we also have a prototype implementation of Scilla in Coq, which is done by means of what's called shallow embedding. Essentially, instead of implementing the whole language interpreter in Coq, we only implement those primitives that Coq doesn't have. And since there is a lot of shared syntax, the rest is taken from Coq verbatim.
And in this prototype, we can prove properties like the ones we've discussed; the very same Kickstarter contract is implemented in this formalism, and the properties are established.
What is missing, though, is the connecting link between what runs on the blockchain and what is formalized in Coq. That can be done by syntactic translation; this is something that we will provide very, very soon.
So you could write your contract in Scilla, then it gets extracted to Coq. There you prove the properties, you know that it's right, and then you can go back to your implementation and run it on the Zilliqa blockchain.
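As a rough analogue of the kind of property being discussed for the Kickstarter contract: in the Coq embedding, a safety property such as "a backer of a failed campaign can always reclaim their funds" is proved for all executions. The Python sketch below (all names hypothetical, and merely checking one trace rather than proving anything) shows the shape of such a property on an embedded transition function.

```python
# Hypothetical sketch of a Kickstarter-style refund transition and its
# safety property. In Coq, such a property is proved for ALL executions;
# here we only check it on a single trace.

def claim_back(state, message, block_number):
    """A backer reclaims their donation after an unsuccessful campaign."""
    sender = message["sender"]
    # Refunds are allowed only after the deadline, and only if the
    # funding goal was not reached.
    if block_number <= state["deadline"] or state["raised"] >= state["goal"]:
        return state, []
    amount = state["backers"].get(sender, 0)
    if amount == 0:
        return state, []
    new_backers = {k: v for k, v in state["backers"].items() if k != sender}
    new_state = {**state,
                 "backers": new_backers,
                 "raised": state["raised"] - amount}
    # The refund is an outgoing message carrying the backer's funds.
    return new_state, [{"to": sender, "amount": amount}]

# One trace: the campaign missed its goal, so the backer gets a refund.
state = {"backers": {"0xA": 50}, "raised": 50, "goal": 100, "deadline": 10}
state, msgs = claim_back(state, {"sender": "0xA"}, block_number=11)
```

The Parity-style "stuck funds" failure mentioned earlier in the episode is exactly what such a proved property rules out: no reachable state leaves a failed campaign's funds unretrievable.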
Cool.
We look forward to the releases, especially the next version of the testnet. If people are able to deploy basic Scilla contracts, that would be pretty cool; we look forward to that.
So Amrit and Ilya, it was a pleasure to have you on the show, and we hope we'll catch up again on Scilla and Zilliqa six months down the line when there's more to report.
What might be especially interesting then is to discuss the interaction of programming languages and sharding from that upcoming paper.
Zilliqa remains one of the projects I closely follow.
So I'm really looking forward to what comes out of this project when it goes into the main net.
So for our listeners, thank you so much for joining us.
As you know, we release new episodes of Epicenter Bitcoin every Tuesday.
You can subscribe on iTunes, SoundCloud, or your favorite podcast app for iOS and Android to keep up to date with Epicenter.
You can watch the video version of this show on our YouTube channel at youtube.com/epicenterbitcoin.
Also, we've recently created a Gitter community where you can chat with us and other Epicenter listeners at epicenter.tor.
I find that the conversations there are very intellectual because of the nature of the community we have built.
So check our Gitter channel out.
And of course, your reviews and your feedback on Twitter and iTunes is very valuable to us
and it keeps us going.
So support this show by leaving us a review on iTunes. We look forward to being back next week. Thank you for listening.
