Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Mike Hearn & Richard Gendal Brown: Corda – A Distributed Ledger for Financial Services
Episode Date: October 31, 2016

Attracting over 70 of the world's biggest financial institutions to its consortium in just over a year, R3 has accomplished a formidable task. Aiming to rethink the fabric of the financial system, they first conducted experiments testing blockchain platforms for their members, and last year began developing their own distributed ledger platform: Corda. The effort to build Corda was led by R3 CTO Richard Brown and the former Bitcoin developer and R3 lead architect Mike Hearn. In a wide-ranging discussion, we covered the vision of the project and why it represents a radical departure from existing blockchain platforms.

Topics covered in this episode:
- The origin story of R3
- Why existing blockchain designs didn't meet their needs
- How the R3 team led a design effort to develop a new platform from scratch
- The business problem Corda is aiming to solve
- The components of Corda's architecture
- How Corda handles privacy
- Notaries and preventing double spends
- Open sourcing Corda and the plan to join Hyperledger

Episode links:
- R3 Corda: What Makes it Different
- Introducing R3 Corda: A Distributed Ledger for Financial Services
- Corda: An Introduction [PDF]
- R3 Website
- E151 - Ian Grigg: Ricardian Contracts and Digital Assets Prehistory
- Smart Contract Templates: Foundations, Design Landscape & Research Directions

This episode is hosted by Brian Fabian Crain and Meher Roy. Show notes and listening options: epicenter.tv/155
Transcript
Hey everyone. We're looking to hire a part-time communications manager to join the Epicenter team.
You can get more information about that position at epicenter.tv slash apply.
So if you're interested in learning more about that, if you think you have what it takes,
go to that website and you'll find the job description and the instructions on how to apply
for our communications manager position. Thanks.
This is Epicenter episode 155 with guests Richard Brown and Mike Hearn.
This episode of Epicenter is brought to you by Jaxx.
Jaxx is a user-friendly wallet that works across all your devices and handles both Bitcoin
and Ether.
Go to jaxx.io and embrace the future of cryptocurrency wallets.
And by the Ledger Nano S, the hardware wallet which sets the new standard in security and
usability.
Get it today at ledgerwallet.com and use the offer code Epicenter to get 10% off your order.
Hello and welcome to Epicenter, the show which talks about the technologies, projects,
and startups driving decentralization and the global blockchain revolution.
My name is Brian Fabian Crain.
And I'm Meher Roy.
Today's episode will focus on Corda, which is a platform built by R3.
We have as our guests, Richard Brown, who's the CTO at R3 and Mike Hearn, who's the lead
platform engineer.
Very happy to welcome both of you gentlemen on this show.
So let's start with Richard.
Perhaps a bit about your background would be nice, Richard.
How did you get involved in the blockchain space?
Hi, everybody. Yes, so I joined R3 in September last year, so I guess 13 months ago now,
and I joined after my entire career, having been at IBM in various roles in IBM's UK offices.
So I joined IBM straight out of university as a software engineer, and I worked in various engineering and some pre-sales roles,
almost exclusively in finance, both in London, Europe and across the world.
And I think I got into this space.
I need to go back and check the exact time, the exact date.
But it was actually through The Economist.
There was a one to one column inch article,
just almost a throwaway little article in The Economist
about this strange thing called Bitcoin.
It must have been in 2012, maybe 2013.
And that's what piqued my interest.
And I guess I won't rehearse the story that's followed from then until now.
But I guess I got completely sucked into the rabbit hole.
I spent a lot of time getting my head around it, being fascinated by it.
And then to be honest, putting it on the shelf for quite some time until the real hype
picked up a year or so later. And it just struck me that almost everybody who was purporting
to be an expert about it or comment about it, they just completely missed the point. They either
were making overinflated claims for how it was going to completely transform finance,
without understanding finance, or they were making ridiculous claims for how Bitcoin was the most
evil thing in the world without really understanding Bitcoin. So that's
That's what motivated me to start a blog where I spent a lot of time explaining Bitcoin
and blockchain concepts to bankers and a lot of time explaining financial concepts to the Bitcoin
and blockchain community.
And from there, I got more and more involved, did more and more work on it at IBM.
And eventually, when it became clear to me that the way forward was deploying this technology
at scale through collaborative efforts and through consortia when it comes to finance,
when I realized the R3 initiative was trying to do just that.
it was the obvious thing for me to do, for me to join in.
Yeah, and for those who don't know, Richard's blog is at gendal.me,
and it's an excellent blog.
I think you were certainly one of the best writers on this technology,
also together with another guy who's at R3 now, Tim Swanson.
And we were in contact with you a long time ago, actually,
and kept asking you when you were still at IBM, like,
you need to come on the podcast, we need to do an episode,
and you kept saying, like, I'd love to,
but they don't let me; ask me again in a month.
So we did that for, like, I don't know, a year.
Maybe not a year, but a long time.
And then you joined R3 and, of course, we asked again, like, are you ready now?
And: I'm still not allowed to.
So we're glad that finally you're allowed to.
No, I'm glad to be here.
And I used to get, I remember the emails I used to get from you.
And I used to dread them because I could never say yes.
But the main reason I didn't say yes was, when you come on things like that,
you want to have something to talk about.
You can commentate or you can give sort of broad views,
but I think these podcasts are really valuable and fascinating when we're talking about a real thing with the people who are building it.
So I'm delighted to have waited until now.
Yeah.
And of course, Mike, who's been on this podcast twice before, I think.
And both times they got a lot of attention.
I remember the first time you were on, you made a statement.
that, you know, Bitcoin development has ground to a halt, which then got a lot of
attention. And then, I think actually the last time you were on, that was shortly before,
that was sort of the Bitcoin XT controversy when you guys were trying to, you know, get bigger
blocks and get to switch away from Bitcoin core. And then shortly afterwards, you left.
So how is life post-Bitcoin?
Good. It's more productive.
Yeah. You don't miss the arguments.
No.
With anyone, right? That's the question. I don't think so.
But yeah, that was about a year ago, wasn't it? I think the last time I was on the show.
Yeah, around October, November time.
And I guess now from sort of your outsider's perspective, I mean, at the time, a lot of people disagreed with you.
Today, I think a lot of people have sort of, well, opinions, of course, still
diverge, but I would think that your position back then has become more of like, well, yeah,
you were kind of right, in that a lot of changes didn't happen. Are you still following what's
going on? And how do you look at it now with some distance? Yeah, I still keep half an eye on what's
going on in the Bitcoin community. I don't, you know, I don't post or comment anymore,
anything like that. But I think, yeah, the way events have worked out, pretty much everything that I
predicted in the articles I was writing last year has come true.
You know, there's been bad media coverage. There was a show on NPR radio in the States
about Bitcoin, and it started with the two journalists saying,
I tried to send money to the other guy and it didn't work.
And that was one of the things I predicted would happen.
So, you know, the bad media coverage of the problems.
So, yeah, I keep half an eye on it.
Nothing really seems to have changed.
So I haven't been paying too much attention.
But it's a shame.
Things don't seem to have changed much in one year.
Yeah.
And I think the worst thing is, even though I'm sure people still work on a lot of interesting technology,
even the guys at Blockstream, right, doing a lot of interesting things with Segregated Witness and stuff.
But there's just the community and the way of making decisions has become so divided that the project certainly hasn't developed very well.
But enough of that.
So that's not why we're here.
So R3, a lot of people, of course, know about R3.
At the same time, it's this kind of mysterious and strange entity, right?
It's sort of a startup, but it like is set up in a completely unorthodox way.
So can you share, can you guys share a bit about what's the origin story of R3 and what does that organization look like?
Yeah, sure.
Why don't I take that?
So, yeah, it's strange how we're seen as this mysterious thing.
We certainly don't feel mysterious to ourselves, and we do try to be open,
but maybe we can go some way towards addressing that here.
So R3 was founded some time ago.
We came to public prominence in September last year, September 2015,
when we announced the founding of a consortium of large financial institutions.
I think there were nine at the time.
It grew to 42 by the end of, I think,
last November, and we now have over 70 members.
But R3's history actually goes back, I think, a year or so before that.
And I guess there's quite a lot of interesting history,
but perhaps one or two of the things that are most pertinent,
are that it was founded by our CEO, David Rutter,
who'd spent a career over three decades on Wall Street.
And although this wasn't his prime focus over those years,
his career was punctuated by success in building a series of very large-scale,
very successful industry consortia.
He seems to be the person in the world who's able to help bring together otherwise
competing institutions, bring them together when something collaborative needs to be built.
For those who know the financial markets, he was chief executive of EBS and of BrokerTec,
both large collaborative efforts.
And all the way through 2015, and as far back as 2014,
after leaving his previous role,
a senior role in Wall Street,
he took a trip out to the West Coast
to understand more about what was going on.
And he got very close to what was happening in Bitcoin.
He was amongst the first talking publicly
about how the underlying technology,
what we now call blockchain,
and distributed ledger technology,
had applicability beyond the Bitcoin use case.
And he realized that if, at least his belief was,
if things were going to happen in mainstream finance,
in financial markets and beyond,
then this is inherently shared technology,
it's inherently distributed technology.
It requires organizations to come together,
to understand it, to evaluate it,
to figure out what needs to be done,
figure out what needs to be built,
and to do it collaboratively through a consortium,
So that was the insight driving the formation of the R3 consortium.
To fast forward to September, that got off the ground, I joined, we rapidly got to be 42 banks,
now north of 70.
And we began our work in earnest 13 months ago.
So the consortium is one thing, and the consortium is sort of like a place for experimentation,
where you get like those banks together and they kind of run these little projects.
But then what you guys have been working on is almost like a separate project within R3, right?
Kind of.
So the thing we began with,
and we were open about this at the time,
was that there were always three primary strands to the work we were doing
with our members and with the community more broadly.
There was architecture,
and doing that through what we call the architecture working group,
and that's what I chair,
and I'll come on to that in a moment.
There was always, and continues to be, a very laser-like focus on what we called use cases,
or I guess what most people would think of as product: so, you know, for what uses can this technology
be valuable for members and for society. And then as you mentioned, there is also our global
collaborative lab, what's now our lab and research center, which is a powerhouse for running
projects. And those projects might be evaluations of specific technology. The
insight there is: if multiple institutions are interested in evaluating the same technology,
rather than each one running it to slightly different standards, slightly different evaluation criteria,
let's come together and run it once, and share and publish the results, working in concert with the
provider. And I think we were public much earlier this year with some of the experiments
we've done. But it's also where members and others can come together to work jointly
on projects looking at specific product
or commercial ideas.
So that's a real powerhouse for collaboration
amongst the member financial institutions
and beyond.
But to your question about technology:
one of the three strands we began with,
and which remains, was our architecture working group.
And my mission,
and I actually wrote this in the blog post
that I put out earlier this month,
the mission given to me was set
as the very first decision of our
steering committee, that's the thing that the senior executives from all our members sit on.
The very first decision they made was to formalize the creation of the architecture working group
and give us the mission to establish the architecture for an open, enterprise-grade shared
ledger for the processing of financial events and automation of business logic.
And so that was our mission to answer that problem, answer that statement.
And I guess we'll go through that in more detail.
But the vision right from the start was to do solid requirements-based engineering to understand what this technology's relevance might be.
Not all the things this technology can do are relevant to finance.
Understand what its relevance might be.
Understand what needs to be done to make it deployable at scale, securely, reliably and usefully in finance, and then go do it.
So in pursuit of this aim, like at the start of the year, there were many news stories that the
consortium had taken some technology, let's say the Eris technology stack, and they had created,
I guess, like small experiments where commercial paper trading was being done using the Eris technology
stack. But all the partners of the R3 consortium were kind of involved in that experiment.
And the initial impression that outsiders like me got was that R3 is going to just not build
any technology of its own, but it's going to act as a filter for all
of these projects: check all of them out, see what ideas are good, and then deploy them at scale
in the consortium. That's what my initial thought process was. But then, I think in the middle of the
year, you came out strongly with Corda and indicated that there would be an independent platform.
And now you've announced that Corda is going to be open sourced on November 30th.
So the question is: why did you go down this direction of building something
bespoke or new rather than just leveraging technologies that other people are building?
Sure.
Okay, so I'll start with that, and that might also be a good segue into some observations from Mike as well.
So the first thing I should say is, throughout this process, and even with the open sourcing
of Corda, we don't claim that Corda is the answer to all the world's problems,
just as I've been very, very vocal in public and on my blog
and so forth, saying that, you know, Bitcoin doesn't solve everything.
Bitcoin is good for the problem it solved, similarly with Eris and Ethereum and all these
other platforms. And we don't claim that Corda solves all these problems.
And what we do claim is it solves a specific set of problems that are pertinent to the financial
industry in particular, although not exclusive to the financial industry.
And we think it fills a gap that other platforms don't fill.
But how did we get to the point where we even decided we needed to build something like Corda?
And for that, it goes back to the mission
statement of the architecture working group.
And excuse me, the work we kicked off at the end of last year once we were up and running.
The two questions that I drove through the architecture working group arose because I was petrified,
if you like, almost paranoid, that simply trying to apply blockchain to business, or applying
distributed ledgers to finance, without a good, solid set of requirements and a good description
of what problem we were trying to solve,
was not good engineering.
It was not an appropriate way forward.
We needed something more solid than that.
So as I say, we kicked off two parallel pieces of work.
One was to look at the existing platforms
and answer the question,
because it may seem obvious to those of us
who are steeped in this space,
but to answer the question,
is there anything genuinely new
in the blockchain
and the broader distributed ledger space?
It's not often that breakthroughs in computer science
come through. So let's be clear: is there anything genuinely new here beyond, say, the advent of
the cryptocurrency revolution with Bitcoin and the like? So number one, what if anything is new?
And then question number two, to the extent there is something new, what, if anything, is the
applicability to finance? You know, it does not follow, or rather it is not obvious, that the technology
that was originally designed, some would argue, to disintermediate financial institutions
is going to be one of the most important technologies
for them. You have to make a compelling argument for why that's the case. So without going into
all the detail, we can elaborate later. Taking those two questions in turn, what, if anything, is new about
this space? Well, the conclusion we reached, and again, it's obvious to those of us who are steeped
in this space, is that, yes, there is something new. This is all about building systems that are
deployed across and between large numbers of entities who don't necessarily know each other,
who don't fully trust each other, and yet which can bring all those entities, can bring all those
parties into consensus about the existence, the nature, the evolution of some set of shared facts,
some set of shared facts that exist between them. That seems quite an abstract definition,
and maybe it's not perfectly right, but for our purposes it's close enough. You know,
it captures Bitcoin. Bitcoin brings untrusting people into consensus
about how many bitcoins there are right now, which addresses own them, who's allowed to spend them.
With Ethereum, we bring people who don't fully trust each other into consensus about the state of
the global Ethereum virtual machine. And we didn't really have that before.
We had distributed databases, but they're typically run by one organization.
We have systems that are deployed by a centralized institution that everybody else agrees to trust.
But data that is managed somewhat cooperatively across a large number
of mutually distrusting organizations, and that works reliably? Yeah, that's kind of new.
That's really quite interesting.
So if you then turn attention to finance, you say, well, where in finance do we have examples
of entities who may want to interact with each other, might want to transact with each other,
but don't fully trust each other.
They wouldn't trust the other side to manage all their accounts for them.
But where there's a need to ensure that all relevant people are in consensus about the existence,
the nature and the evolution of some shared facts.
Well, you could argue, perhaps with only a little exaggeration,
that that pretty much defines the financial system,
certainly the back offices of the financial system,
where you've got people who are trading with each other
each of whom needs to keep track of their own records,
they need to keep track of all the details,
they need to build and maintain and manage systems
that track those agreements and those trades throughout their life cycle,
and they have to agree at every step.
If they calculate different values for who owes what to whom,
then we've got a break, and we need to go through an expensive process to fix it,
and there's a huge amount of reconciliation of paperwork and communication we need to go through
to bring all these disparate systems that should all be doing the same thing
to make sure they actually are all doing the same thing.
So that's kind of like the journey you go on.
What, if anything, is new in this space, bringing untrusting parties into consensus?
What might the application be in finance, anywhere where the same information is recorded in multiple places,
which is pretty much everywhere.
But then, after a long meandering talk, for which I apologize, answering your question,
why did we get onto the path of thinking we needed to build Corda?
Well, you then look at the technology that's available.
And of course, it's also obvious to state that the technology that inspired this movement,
the technologies that took people like us on this thought process,
they weren't designed to solve problems like reconciliation in back offices in investment banks.
And that's no criticism of those technologies.
You know, Bitcoin is a very elegant, successful solution to the problem it solved, as is Ethereum.
But neither of them were designed to solve the problems I just outlined.
So, and this is perhaps the last point on this.
So we reached a point where there was a fork in the road:
we could either spend a lot of time trying to amend, edit, and influence the existing platforms to move in a direction where they could adequately solve those problems, or we should build something ourselves.
And the conclusion we reached was it simply wouldn't be credible to go out into the market.
And remember, this is the back end of 2015, to go out into the market and simply tell everybody, you know, simply assert that the technology isn't quite right or doesn't do all
we want. It's very easy to criticize, it's very easy to poke holes; it's
altogether harder to show a workable alternative. So we thought it was
incumbent on ourselves to show that, yes, these ideas we have about other ways of
building these systems based on blockchain principles, to show that those
approaches actually could work, that they actually could address some of
the problems around scalability and privacy and expressiveness and so forth that
we saw in other platforms. That was a key reason for bringing Mike
in, and he'll talk more in a moment.
But that was the genesis of Corda.
And as we did more prototyping, as it evolved,
as we followed, obviously, the inexorable logic
of the requirements and the analysis,
Corda developed, it matured.
We did not allow ourselves to be constrained by what other people had done in the past.
And, you know, in short order, we ended up with a design that is heavily influenced by,
but fundamentally different to, the other platforms out there,
and hence why we think it's worth,
why we thought it was worth continuing to invest in
and why we now want to share it with the world.
Let's take a short break to talk about Jaxx.
Jaxx is a multi-coin wallet created by the people at Decentral.
Now, in the past, if you had a whole bunch of cryptocurrencies,
it was a pain to handle them.
You either had to leave them on an exchange, which was insecure,
or you had to have all these different wallets, which was a hassle.
Fortunately, now with Jaxx, those medieval days of darkness, misery, and suffering are over.
Jaxx supports multiple cryptocurrencies, and new ones are being added.
But it's not just storing cryptocurrencies you can do with Jaxx; you can also exchange them directly
from within the wallet thanks to their ShapeShift integration.
And since there's only one seed, Jaxx makes it super easy to back up and sync to your other devices.
Jaxx works with Windows, macOS, Linux, Android, iOS, and has browser extensions for Firefox and Chrome.
So go to jaxx.io, that's J-A-X-X dot I-O, to download the wallet and get started today.
We'd like to thank Jaxx for their support of Epicenter.
So moving on a little bit to the more technical side, of course:
what are the kind of high-level decisions that you had to make in order to come up with
an architecture, and what went into those decisions?
There's going to be a technical white paper, which I'm writing at the moment, actually,
which will go into all of this in a lot of detail.
It's not quite a specification in the same way that the Ethereum yellow paper is,
but it covers all of the design points in a lot of detail.
So there are a bunch of things.
One is we felt pretty strongly up front that, you know,
there are two models of computation vying in this space.
There's the Bitcoin model of transactions with these inputs and outputs,
and you have the unspent transaction outputs.
and that's the database, the UTXO model,
as they call it.
And then you've got the Ethereum model,
which is sort of a distributed computer.
And we chose up front to go with the Bitcoin model
for a bunch of reasons.
Richard can talk a little bit about why,
but it had a whole bunch of advantages.
But at the same time,
a lot of financial developers actually liked
the Ethereum developer experience,
because it's a much more developer-friendly platform.
You don't have to write code in assembly,
for example; you know, it's a bit easier to think about.
And so a lot of the design decisions that we've been making are about how do we take this model that Bitcoin uses,
which has all kinds of advantages around privacy and scalability and all kinds of other nice features that we want,
and yet make it easy to use for developers who don't really want to think about the details.
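The Bitcoin-style model Mike describes can be sketched in a few lines. This is an illustrative toy, not Corda's or Bitcoin's actual data structures: a ledger is just a set of unspent outputs, and each transaction explicitly names the outputs it consumes and the new ones it creates.

```python
# Toy sketch of the UTXO (unspent transaction output) model.
# All names here are invented for illustration; this is not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Output:
    owner: str   # who may spend this output
    amount: int


@dataclass(frozen=True)
class Transaction:
    inputs: tuple   # references (tx_id, index) to the outputs being consumed
    outputs: tuple  # new Output objects this transaction creates


class Ledger:
    def __init__(self):
        # (tx_id, index) -> Output: the current "unspent" set, i.e. the database
        self.utxo = {}

    def add_genesis(self, tx_id, output):
        self.utxo[(tx_id, 0)] = output

    def apply(self, tx_id, tx):
        # Every input must reference a currently unspent output. This is
        # what makes transactions explicit and double spends detectable.
        consumed = []
        for ref in tx.inputs:
            if ref not in self.utxo:
                raise KeyError(f"input {ref} is unknown or already spent")
            consumed.append(self.utxo[ref])
        if sum(o.amount for o in consumed) != sum(o.amount for o in tx.outputs):
            raise ValueError("input and output amounts must balance")
        for ref in tx.inputs:
            del self.utxo[ref]          # consume the inputs
        for i, out in enumerate(tx.outputs):
            self.utxo[(tx_id, i)] = out  # create the new outputs


ledger = Ledger()
ledger.add_genesis("genesis", Output("alice", 10))
ledger.apply("tx1", Transaction(inputs=(("genesis", 0),),
                                outputs=(Output("bob", 4), Output("alice", 6))))
```

Because every transaction is explicit about exactly which entries it touches, two transactions over disjoint outputs can be checked independently, which is the scalability property discussed here.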
And the other design decision that was key,
and these two had been made when I joined R3 already, was
limited data distribution.
So data goes only where it needs to go.
There's no global broadcast in Corda anywhere.
And these two design decisions, as we explored them,
they led onto a whole bunch of other design decisions,
and they led on to even more design decisions.
This is probably the most design-heavy project I've ever worked on, actually.
Satoshi once said about building Bitcoin that actually,
he spent over two years on it, and almost all of the work was design.
It wasn't actually coding.
It was mostly designing it.
And this is very similar.
And I guess the advantage you guys had here was that Satoshi did this in isolation or maybe with some collaborators,
whereas you had the big advantage of having a significant organization with resources and with people to speak with,
as opposed to building something that's totally novel. And you can't really test so well whether the design works or not.
Yeah. Well, you know, I was obviously sort of the first full-time developer hired.
So one of the tasks that we've done this year, in the past
12, 13 months, is build a developer team from scratch by hiring, right?
Primarily, well, actually, almost entirely in London, except for me.
And yeah, often, you know, a lot of the process of design in Corda has been,
we will come up with the design we think works based on our own knowledge of finance
and we have in-house experts and we have team members,
the team is a mix of people with computer science and cryptocurrency type backgrounds
and also people with backgrounds in banking and finance.
They sit next to each other.
It's a unified team and, you know, we would run ideas past each other,
and then also go to the members who all contribute developers and architects as well.
And we would write design documents and say, hey, guys, is this sensible?
Does this meet your needs?
Is this stupid?
What do you think?
And that feedback process, and they've been contributing in other ways too,
but the feedback in the discussion process with the members has been really invaluable for,
yeah, just doing some reality checks and making sure that we're not going off course.
That said, you know, Bitcoin's design has a lot of really, really well-thought-out
stuff in it. In many ways, it's still, you know, it stood the test of time well,
and in many ways it's still a pretty elegant design, especially the original one. If you go
back to 2009, when it was first released, Bitcoin's design has actually got worse over time
in my view. People have tweaked it, but the original design was actually pretty good,
and we've incorporated a lot of ideas from that, except for the blockchain. You can argue that
that's the most important idea, but actually, I don't see it that way. We don't see it that
way. There are a lot of ideas in Bitcoin that are not mining and blockchain related, and we've
incorporated a lot of them into Corda.
So Richard was speaking of how we came across this idea that the fundamental
innovation in, let's say, the blockchain space, for lack of a better word, is the ability to have lots of diverse
participants agree on a shared set of facts, right? And we're trying to apply this to the context of financial
services, investment banks or banks, right? And Corda is a platform built to enable that.
So with that in view, what are the big design decisions that you took? And what are the,
let's say the computer science components that you're using, what kind of contracts, what kind
of virtual machine technology, what kind of consensus, etc.?
Should I just give a little bit on that agreement concept, just to get that clear, and then let Mike answer the computer science question?
Because, Meher, you correctly identified something, and I guess in my intro I hand-waved over it.
I said the shared facts in finance are agreements.
And I use the word agreements because contracts make people think of smart contracts, and people have already got mental
images and mental models for what they are, which may not necessarily map to what we're doing.
But when I say agreements, I mean contracts. And the reason I say that is, you know, if you look at
anything, any relationship between a customer and a bank or a bank and another bank, at its heart,
it is a contract. We don't always think about it this way, but the reality is it is. So in the,
in the most obvious cases, you know, a very common financial instrument is the interest rate swap,
a very standard derivative contract. And it's a contract.
between two identifiable parties. We know who they are, and we have to know who they are, because under different circumstances
one will owe money to the other, or the other will owe money to the first, and they need to be able to enforce those claims.
But it's a contract: we know who the parties are to it, we know its life cycle,
we know the events that can happen on it, and we know the rules that apply under different circumstances.
So that's quite clear, but the observation, maybe the insight, we made was that
other things that don't look like contracts are also contracts. So when you put money into a bank, you
deposit money with a bank. You're not really depositing money with a bank. You're lending it to the bank.
You now have a claim on that bank. There's no
sort of vault in the basement with your name on it. So actually the act of depositing
money with an account with a bank is actually entering into or amending a contract you have with
the bank that says the bank now owes Richard this amount of money and under the following circumstances
Richard can ask for it back. So actually this idea of, of modeling, modeling, um, modeling, um,
financial relationships as contract, but it actually turns out to be quite general and generalizable.
And it then allows you to ask a question, which is, well, who are the parties to this contract?
Who needs to know about it? Who needs to observe it? Who needs to verify it? And under what circumstances
might other people downstream need to verify it as well? Very quickly, you can then begin to
ask questions, just as I've said, about what is the data that needs to be captured? What's the
overarching legal agreement that governs it? What are the rules that govern its evolution? Who needs to
who needs to sign any transitions, who needs to be told about them.
And perhaps importantly, how do we ensure that two proposed transitions to something
representing a contract are not in conflict?
That leads quite naturally, I would argue, to our selection of the UTXO model,
because in the UTXO model, just as in Bitcoin, the transactions in that system
are very, very explicit.
They say: these are the current pieces of information in the system,
the current contracts, if you like, in our model. I assert they are current, I assert
they are pertinent to me, here is how I'd like to change them, replace
them with these new contracts, and here is the proof I'm entitled to. And they're self-contained
units that can be verified independently and in parallel with all the others,
because each one is completely explicit about which part of the data set it's updating,
in a way that's much, much harder to do in a general-purpose virtual machine.
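As a rough illustration of the model Richard describes here, a transaction can name the exact states it consumes, so two transactions touching disjoint states can be checked independently and in parallel. This is a sketch with invented names, not Corda's actual API (Corda itself runs on the JVM):

```python
# Sketch of UTXO-style explicit inputs: each transaction names the exact
# output states it consumes, so disjoint transactions verify in parallel.
from dataclasses import dataclass

@dataclass(frozen=True)
class StateRef:
    tx_hash: str   # hash of the transaction that created the state
    index: int     # which output of that transaction

@dataclass(frozen=True)
class Transaction:
    tx_hash: str
    inputs: tuple  # StateRefs this transaction consumes
    outputs: tuple # new states it creates

def independent(tx_a: Transaction, tx_b: Transaction) -> bool:
    """Two transactions can be verified in parallel if their inputs are disjoint."""
    return not (set(tx_a.inputs) & set(tx_b.inputs))

genesis = Transaction("tx0", (), ("state-A", "state-B"))
spend_a = Transaction("tx1", (StateRef("tx0", 0),), ("state-A2",))
spend_b = Transaction("tx2", (StateRef("tx0", 1),), ("state-B2",))
conflict = Transaction("tx3", (StateRef("tx0", 0),), ("state-A3",))
```

Because `spend_a` and `spend_b` name different inputs, a verifier knows immediately they cannot conflict, which is the explicitness being contrasted with a general-purpose virtual machine.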
So that's how we thought about it from a business context. But of course,
everything I'm saying here, you can argue, is actually just a set of requirements. How it's actually
implemented is of course a different story, and I guess I'll let Mike pick it up. So I'm going
to assume the listener is familiar with how Bitcoin works. I don't know if that's a valid
assumption, but it'll save a bit of time. So in Bitcoin, every row in the database is a quantity of bitcoin
and a little program that determines who can access it, right? The script and the value.
In Corda, we want to store more stuff than just a quantity of cash, right,
a quantity of a single currency.
So this is, I think Richard has talked a bit about states already.
A state, in the current design, is actually an arbitrary collection of objects.
It's an object graph.
We may restrict that a little bit in future,
but one of the things we've been doing this year is a lot of experimentation
and prototyping and proofs of concept with Corda to see how much power developers really need
and where we can restrict them a little bit to get other features and where we can't.
So currently, a state is a full-blown serialized object graph.
And if you want to represent something like cash in a Corda transaction, well,
Corda is like Bitcoin in that all entries in the database come from transactions,
and you can refer to them with a hash of a transaction and an output index.
And to do something like representing cash,
the state would not only include the number of pennies,
the sort of equivalent of satoshis for fiat money,
but also the issuer, right, because dollars issued by Barclays are not the same thing as dollars issued by a central bank.
I should say pounds issued by Barclays are not the same thing as pounds issued by a central bank.
You're exposed to default risk and so on.
They're not quite the same thing.
And then there's a few other, you know, things that you have to track as part of, even something quite basic like cash.
So Corda can represent currencies, but it can also represent other things,
like we've mentioned, the contents of deals, an interest rate swap, all that stuff,
still going into the outputs where a Bitcoin transaction would have only a value.
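The point about issuers can be made concrete with a tiny sketch. The class and field names here are illustrative, not Corda's real cash state (which lives on the JVM); the idea is simply that the same face amount with a different issuer is a different asset:

```python
# Illustrative cash state: the amount alone does not identify the asset;
# the issuer is part of the state, so Barclays GBP != Bank of England GBP.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cash:
    pennies: int    # smallest unit, the fiat analogue of satoshis
    currency: str
    issuer: str     # who owes you this money, hence whose default risk you bear

barclays_gbp = Cash(100_00, "GBP", "Barclays")
boe_gbp = Cash(100_00, "GBP", "Bank of England")
```

Equality compares all fields, so the two states above are distinct even though the quantities match, which is exactly the default-risk distinction Mike draws.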
We mentioned a little bit at the start, right?
Ethereum developed its own virtual machine for this, and so did Bitcoin,
for the task of expressing the logic behind new transaction types.
We chose to modify the JVM to do that because, again, this is coming back to,
you know, we're designing this for banking and business.
Virtual machines are well-understood technology.
This is not a new area of computer science by any means.
There are high-quality, robust industrial-strength virtual machines out there.
The most successful is the JVM.
In banking, Java and JVM bytecode is everywhere.
A lot of banks are now using things like Scala as well.
You can compile Haskell to the JVM.
A lot of languages which crop up repeatedly in finance,
you can run on this platform.
You've got tools, you've got IDEs,
so it doesn't make sense to us to reinvent all this stuff.
So part of the work we've been doing with Corda is allowing you to define new transaction types by defining new smart contracts, and you don't have to use Java.
And in fact, Corda itself is written in a language called Kotlin, which is very new.
But it's compatible with Java.
So we've done a bunch of things there.
Another thing which is very different, and will strike people immediately when they compare Corda with Bitcoin, is that in Bitcoin and Ethereum, when you create a new transaction, you sort of pick some random peers on the
network and you send it to them, and they pick some more, and they propagate it around, right?
And the network sort of gossips new transactions until everybody has seen them.
And nodes on the network don't have any real identity and they don't have any real obligations.
Effectively, these systems attempt to build a reliable component out of unreliable parts.
And such a thing is always statistical, right?
You can't always reason about how reliable the Bitcoin network is because the people making it up
can disappear at will.
They can arbitrarily deviate from the protocol, and who even knows who's doing it.
This is how Bitcoin ended up with miners that won't make big blocks.
In Corda, it's structured more like the email network.
So every node is identified.
Every connection between them is secured with TLS and certificates that we issue, right?
Well, whoever runs a Corda network issues them; it doesn't have to be us, right?
You can run your own Corda networks, of course, if you want, with different rules.
And then nodes only communicate with each other when they actually have a specific
reason to, and there's no global broadcast.
So if I want to send cash to you, then I connect to your node and I give you a cash transaction.
And then you say, oh, I don't know the dependencies of this.
So I can't verify it.
Give me the dependencies and I give you all the dependencies.
And then now you have all of the dependencies of that transaction.
You have the entire graph.
And then next time you send it onwards, you send, you know, you tack your own transaction
that moves the cash or spends the cash on the end.
And then you send the whole graph onwards to the next guy in the chain.
So the data propagates around the network lazily.
You see only data that you need to in order to verify the parts of the ledger that are interesting to you.
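The lazy propagation Mike describes can be sketched as a walk backwards through a transaction's inputs, fetching anything you haven't seen. The function and data shapes below are made up for illustration; the real wire protocol is more involved:

```python
# Sketch of lazy dependency resolution: a node that receives a transaction
# requests only the transitive dependencies it is missing, never the whole ledger.

def resolve(tx_hash, fetch, known):
    """Collect tx_hash plus every transitive dependency not already in `known`.
    `fetch` stands in for asking the sending peer for a missing transaction."""
    if tx_hash in known:
        return known
    tx = fetch(tx_hash)          # ask the counterparty for the data we lack
    for ref in tx["inputs"]:     # each input names the tx that created it
        resolve(ref, fetch, known)
    known[tx_hash] = tx
    return known

# A toy history held by the sending peer:
ledger = {
    "tx0": {"inputs": []},
    "tx1": {"inputs": ["tx0"]},
    "tx2": {"inputs": ["tx1"]},
    "other": {"inputs": []},     # unrelated history the receiver never sees
}
seen = resolve("tx2", ledger.__getitem__, {})
```

Note that `other` is never transferred: the receiver ends up with exactly the sub-graph needed to verify `tx2`, which is why two networks with disjoint histories can later be merged.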
And this has a whole bunch of interesting consequences.
Like, in theory, you can actually run two separate Corda networks that are totally independent and then merge them together later, right?
Both sides need to agree to trust each other's notaries.
And we can talk about notaries a bit if you like.
But basically, once you've established connectivity between them and if they trust each other's identities and notaries, then you can actually do that merge.
And it all just works, because there was never
any assumption that all data was visible to begin with. You can't do that with blockchain-based
systems. And again, this is one of those requirements that sort of comes out of talking to people
who work at banks and they say, well, you know, gee, we would like to start with an internal
deployment maybe and then, you know, join the rest of the world a bit later after we're
comfortable with the technology. There's always going to be political concerns. Maybe some
countries want to have their own separate network for political reasons if they're in a fight
with the West right now. And maybe later it's resolved and you want to bring everyone back
together. There's all kinds of reasons why it's useful to be able to do that.
Today's magic word is accord. That's A-C-C-O-R-D. Head over to letstalkbitcoin.com,
sign in, enter the magic word, and claim your part of the listener reward.
I would prefer that we try to break down this excellent explanation, like through an example, right?
So one of the examples that Richard came up with is the example of the interest rate swap.
Yeah.
So let's try to just talk about what an interest rate swap is and then, like, fit it into this Corda model.
How would it work there?
And then we'll keep on adding parties.
So you might start with the basic interest rate swap.
So let's say it's Brian and I and we are two different financial institutions.
And what we want to do is an interest rate swap.
So we are going to take, I don't know, a standard interest rate index, maybe from, I'm not even sure, maybe the Fed Funds rate or something.
So there's the standard interest rate index that keeps moving every day.
And Brian and I kind of want to have a contract that let's say the interest rate is 2% today.
And if it goes to 3%, for every percentage-point rise in the interest rate, I don't know, Brian pays me like $10 million,
and for every percent fall in the interest rate, I pay Brian the same $10 million. Let's say
it's something very simple like that. So the interest rate moves, and depending on how much it moves,
either I pay Brian or Brian pays me, right? So that's a contract. Now Brian and I want to do this
contract, and this contract fundamentally requires me to trust Brian to fulfill his obligations
if something is due to me, and Brian to trust me, right?
And maybe we are gonna settle every three months.
So we're gonna do the contract right now
and then three months later,
we're gonna see what the interest rate is
on that current day.
And then whether if it's higher or lower than 2%,
we will calculate who needs to pay whom, and how much,
and then we'll exchange those amounts.
Three months later, we'll do the same thing again, et cetera, right?
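Meher's toy swap reduces to simple arithmetic at each fixing; the constants and names below are from his made-up example, not real market conventions:

```python
# The toy swap described above: each percentage-point move of the reference
# rate away from the 2% strike is worth $10M, with the sign deciding the payer.

STRIKE = 2.0                # percent, the rate agreed at inception
POINT_VALUE = 10_000_000    # dollars owed per percentage point of movement

def settlement(observed_rate: float) -> float:
    """Positive => Brian pays Meher; negative => Meher pays Brian."""
    return (observed_rate - STRIKE) * POINT_VALUE
```

So if the rate fixes at 3%, Brian pays $10M; at 1.5%, Meher pays $5M; at exactly 2%, nothing changes hands. This recurring who-pays-whom calculation is what the shared contract state has to evolve every three months.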
So how would this kind of transaction,
between two financial institutions be modeled as a Corda object or a Corda contract?
Well, you know, once we've released it as open source, I can just point people to the code, because
the code base actually contains two different, what we call CorDapps, that implement
interest rate swaps. We will probably merge them at some point. They do different things
related to interest rate swaps. But basically, yeah, you start by defining a state
to represent that deal, and because a state is just a bag of objects, you know, you can represent this in a bunch of ways.
The Corda spec incorporates parts of the Java spec into itself, because that's a very well-specified
platform, with detailed documents on how it works. So you can do things like record timestamps, you know,
using the standard library types and things like that; you can define fixing schedules and so on. Once I've
put together a transaction with no inputs and one output, so this is like a genesis transaction, but there's
no blockchain, right? It just exists, sort of floating in space. I give it a unique deal ID or
whatever to make sure the hash of this transaction is unique. Then we start what Corda calls a
flow, right? So in a system where there's no global broadcast and all communications
take place between like specific nodes, you need, and also the protocols involved can be very
complicated. You need some way to manage that. You see these little like node to node protocols
crop up in other systems too. Like, Bitcoin has the payment protocol, BIP 70; it has
micropayment channels or just payment channels, where two nodes are sort of interacting outside
of the blockchain or outside of the standard network to come to some sort of deal or some
sort of agreement between themselves. And that kind of thing is the exception to the norm in
Bitcoin and Ethereum, but in Corda, that is the norm. That's how all communication takes
place. So I would create such a transaction. There's a component in a network called the
network map, which is basically a map of identities to nodes and vice versa.
So, you know, someone would type in, oh, I want to do an interest rate swap with,
you know, with Brian, right? And then press enter and off it goes, it finds
Brian's node, starts a flow with them, proposes that transaction. The other side says,
yeah, looks good, sends me back a digital signature. So at this point now, we have a sort of
mutually signed transaction, and we both agree on the hash of this transaction. So
the details are agreed at the byte level.
This is very simple, of course.
Then we want to start evolving that deal, right as it changes.
Every time we want to refix it to a new interest rate, for example.
For example, there may be an Oracle in the network that knows interest rates.
So we can then embed the new interest rate into a transaction.
Corda transactions have inputs and outputs just like Bitcoin.
They also have commands, which is sort of, well, in Bitcoin there are only, like, two kinds
of transaction, right? There's one to move money and there's one to create money. And you know it's a
creation, you know, a genesis transaction, because of its position in the blockchain.
Corda transactions can do many different things. So you sometimes need a way to distinguish
what a transaction is doing, and that's what commands are for. So you create another transaction that has a,
you know, fix-the-interest-rate command in it. Then you would start another
flow. The flow would send it to an oracle. The oracle would sign it because it sees it's valid. You
get that back. You send it to the other side. The other side verifies it, says yes, I'm signing too,
sends it back, and there's a multi-step procedure here where you're moving signatures and
transactions and data around. Then we want to introduce notaries, perhaps. A notary is,
you know, basically the part that stops double spending. Whereas in the systems
that most people are familiar with, the consensus mechanism is bound very tightly to the definition
of the network. Bitcoin has one blockchain, and that sort of defines Bitcoin; Ethereum, same
thing. A Corda network can have multiple different competing notaries, actually, and different
transactions can be de-conflicted, like double-spend de-conflicted, by different services.
So you can then start involving one of those. That's another step. Maybe there's a step
where you report to a regulator. That's another step. These protocols can become very complicated,
and so one of the things Corda provides is what we call the flow framework, which allows
you to write these protocols in very straightforward code. It looks like the
simplest possible code you can write, I would say.
Basically, the flow framework gives you what appears to the programmer to be kind of
unkillable uber-threads, which can survive for days or weeks.
They can survive process restarts.
They can even survive upgrades of your node in some cases.
So you write your interactions like send message here, get a message back, send message
there, get a message back.
And then by the end, you have agreement on what's going on.
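Corda implements these suspendable flows on the JVM with checkpointing; as a loose illustration only (in Python, with invented names), a generator can model straight-line protocol code that suspends at each message exchange and resumes when a reply arrives:

```python
# Sketch of the flow idea: linear-looking code that suspends at each
# send/receive point. A generator stands in for Corda's checkpointed
# "uber-threads"; each yield models suspending, possibly for days,
# until the counterparty (or notary) replies.

def propose_deal_flow(deal):
    their_sig = yield ("send_and_receive", deal)   # suspend until counterparty signs
    notary_sig = yield ("notarise", deal)          # suspend until notary signs
    return {"deal": deal, "signatures": [their_sig, notary_sig]}

def run(flow):
    """Toy scheduler that answers every request immediately. A real node
    would checkpoint the flow to durable storage between steps."""
    reply = None
    try:
        while True:
            request, payload = flow.send(reply)
            reply = f"sig-for-{payload}-via-{request}"
    except StopIteration as done:
        return done.value

result = run(propose_deal_flow("irs-123"))
```

The programmer writes "send, wait, send, wait" as ordinary sequential code, and the framework deals with persistence and restarts underneath, which is the property Mike is describing.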
And that is then committed to what we call your node's vault.
Bitcoin and Ethereum have wallets,
and that sounds a little bit flaky for banks.
They want something a bit more robust-sounding.
So we have the vault.
And then one of the things Corda states can define
is relational mapping.
So once these transactions have been processed,
they get converted into relational database tables
and inserted into a relational database.
And you can then join those tables
with your own internal apps state.
So you can have information,
which is both on the global ledger and in-house,
like your customer database or whatever,
and then you can just join that data together
in the normal relational way using join keys
and select statements and SQL and so on.
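The relational mapping Mike describes can be sketched with ordinary SQL; the table and column names here are invented for illustration, not Corda's actual schema:

```python
# Sketch: ledger states mapped into relational tables, then joined against
# an institution's in-house data with plain SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE vault_cash_states (owner TEXT, pennies INTEGER, currency TEXT);
    CREATE TABLE internal_customers (name TEXT, risk_rating TEXT);
    INSERT INTO vault_cash_states VALUES ('Brian', 500000, 'USD'), ('Meher', 250000, 'USD');
    INSERT INTO internal_customers VALUES ('Brian', 'A'), ('Meher', 'B');
""")
# Join global-ledger data with internal-only data in one ordinary query:
rows = db.execute("""
    SELECT c.name, c.risk_rating, SUM(v.pennies)
    FROM vault_cash_states v JOIN internal_customers c ON v.owner = c.name
    GROUP BY c.name, c.risk_rating ORDER BY c.name
""").fetchall()
```

The point is that once states land in normal tables, existing reporting tools and internal databases can consume them with joins and selects, with no ledger-specific query language required.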
Okay, so that seems really cool.
So the way I tend to think of this is
that you're essentially taking all of these contracts,
like a contract between me and Brian,
which might be an interest rate swap or some other kind of contract,
representing this as a digital object.
And then,
every time we create this digital object and then we do transactions to modify the current state
of this digital object. So this kind of keeps moving. Every time we do a transaction, we ensure that
if me and Brian are the parties to it, then there's my signature on it and Brian's signature on it
and maybe a notary signature on it to make sure that some rules of the transactions are also
being followed. So all of these signatures keep on accumulating, and that tells
me what the current state of my contract with Brian is. And now I could have contracts with
many parties like with Brian, with Richard, with yourself, and I could collect all of these objects,
collect all of these contracts together in one relational database. And that kind of shows me
what my firm as a whole is doing in the market, what contracts I am participating in the market.
Yeah, that's pretty much it. I would clarify one thing, which is you don't always need a signature
from every involved party, right?
The set of signatures that you need for any given type of transaction is defined by the
smart contract code itself.
You don't want to be in a situation where the counterparty has defaulted and, you know,
the courts have moved in and seized all of their computers and the electricity company
cut them off because they didn't pay their bills and now you can't update the database
to reflect their own default because they're not signing.
You can, you know, in some cases you can advance these agreements without all parties being
involved.
it depends on how the smart contract code is written.
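A minimal sketch of "the contract code decides who must sign": different commands on the same agreement can demand different signer sets, so, say, a default can be recorded without the defaulted party. The command names and signer sets are hypothetical:

```python
# Sketch: required signers vary per command, as decided by contract code,
# not by a blanket "everyone signs" rule.

REQUIRED_SIGNERS = {
    "amend_terms": {"partyA", "partyB"},          # both sides must agree
    "exercise_option": {"exerciser", "notary"},   # counterparty not required
}

def verify(command: str, signatures: set) -> bool:
    """A transaction is valid only if it carries at least the signers
    the contract demands for that command."""
    return REQUIRED_SIGNERS[command] <= signatures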
And it's probably worth me adding also:
even in the non-default case,
I guess there are good game-theoretic reasons why we need this as well.
We talked about the requirements
and what problems we were trying to solve.
One of the motivating examples we used in the early design work
was that of a, just a traditional option agreement,
a call option.
You might model that as an agreement.
Maybe I have the right, but not the obligation, to buy a security from you.
These have deadlines; they expire at a certain time.
If it were just before the expiry and I wanted to exercise it,
and I required your signature as well for it to be valid,
well, you might have just the slightest incentive to run a bit slow
and allow the clock to tick past midnight.
So that would be, I guess, a perfect sort of like business as usual example
of where only the exerciser, and then the notary to commit to the timestamp, is required to sign it.
It's also, and I guess we may not get into this detail, why the notary, as in
the cluster that provides confirmation of no conflict, is also the entity that commits to the time.
Those two things can't be separated.
Let's take a break to talk about the Ledger NanoS, the new flagship hardware wallet by Ledger.
I'll pass it over to Ledger's CTO, Nicolas Bacca, who can tell you all about Ledger's
security features and SDK.
So the Ledger Nano S is a personal security device based on a secure element,
a screen and button, so that you can verify everything that is done on device
and make sure that you are really doing what you wanted to do.
Compared to our previous solutions,
this device is based on the latest generation secure element,
the ST31 from STMicroelectronics.
The ST31 is using a secure ARM core,
which means that you can have the same ease of development
that you would have on a generic microcontroller,
but benefit from the security features of a secure element.
Security features include an application firewall at the lowest level that lets you protect applications from each other,
which means that you can load multiple applications on the hardware wallet, even post-issuance,
and you as a developer will be able to leverage these features to load your own application
without our authorization and without any kind of authorization from the vendor.
We will be providing this device with an open SDK that lets you
do anything you want with this device. We provide sample applications for cryptocurrencies,
different cryptocurrencies, so Bitcoin, Ethereum. We will also provide a FIDO authenticator,
and you will be free to add everything you like. For example, you could add some secure messaging,
some encrypted chat, and you'll see that the solution is quite powerful and very easy to develop
with. The NanoS sets the new standard in hardware wallet security and usability. You can
get yours today at ledgerwallet.com. And when you do, be sure to use the offer code
epicenter to get 10% off your first order. We'd like to thank Ledger for their support of Epicenter.
So just if you briefly kind of bridge the gap here to the blockchain idea, right? Because in a way,
what you do in blockchain is that you say you have these transactions and they all, you know,
you put them together and you timestamp them so you know the exact order. And then there's this
global agreement on the order. Whereas here, I think you're still having the same idea that you
have kind of transactions or events that are between parties, and they're sort of, you know, hung
cryptographically on top of each other, or sort of chained to each other, so that one can follow the
whole history, verify everything. But then you say there's no need to have this global agreement.
There may be need to have some agreements at some point. And it's quite obvious also if you look at this,
who's supposed to agree on something, how much security is needed, you know, does it need a regulator,
or is it just between the two parties, is there a need for third parties to be able to verify that?
Like all of those things will depend a lot on the circumstances.
So it's almost like it's not part of the fundamental thing; it's something you layer on top through the notary system.
You have enormous flexibility there in saying, well, it's just going to be however
the business requirements want it to be.
Yeah.
Yeah, the key thing to realize is if you don't have a blockchain,
you don't have a total ordering of all transactions relative to each other,
but you don't actually need that, right?
You only need an ordering when there is a double spend to resolve.
And Corda transactions don't define a precise time at which they occur.
They define time windows, which may be open-ended.
So you can express things like Bitcoin's nLockTime with a Corda
transaction, but you can also say this transaction must occur within this window of time.
It can't be a specific time because, you know, there's no global clock, right?
You can define the sort of time window and then, you know, make it wide enough to take into
account the speed of light and so on, but there's never any point at which you can say
precisely which transaction came first, unless there's a double spend; otherwise you don't need to.
So why bother providing that expensive guarantee?
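The time-window idea Mike describes can be sketched directly; the class and field names below are illustrative, not Corda's API:

```python
# Sketch: a transaction claims a validity window, possibly open-ended,
# rather than a precise timestamp; the notary checks its observed time
# falls inside the window.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TimeWindow:
    not_before: Optional[int] = None   # e.g. seconds since epoch; None = open
    not_after: Optional[int] = None

    def contains(self, t: int) -> bool:
        return ((self.not_before is None or t >= self.not_before) and
                (self.not_after is None or t <= self.not_after))

lock_time = TimeWindow(not_before=1_000)   # roughly like Bitcoin's nLockTime
deadline  = TimeWindow(not_after=2_000)    # e.g. an option expiry
bounded   = TimeWindow(1_000, 2_000)       # must occur within this window
```

An open-ended `not_before` reproduces lock-time behavior, while `not_after` captures deadlines like option expiry, with no global clock needed, only the notary's observed time at commit.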
And in this example of the time window, so is that where
Richard's point comes in, that because I can, of course, make up whatever time locally, and if it's not put in a blockchain, nobody can verify it? So that's where you need kind of a notary or a third party to provide the time. Is that correct? Yeah. So often you need some notion of time, for reporting reasons and to, you know, interact with the logic, like the option example. You know, we specify that the machines in a notary cluster are supposed to
be synchronized to US Naval Observatory time,
which you can get from a GPS feed.
There's a mapping of that.
That time scale, well, the raw time scale doesn't include leap seconds,
and there's some stuff in the specification
And there's some stuff in the specification
and the design documents about handling that and so on.
But ultimately, you know,
you don't know what the exact time the notaries will observe is.
So you can only ever specify it in a fuzzy timestamp in a way.
It's maybe worth just saying
one more thing on the notaries,
because sometimes when I explain this to people in other contexts,
there are often a lot of questions,
and I imagine your viewers and listeners may have a few as well.
So anticipating some of them.
I'll just give a little bit more colour on that.
So we use the word notary, albeit we may have to
change that if it turns out it's incompatible
in some European regions.
But we use it because we want to evoke the idea that
it's the thing in the architecture that's providing that stamp.
For safety's sake, we think notaries should validate transactions,
but they don't strictly need to.
The function they're performing is to say,
yes, I saw this transaction, and it does not conflict with,
i.e., it does not spend any of the same inputs
as, anything I've previously signed.
So it's providing the same guarantee you get from a blockchain,
which says that if a transaction makes it successfully
into the blockchain, it has out-competed any others that try to spend the same inputs, and you
know which one got confirmed. But that's a logical concept. When described
concretely, there are many different ways of implementing a notary. You could imagine a centralized
service; clearly that may work in some situations and be suboptimal in others. You can imagine a
high-performance cluster that isn't Byzantine fault tolerant, and you can imagine a Byzantine fault
tolerant cluster that implements a notary. But then, as Mike says, we can have many notaries,
many notary clusters implementing different algorithms or different qualities of service,
in different geographies with different characteristics, all on the same network. And the thing that
ties it all together is that when a transaction is created and signed, the outputs each commit to the
notary that is authoritative for whether that output has been spent. So when you look at any given
transaction, you know whether the output of that transaction has been spent because it commits to
the notary at the time it's created, and we can have many different notaries. Why might we want this?
There could be geographic performance reasons. If all your trading for a particular set of
trades is with people in the same region, then having a geographically close notary
for performance might be something you need. There may be regulatory reasons, where a regulator
insists that any transactions involving money issued by their banks are notarized on their shores.
And there are also arguments for how this kind of technology gets adopted incrementally,
where before we get to a fully decentralized model,
there are steps along the way that are part of the way there and get us there incrementally.
So this idea of multiple notaries, multiple consensus services on one network
is surprisingly powerful and useful in multiple domains.
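The core guarantee Richard states, "I have not previously signed anything that spends these inputs," can be sketched as a minimal uniqueness service: a map from input reference to the transaction that consumed it. Real notaries may be Raft or BFT clusters; this toy ignores all of that:

```python
# Sketch of a notary's uniqueness check: reject any transaction that
# reuses an input some earlier notarised transaction already consumed.

class Notary:
    def __init__(self):
        self.consumed = {}   # input ref -> hash of the tx that spent it

    def notarise(self, tx_hash: str, inputs: list) -> bool:
        if any(i in self.consumed for i in inputs):
            return False     # double spend: at least one input already used
        for i in inputs:
            self.consumed[i] = tx_hash
        return True

n = Notary()
```

Note the check is purely about input conflicts; whether the notary also validates the transaction's contents is the separate design choice Richard mentions ("for safety's sake, we think they should, but they don't strictly need to").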
Yeah. To make that concrete, we have a prototype of a Raft-based notary cluster.
At the moment, that might be appropriate, you know, for City of London interbank trading, where they're not going to maliciously double-spend each other, right?
They just need a mechanism to make sure there are no accidents.
And then we're looking at using the BFT-SMaRt algorithm and implementation for a global notary that may include, you know, parties that sort of don't trust each other so much.
So one question I had is, one of the hard parts about putting blockchain-like systems or shared ledger systems
inside a financial institution context is they almost always have to interact with some other legacy system, right?
So I might be a financial institution, Brian might be a financial institution, and we might have an interest rate swap.
But now the payment actually needs to go through, let's say, the real-time gross settlement
system of, say, the United States, right? And now the issue here is that the money flows on
that other system, but that other system is not cryptographic. It doesn't have a
cryptographic proof that some event happened in that other system. So we might have
this contract object, and our contract object's state needs to be updated to
reflect the fact that I have made a payment to Brian, but I may not
necessarily have a cryptographic proof that this payment was made. So
how does Corda handle something like that?
Yeah, I'm actually glad you asked about this
because this is one of those really boring topics
that no one ever thinks about when designing the systems,
but you can't deploy if you don't think about it.
So obviously the ideal scenario is everything is on the ledger,
including cash, and then for an interest rate swap transaction,
you can actually update the deal
and move the cash atomically in the same transaction.
Everything is nice, and it's all on one system and so on.
If it isn't, then you need to basically, you know,
bridge both sides. And it's not just, like,
interacting with real-time gross settlement
systems, right? Banks, well, or any
financial institution have all kinds of internal
reporting systems and, you know, they
want to print out reports at the end of the day, maybe,
all kinds of stuff that's specific to that
organization that needs to interact
with the global ledger.
So this flow framework, like I mentioned earlier, this thing
that gives you these kind of uber-threads.
The Corda platform, so a CorDapp
is a thing shared between
institutions. It defines
the smart contracts, where determinism
is very, very important and that code is precisely shared.
It also defines these flows.
But because these flows are just protocols
that are implemented by both sides,
they don't have to be exactly the same on either side, right?
They're not a part of the consensus mechanism.
And so the idea is that these flows can be subclassed, right?
They're just ordinary sort of Java classes.
You can customize them in various ways.
You can say, okay, at this point,
in the process of updating an interest rate swap
or doing whatever it is you're doing,
I'm going to run some custom
code and I'm going to call out to some internal system, right?
Corda nodes, at heart,
are built on a sort of industrial-strength message queuing system.
We're using something called Artemis, but we'll probably make it pluggable later.
Message queues are very common inside banks.
You know, you have messages going here and there, and they can be saved to disk and
written to databases, and they can time out and they can be monitored and all these kinds
of things.
So what you would do is, you know, you would customize your flow and say, okay, well, at this
point, where I want to make a payment, I'm going to override that method.
Instead of trying to find cash in my vault, right, in my wallet, to make this payment
I go through the message queue to some other system that will then go off and talk via SWIFT to,
you know, the RTGS or do whatever it needs to do.
On the other side, you know, the other side is also subclassed at the point in the flow
where it's waiting for payment. Then again, it's interacting through these sort of message queues
or by making HTTP requests to their internal systems.
So the nodes know how to interact with existing
legacy systems. They don't need cryptographic proof. They just wait at that point in the protocol
before they sign the next transaction on the global ledger, they just wait until the internal
system flags it up and says, oh hey, you know, it's good. We're good to proceed.
Flows, the current code doesn't support this, but it's a part of the design we're heading towards
implementing. The idea is a flow can also interact with a human being this way. So you don't only
interact with nodes on the peer-to-peer network and internal systems. You can also send messages to people,
and that's useful for saying things like,
this looks weird, should I sign it?
Or even: you need to sign this transaction with keys
you have in a little Trezor-type device.
You know, the node itself doesn't have the keys,
and yet it needs them to do that.
So this is why we have these sort of long-lived threads, right?
They can survive for days or weeks,
because maybe the person's on vacation or sick that day.
So you need the ability to pause the execution at that point
until they come back, do their thing,
and then resume with that signature you've obtained from the person.
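The subclassing idea Mike describes can be sketched in plain Java using the template-method pattern. This is an illustrative sketch only, not the real Corda flow API; all class and method names here are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of flow subclassing: the shared protocol steps live
// in a base class, and a bank overrides just one step to call out to an
// internal system. These names are invented, not Corda's actual API.
abstract class SettlementFlow {
    protected final List<String> log = new ArrayList<>();

    // The protocol skeleton both counterparties agree on.
    public final List<String> run() {
        agreeTerms();
        makePayment();      // the customization point
        signTransaction();
        return log;
    }

    protected void agreeTerms()      { log.add("terms agreed"); }
    protected void makePayment()     { log.add("paid from vault"); }
    protected void signTransaction() { log.add("transaction signed"); }
}

// One bank's customization: instead of paying from the node's vault,
// queue a message for an internal system that settles via SWIFT/RTGS.
class SwiftSettlementFlow extends SettlementFlow {
    @Override
    protected void makePayment() {
        log.add("payment request queued for internal SWIFT gateway");
    }
}
```

Because only the message exchange between nodes is part of the shared protocol, each side can override steps like this independently without affecting consensus.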
Cool.
Yeah, I think that integration is going to be very important.
Now, one of the things that's important to cover on the technical side that we haven't yet talked
about.
So you mentioned that Corda uses the Java virtual machine.
Now, the most well-known virtual machine, of course, in the blockchain space is the Ethereum
virtual machine.
And a big part of the thinking behind it is that they said, okay, it needs to be deterministic.
So on every node, the same input
needs to produce exactly the same output.
And that's why they developed a new programming language
and did everything from scratch and differently.
Now, Java isn't deterministic, right?
And the Java virtual machine isn't deterministic.
So how do you handle that?
How is that, does that mean only a subset of Java
can be used or how do you address this problem?
Yeah, it's only a subset of Java that can be used.
There's a whole list of things
that you need to do to convert the JVM
into a completely deterministic thing. But if you just compare, you know, the EVM and JVM, or
Solidity and Java, they are very, very similar, right? They have a lot more in common than they
don't have. And so, yeah, requiring people to learn these new languages is a cost, and
the EVM is, I would say, kind of quirky in many ways. The basic data width in the EVM is
256 bits, which leads to some very puzzling and strange sort of data usage scenarios and
things like that. Like, you can define a byte array in two ways, and one way is an actual byte
array and another one isn't, you know. Going with something like the JVM avoids a lot of those
issues. There are some examples of things you have to fix. So you obviously have to restrict access
to things like file I/O, network I/O; random number generators are lurking in a few places
you have to be careful of. Like, every Java object has a thing called a hash code, which is very
useful because it means you can put any object into hash maps, hash sets. This is a functionality
that programmers just love and use all the time. But if you don't specify how a hash code is
calculated for a class, then it defaults to being, effectively, a random number. So that has to be
patched. You need the ability to terminate execution, right? So you need a kind of similar
concept to gas. We're doing this with a bytecode rewriting phase.
And there's a bunch of other things. But yeah, the basic strategy is you define a subset of
the platform, you forbid features, right? You take away features that are incompatible with
determinism. And what you've got left is something that's still pretty complete and still pretty
useful. It has a lot of functionality that Ethereum developers want but don't have, but you've
done it for a fraction of the work, right? It's much easier to carefully review what's there and
then adapt it than to build everything from scratch. And it's worth adding one other thing to that as well,
which is this intersects quite nicely with the decision to use the UTXO model
rather than the virtual machine model,
because that restricted subset of Java, or of JVM bytecode, that Mike describes,
that is what we need for the consensus layer,
so the bit that has to run the same on every node.
But that's pretty much only transaction verification.
So if I send you a transaction that purports to do something,
we need to know that when I verify it and when you verify it
and anybody else who needs to verify it, verifies it,
we all agree it was valid or we all agree it was invalid.
All the extra code that you have to write
when thinking about smart contracts, how you generate transactions,
how you choose what the transaction will do,
how you figure out which order to include things in,
that's something that's just run once in Corda, and it's run by the generating node.
And that you can do however you like; you're not subject to those restrictions.
Whereas in the full virtual machine model, typically you write a lot more code in your smart contract,
and that's all stuff that runs in the consensus layer and that everyone has to agree on.
So the separation means that even with that quite rich subset we allow, the restriction is only there
for the specific cases where you need to verify a transaction, not anywhere else.
There's a whole bunch of advantages to doing it this way.
You know, one obvious one is that
not every case where you're working with transactions
has to be sandboxed and deterministic.
If you want to make a nice GUI that draws transactions on the screen,
for example,
if you've whitelisted the transaction types that you're using,
you don't have to worry about determinism or security,
right? You just want to access the data that's inside,
and run them and see if they're valid.
And, you know, maybe if it's not completely deterministic,
it doesn't matter.
So it's super convenient to just be able to drop this code in as if it was a regular library and start working with it.
Yeah, for generating transactions as well.
You know that they're valid because you're constructing them.
So you don't really want any pain from crossing language barriers and things.
You just want to start instantiating objects and working with them.
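The separation Richard describes can be sketched as a single pure verify function over a UTXO-style transaction: only this function must be deterministic and run identically on every node, while the code that builds the transaction is unconstrained. These are hypothetical names for illustration, not the real Corda contract API.

```java
import java.util.List;

// UTXO-style sketch: a transaction consumes input amounts and creates
// output amounts, and verification is a pure function over that data.
final class Transaction {
    final List<Long> inputs;
    final List<Long> outputs;

    Transaction(List<Long> inputs, List<Long> outputs) {
        this.inputs = inputs;
        this.outputs = outputs;
    }
}

final class CashContract {
    // The only code that must run identically on every node: given the
    // same transaction, every verifier reaches the same verdict.
    static boolean verify(Transaction tx) {
        long in  = tx.inputs.stream().mapToLong(Long::longValue).sum();
        long out = tx.outputs.stream().mapToLong(Long::longValue).sum();
        return in == out; // cash is conserved across the transaction
    }
}
```

Everything else, such as choosing which states to spend, ordering the components, or drawing a GUI, runs once on the generating node and needs no sandbox.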
Okay.
Now, the design of Corda looks really interesting.
It's probably the most fascinating technical design that I've seen around this space.
But for the final section, let's jump into sort of the business proposition of a platform like Corda.
So the way I see it is, what Corda could enable in the future,
once it's running at a large scale, is for a large financial institution,
an investment bank, to have one unified view of all of the contracts
that they are participating in, in one system, right? And when a lot of these institutions have these
shared global views of all the contracts that they are participating in, it solves the problem of
them needing to spend time on reconciliation, or making sure there are no errors, etc. So how does a
system like this impact the economics of financial services firms? Does it benefit small
firms or large firms? What is the economic impact of removing the reconciliation process?
Okay, so I'll maybe answer that through a few examples, but start with just restating
the essence of the problem space. So you use the word reconciliation, and so let's just be clear
about what we mean by that. So you gave an interest rate swap example earlier. So it is not
atypical, it is common in the case where we've negotiated a deal like that between ourselves
bilaterally, and we're managing it through its life cycle bilaterally; we're not clearing
it through a central counterparty clearinghouse. Then each of us will have built at least one system each,
often many, where that deal is recorded and where aspects of its life cycle are managed.
So to say we each have two would be an understatement. So that's four systems between us,
all of which are recording the same data, all of which have to agree on a large subset of their functionality.
But they're written by different people with slightly different functional requirements sets,
slightly different assumptions, slightly different reference data, slightly different bugs.
And so they mostly agree, but sometimes they don't.
So we have to put in place both a prevention and a response strategy.
So there are lots of reconciliation processes that go on that affirmatively check throughout the life cycle of that trade,
that both sides have indeed reached the same conclusions as to what the current status of the deal is,
who owes what to whom, and what needs to happen next.
So that's expensive.
It imposes what amounts to an IT tax on organizations.
They have to spend this money in order to be able to participate in that market.
And that cost obviously reduces the number of firms who can participate,
but it is also experienced as higher costs by customers.
If we can move to a model where that deal is recorded accurately once,
and each of our systems is participating in this network
so that when I look at my copy of the deal,
I know that what I see is what you see.
We both know that we see the same thing
because we're running on this consensus layer.
Then we get massive benefits from sharing: the consensus code
only needs to be written once.
We probably will still need to run some reconciliations
until we get confident,
but we won't need to run as many,
and they'll identify fewer breaks as we go.
So both the day-to-day costs go down, but also the fixed ongoing costs of running this infrastructure should go down as well, which I would argue, although I've not done the analysis on this, would allow for more players and hence more competition and more creativity and innovation.
So that's just one example.
The other example, and this is a project we did with some members late in the summer,
early in the fall, was to look at a very specific regulation.
This project's not public,
so I'll just talk about this one in generalities,
but a specific regulation
that says, in effect,
that if two banks,
if two institutions cannot come to agreement
on who owes what to whom
across a portfolio of trades
accurately and within a certain time period,
if they cannot do that,
then they have to hold more capital,
which therefore means it's more expensive
for them to be in business,
their profitability will be suppressed.
So quite apart from the IT cost of running this infrastructure, there are regulatory reasons that directly affect the balance sheet if banks can't show that they are indeed in consensus and running similar or identical business logic with their peers.
So I don't want to overstate or oversell this because change takes time.
Implementing change and implementing new and better systems in one organization is difficult.
Doing it across multiple institutions at the same time is of course harder
still. So we should expect this to be an incremental journey that we're on. But the prize is big and real.
So Richard, if you can for a moment put on your speculative hat and be completely irresponsible, and think about
what that will mean for the financial system. Like, what is it going to look like 10 years or 15 years
from now, once a lot of this change has percolated through the organizations and changed sort of the structure of
banks and other financial institutions?
I'm not going to rise to the bait.
I don't know the answer to that,
but I do think I know one thing,
which is that a lot,
not all, and I'm not here as an apologist
for some of the excesses,
but a lot of the complex instruments that exist,
exist to solve a real client need.
We talked about those interest rate swaps.
At the end of that chain of transactions,
there's almost always a company trying to hedge its interest rate risk or trying to ensure that it's protected against exchange rate fluctuations for some goods it's buying or for some goods it's selling.
There are people in the real economy at the end of this chain.
And because of the problems we've had in the past, there is a strong regulatory move toward ever more standardization of these instruments,
the moving of them towards central clearinghouses, so that the regulators get far more confidence that
there won't be any blowups.
To the extent that this technology,
and I think it will,
to the extent that this technology
allows that regulatory direction of travel to continue,
but we still get the customization
and the specialization we need
to actually solve the problems of the people
who are buying these products.
To the extent we can do that,
it will be a benefit to all.
So I can't predict what the financial system
will look like in two years,
let alone 10 years.
But any technology that drives
a sort of safety and consistency
through knowing that there are no mismatched trades, no incorrect views of the world,
while still allowing the banks to serve their ultimate customers, is what I'm focused on.
I'll rise to the bait. It'll be easier to be your own bank.
Yeah, no, I think that's a great point, right? Because in the end, you would imagine that,
first of all, the cost of running all these systems are going to be dramatically lower,
and then there's probably going to be much more standardization across the systems.
So yeah, maybe you can finally be your own bank, no?
Yeah, I don't know if it would ever happen, because there are lots and lots of people in these institutions who can say no.
But it would be, I think it would be fun to get to a point where, you know, your online bank has like an internet ATM,
and you can withdraw digital pounds and, you know, use them as you would have used bitcoins.
And there would be withdrawal limits and so on, just like with a real ATM because of security and other reasons.
But it would be a nice thing to be able to get to the point where you can send money around just like you could with Bitcoin, but you're using more useful currencies.
We've talked a lot about making the existing financial system more efficient, replicating an interest rate swap.
Are there entirely new products and use cases that today don't exist that will become possible?
once this technology, or technologies like this, have become widely adopted?
Well, there's a lot of discussion around using this sort of thing for supply chain management
and things like that.
Currently, and especially around management of trade, like international shipping and things
like this, which is currently an incredibly paper-based process.
I don't know if you would call them new products, but the act of selling things around
the world is remarkably bureaucratic outside of internet services and so on.
Making it easier to sell things, making it easier to figure out
where things have come from. If you look at some of the things going on in the world around
trade deals and so on, right, the traditional approach to tackling all this bureaucracy and paperwork
has been building political unions and political blocs, and that's starting to run out of steam.
So maybe if instead of tackling the bureaucracy and problems of these things with political unions,
you just have really, really, really efficient software and really efficient tracking and things like that,
then that's another way of making trade easier and tackling these problems.
Well, I don't know, maybe.
Maybe I'd add to that.
I guess Mike gave a pretty exciting example in the be-your-own-bank ATM.
That sounds like a new product to me.
It's not something that we are in the process of building,
but you can see how this technology could get you there if someone followed that train of thought.
The other thing, and maybe this is going a step further, is to think about what happens when
you have a network of Corda nodes upon which multiple applications, multiple CorDapps, as we call
them, are deployed.
The interaction and interoperability of those applications, I think, is also a source of potential
future innovation.
I don't pretend to have the imagination to say how that would be used.
But having applications that can, within the bounds of the consensus layer and the verification logic,
manipulate the state objects of other applications, build upon them, and reuse their functionality:
I don't think we've even begun as an industry to think about where that might take us.
Okay, so we're almost at the end, but there's one thing we do want to address very briefly.
So Corder, of course, is open source available for anybody.
Anybody can do anything with it.
It also doesn't have a built-in token or something like that to monetize it.
So there's a different project that you guys are working on called Concord.
Can you share anything about what Concord is, how it's different from Corder,
and maybe how that relates to the R3 business model?
Yeah, sure.
So exactly as you said, Corder is the software that our team has been building.
It will be open-sourced under the Apache 2 license
on November 30th, and it's our hope to contribute it to the Hyperledger project.
But as you say, that software, unless it's deployed and solving real business problems,
is an interesting curiosity, but it's not particularly useful. It's just interesting.
So the other piece of work, and we've spoken a little about this in public, is essentially
driven by the thought experiment that says, right, okay, that's great, we've identified these
problems that need to be solved and for which Corder is a foundation of the solution.
But this, almost back to the founding principles of R3, this is useful when it's solving real
problems for, in our case, banks and financial institutions. This software needs to be
deployed across them. They need to have nodes. They need to be secured. They need to be connected
to each other. The messages need to be routed. And just as I hinted earlier, this becomes more
valuable to institutions and their clients and to everybody when you don't have to
deploy a different network for every application. You want a common, shared,
inclusive network that other people, not just us, can come along and
deploy applications on top of. And that, in a nutshell, is Project Concord. It's the
vision for how we take the Corda open source software and deploy it as a network upon
which other people can then build applications, so we get the benefits from
sharing that infrastructure.
Yeah, that's the idea.
Okay, well, I think we're at the end.
We've been running quite a long time, but this is a very exciting project,
and it will be very exciting to see what happens when it comes out just a few weeks from now,
depending when we'll release this.
And I'm sure it's also something we'll come back to, right?
So I think it's our expectation, and probably certainly your expectation, that this will become
widely used, that probably lots of people will build applications on top of it,
will build technology around it
and it will be exciting to see what impact
this is going to have on the financial system.
Yeah, let's hope so.
Yeah, so thanks so much for joining.
Indeed. Thank you for having us.
And of course, we're going to have links to the white paper
and to some of the other resources that are out there,
and Mike's technical white paper is also going to come out in a few weeks.
So probably not at the time this comes out,
but we will certainly also tweet it out and share that.
So yeah, thanks so much to listeners as well for joining us once again.
So Epicenter is part of the Let's Talk Bitcoin Network.
You can find this show and many others on letstalkbitcoin.com.
And of course, you can subscribe to this podcast in any podcast application or watch the videos on
YouTube.com slash epicenterbitcoin.
So thanks so much and we look forward to being back next week.