Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Mark Miller: Agoric and the Decades-Long Quest for Secure Smart Contracts
Episode Date: May 7, 2019

We were joined by Mark S. Miller, Chief Scientist at Agoric. Mark is a computer scientist who has done ground-breaking work on many topics relevant to blockchain and smart contracts going back decades. We discussed his visionary 1988 Agoric papers, which explored how markets could be applied to the world of software. We also covered how his view of smart contracts, which focused on secure bilateral agreements, complements and converges with blockchain. Finally, we covered his new company Agoric and their conceptualization of higher-order smart contracts.

Topics covered in this episode:
- Mark's effort to prevent the government from suppressing the discovery of public key cryptography in the 1970s
- The legendary project Xanadu and its attempt to create censorship-resistant web publishing
- Mark's Agoric papers and the vision of markets for computation
- Why AI hasn't changed the shortcomings of central planning
- The difference between his view of smart contracts and Nick Szabo's
- Their decade-spanning work on making JavaScript the best language for smart contracts
- Agoric's work on higher-order smart contracting

Episode links:
- The Agoric Papers
- Computer Security as the Future of Law - YouTube
- Capability-based Financial Instruments (2000)
- Distributed Electronic Rights in JavaScript – Google AI
- Agoric at SF Cryptocurrency Devs - Programming Secure Smart Contracts - YouTube
- The Duality of Smart Contracts and Electronic Rights by Dean Tribble at Web3 Summit 2018 - YouTube

Sponsors:
- Azure: Deploy enterprise-ready consortium blockchain networks that scale in just a few clicks - http://aka.ms/epicenter
- Cosmos: Join the most interoperable ecosystem of connected blockchains - http://cosmos.network/epicenter

This episode is hosted by Brian Fabian Crain & Sunny Aggarwal. Show notes and listening options: epicenter.tv/286
Transcript
This episode of Epicenter is brought to you by Microsoft Azure.
Do you have an idea for a blockchain app but are worried about the time and costs it will take to develop?
The new Azure Blockchain DevKit is a free download that brings together the tools you need to get your first app running in less than 30 minutes.
Learn more at aka.ms slash Epicenter.
And by Cosmos. Cosmos is building the internet of blockchains,
an ecosystem where thousands of blockchains can interoperate, creating the foundation for
a new token economy. If you have an idea for a dApp, visit cosmos.network slash epicenter to learn more
and to get in touch with the Cosmos team. Hi and welcome to Epicenter. My name is Brian,
and my name is Sunny Aggarwal. So we're about to speak with Mark Miller, somebody
I've been wanting to have on for a long time, and we had a conversation about Agoric. So Sunny,
I think you were familiar with Mark Miller before as well. So where did you learn
about Mark and his work?
Everyone probably knows by now, you know, I work on Cosmos, and we're focused on, like,
interoperability between blockchains and sending assets between blockchains.
And so back when Cryptokitties first came out, I had a question about Cosmos, which made me,
like, I was asking myself, okay, I can send my crypto kitty from my, like, you know,
Ethereum blockchain to another blockchain.
But how do I breed my CryptoKitty on the other blockchain?
And like, you know, if I want to breed my crypto kitty, because that's the whole point of the game, right?
You want to like breed all these crypto kitties and stuff.
And so I realized, oh, wait, every time I have to go back to Ethereum.
And like, then I kind of, you know, I forget who it was.
Maybe Zaki.
He mentioned to me that like, oh, you know, check out this agoric thing.
I think that's the key to this entire thing.
And so I kind of like started deep diving into it.
And I just got so amazed.
I read this paper by Mark called Financial Instruments as Capabilities.
And that paper, I'm like, whoa, this is so cool.
And just like it got me really obsessed with it.
And then so, you know, I'm really happy that now, you know,
Agoric, they've actually been working very closely with Cosmos now.
They're helping us on the IBC specification and whatnot.
So, you know, it's great that, you know, Mark Miller has been like working on this stuff for like decades,
like him and, like, Nick Szabo and, like, you know, Hal Finney.
These are like, you know, in my mind they're like considered like the greats of like the cypherpunk movement.
And so, you know, I'm just super excited to get the opportunity to work more closely with him nowadays.
Yeah.
I mean, there's definitely so many interesting concepts there.
And it's not easy to understand, I think.
But, you know, people who kind of want to get into the weeds of these different mechanisms and architectures of decentralized computing and these kind of digital economies, I think, will find it very interesting.
So before we get into the interview, you're going to be at New York Blockchain Week too, no?
Yes, I am. And I think Friederike will be as well. And so I'll be at Token Summit for sure, and I'll be giving a talk there.
You might see me interspersing in and out of Consensus every now and then. And then Cosmos is going to be having a, you know, a meetup.
We're co-hosting a meetup with the Avalanche team in New York on the 14th.
So if you're free to come around for that,
please come check that out as well.
Cool.
Awesome.
And with that,
let's go to our conversation with Mark.
So we're here today with Mark Miller.
And I actually tried to have Mark on the show around four years ago in 2015.
Somewhere I had stumbled on his work.
He had done this work on kind of smart contracts,
you know,
very long time ago,
much before Bitcoin and blockchain and all that.
And I,
you know,
he wasn't working in the
blockchain or Bitcoin space back then. But I found his email and I emailed him and he was at Google
at the time and he sent me a talk that he did in 1997 about smart contracts and the kind of
legal ramifications and technological ramifications of smart contracts, which was just amazingly
prescient. You'd watch the talk and it's astonishing how many of the ideas that later became
you know, kind of widely used were there. So unfortunately it didn't happen
back then that we had him on, but since then, Mark has transitioned.
He's left Google and he's working fully on kind of decentralized networks and digital money
and kind of the blockchain space in general.
So I'm really excited that, you know, finally the episode is happening and we're having
you on Mark.
Well, I'm very happy to be here.
So to start off, I mean, you've been part of this cypherpunk cryptography world for a long time.
But how did you originally become involved in that?
So I'm going to go all the way back to 1977.
I was working with Ted Nelson on Xanadu.
Xanadu and Augment were the two early hypertext projects, well before the web.
And Xanadu was the one that had the vision of worldwide hypertext publishing as the
electronic medium for humanity.
And Ted and I were both very influenced by George Orwell's 1984, the Ministry of Truth.
And we understood that the coming world of electronic publishing could be a force for oppression
and tyranny, or could be a great liberating force, giving us all privacy and freedom from
censorship.
And we very much wanted to do the second.
We saw it as our mission to lead the world into the coming of electronic publishing as a liberating force, and we didn't know how to do it.
In 1977, Martin Gardner was editing a column for Scientific American named Mathematical Games, and one issue of that column explained the discovery of the first public key algorithm, the RSA
algorithm. And he did not actually explain the algorithm. He explained the logic of what you could do
with a public key system, both the asymmetric encryption for privacy and the asymmetric signing
for integrity. He painted a very nice picture of the power of this. I called Ted up in the
middle of the night, very excited. Ted, we can prevent the ministry of truth. We wrote away for the
paper. The paper did not arrive. And we found out that the reason it did not arrive is because the U.S.
national security apparatus, some part of it, decided that the paper should not be publicly
released. I'm going to say classified. I don't know what the legal category is, but they made it very
clear that they would consider it to be illegal to distribute the paper. I got really incensed by this.
I got passionate and angry in a way that I have really not in my life before or since, feeling quite
literally they are going to classify this over my dead body.
I went to MIT, hung around campus, managed to get my hands on a paper copy, or rather, gloves.
I was very careful to handle it only with gloves.
I took it to various copy shops.
There were no home copy machines.
I made lots of copies at different shops, put them anonymously into envelopes,
sending them from a variety of mailboxes, sending them out to home and hobbyist computer
organizations and magazines all across the country without any cover letter, just the article
itself. Fortunately, early in 1978, the U.S. government decided to declassify. They gave the green
light for distribution of the paper. Communications of the ACM immediately published the paper in
the February '78 issue. And I will never have any idea
whether the actions I took had any impact.
I don't have any particular reason to believe they did have an impact.
But the experience of doing that, for example,
handing copies of the paper to some select friends saying,
if I disappear, make sure this gets out.
This was a really radicalizing moment for me,
of realizing the power of cryptography to change the world,
to protect us as individuals from large and oppressive institutions,
that this was worth fighting for.
Such an amazing story.
And I think it's hard for people today to conceive this, right?
Because today, okay, somebody writes a paper
on some new science thing, they can publish it, right?
So the idea that the government could, like, try to say,
okay, this information is too important, that, you know, the people shouldn't know about this.
That's, it's pretty, it's pretty amazing.
Yeah, there have been several phases of governments, of the U.S. government in particular,
impeding the progress of cryptography and impeding the progress of decentralized markets,
smart contracts built on cryptography.
Export controls lasted until about 1998.
The E language, my distributed cryptographic object capability language,
in which a lot of the language-based smart contracting ideas came together.
We came out with that language during the era of export controls.
So we had to actually split the effort,
where we were distributing it from the US without the cryptography.
And then Tyler Close, a collaborator, living on Anguilla, a Canadian citizen,
then reverse engineered how to put the crypto back in.
And the e-language was actually distributed from Anguilla during those days.
There was also the Clipper chip, trying to get mandatory trapdoors into
cryptography. And then in 1998, export controls were lifted. And then after 2001, with 9-11,
there was the Patriot Act and suddenly this big chill in the air, where Doug Jackson from e-gold,
one of the first attempts at doing a cryptographically based currency system,
in this case backed by physical gold, he was arrested.
And there was a chilling of the work from then forward.
So there was a lot of fighting going on.
There was the RSA T-shirts where people would have the RSA algorithm written on a T-shirt
and go across borders with it, kind of daring people to arrest them
because it's a free speech issue at that point.
This episode of Epicenter is brought to you by Microsoft
and the Azure Blockchain Workbench.
Getting your blockchain from the whiteboard to production
can be a big undertaking.
And something as simple as connecting your blockchain
to IoT devices or existing ERP systems
is a project in itself.
Well, the folks at Microsoft have you covered.
You already know about the Azure Blockchain Workbench
and how easy it makes bootstrapping your blockchain network
pre-configured with all the cloud services you need
for your enterprise app.
Their new development kit is the IFTTT for blockchains.
Suppose you want to collect data from someone in a remote location
via SMS and have that data packaged in a transaction for your Hyperledger Fabric blockchain.
The development kit allows you to build this integration in just a few steps in a simple
drag-and-drop interface. Here's another great example. Perhaps you're an institution working with
Ethereum and rely on CSV files sent by email. One click in the DevKit and you can parse these
files and have the data embedded in transactions. Whatever you're working with, the DevKit can
read, transform, and act on the data. To learn more
and to build your first application in less than 30 minutes, visit aka.ms slash epicenter.
And be sure to follow them on Twitter at MSFT blockchain.
We'd like to thank Microsoft and Azure for their support of Epicenter.
So you talked a little bit about Xanadu and how the thing that you guys saw there was
this idea of censorship-resistant publishing and just amazing, you know, force it would be in creating
freedom. Of course, the parallels to Bitcoin are like, you know, astonishing, right? Because people
would always speak about, okay, censorship-resistant money and basically speak about it in very,
very similar terms. Did you guys back then, as early as, you know, when you started working
on Xanadu, did you already think about, okay, maybe there should also be something like censorship-resistant
money and what that could look like? I don't know that my thinking about
cryptographic commerce all the way to cryptographic money goes back that far.
I think my first exposure to really strong crypto for money... when did Chaum's DigiCash first come out?
Yeah, I think that was in the 80s, I think.
In the 80s, okay.
At the time we wrote the Agoric Open Systems papers in 1988, we assumed
secure electronic money and micro-payments without really exploring how to achieve that.
It was more of assuming that there's some solution to that, then elaborating and exploring
all of the kinds of smart contracts, all the kinds of behavioral commercial institutions
and auctions and various kinds of incentive engineering, we called it, which is now
called mechanism design.
We explored all of that as computational embodiments of contractual arrangements and institutional
arrangements, assuming that there would be an underlying money system.
I did do in my 1987 paper, Logical Secrets, a really terrible first attempt at a distributed secure
money, but the idea of doing a money with no central issuer like blockchain has, I did not see
anything like blockchain coming. I was much more thinking in terms of like Hayek's paper on the
denationalization of money, where you have many separate currencies competing with each other.
And this is in general, a theme I'm going to come back to, which is, in general, my approach was
decentralized, not the way in which people in the Bitcoin space, in the blockchain space,
referred to decentralize, which is mutually suspicious parties all coordinating together
to derive a consensus on single decisions.
That's one form of decentralization.
Let's call that coordinated decentralization.
I was much more thinking what I'll call loosely coupled decentralization, which is what we see in the internet, what we see in the web, where there's tremendous architectural diversity, there's essentially no decisions that everyone has to jointly make.
And Hayek's denationalization of money was basically saying the same thing with money. Let many monies compete with each other, let reputation feedback and competition drive the system
towards emergent robustness, so any one money might fail, but if it fails, the competition
would drive customers to other monies. And we saw that as a model for commerce in general.
Right. Like Brian and I actually chatted once with James Dale Davidson, who wrote the book,
The Sovereign Individual. And a lot of people, you know, try to draw a lot of parallels between
that work and Bitcoin. But, you know, in that work, he's actually talking, you know,
talking about decentralized money in a very similar way that you
are. Like, you know, he talked about, like, cryptographic money, but he actually really had the idea that,
like, you know, there'll be many, many private issuances of money. Like a Swiss bank will issue its
own money backed by gold and, you know, this people will issue their own money. And then, you know,
users will kind of choose which money they want to use. Yeah. And in the mid-90s, um,
Dean Tribble, who's now one of the founders with me of Agoric, and who had been collaborating with me
all the way back in the late 80s... Dean Tribble and Norm Hardy, creator of the KeyKOS object-capability
operating system. The two of them came out with a decentralized payments proposal.
You can think of it as a decentralized money called the Digital Silk Road, which was basically
routing payments through
pairwise bilateral relationships.
Each bilateral relationship has a credit window.
So I won't go into it.
It has many similarities
to what Interledger is now doing.
But the main thing is that it really was
this hyper-decentralized
in a loosely coupled manner
system of payments,
but then as you accumulated imbalances
for each bilateral
relationship, you'd have to clear the imbalance through something else.
And that something else was just still assumed to be a variety of competing real-world
monies, with no new insight as to how to make those cryptographic.
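The Digital Silk Road idea described above, pairwise credit windows with payments routed hop by hop along bilateral relationships, can be sketched roughly as follows. This is an illustrative toy in JavaScript, not the actual protocol; the class, names, and limits are all invented for the example.

```javascript
// Toy sketch of routing a payment through pairwise bilateral credit lines.
// Each edge tolerates only a bounded imbalance; beyond that, the parties
// would have to clear through some external settlement money.

class CreditLine {
  constructor(a, b, limit) {
    this.parties = [a, b];
    this.limit = limit;   // max imbalance either side will tolerate
    this.balance = 0;     // positive: second party owes the first
  }
  // shift `amount` of credit from the payer toward the other party
  pay(payer, amount) {
    const delta = payer === this.parties[0] ? amount : -amount;
    if (Math.abs(this.balance + delta) > this.limit) {
      throw new Error("credit window exceeded; imbalance must clear first");
    }
    this.balance += delta;
  }
}

// Route a payment along a path of bilateral relationships, hop by hop.
function routePayment(path, payer, amount) {
  let current = payer;
  for (const edge of path) {
    edge.pay(current, amount);
    current = edge.parties[0] === current ? edge.parties[1] : edge.parties[0];
  }
  return current; // final recipient
}

const ab = new CreditLine("alice", "bob", 100);
const bc = new CreditLine("bob", "carol", 100);
const recipient = routePayment([ab, bc], "alice", 40);
// recipient === "carol"; each edge now carries a 40-unit imbalance
```

The accumulated `balance` on each edge is exactly the imbalance Mark describes having to clear through something else when it approaches the credit window.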
So I want to give a special credit here to Nick Szabo, because during this period of the 90s,
first of all, his vision of smart contracts was of tremendous influence on me,
but also the kind of thing that we now understand from blockchain,
Nick Szabo was trying to explain the power of that to me.
And I wasn't understanding it.
And I did not understand it until I saw blockchain,
until I understood how Bitcoin and Ethereum work.
And then there was this, aha, that's what Nick was talking about all this time.
So while I was thinking about the emergent robustness from competition and reputation feedback in a loosely coupled network where any one point can fail, inspired by the dynamics of the marketplace in terms of what happens between businesses, Nick was very focused on the internal controls by which a large institution can, by having internal controls and public audits
and well-designed governance systems and separation of duties,
you can build an individual institution that can be much more trustworthy
than any of the individuals in it.
And Nick understood that things like Byzantine fault tolerance,
like massive replication with cross-checking and consensus mechanisms,
are kind of the extreme form of internal control,
so that we can now build a logical individual institution
that is much more trustworthy
than anything humanity has been able to build before.
And the kind of contract,
there's some kinds of contracts for which that's needed.
And the one for which it's most needed,
which was highest leverage, is money.
And it's no accident, I think,
that we saw it emerge first with cryptocurrency.
Or specifically, at least,
for money issuance. Like you said, you know, I like to call that the distributed
version. The distributed version... you know, the Interledger protocol is definitely,
I think, very, very similar to what you're, what you're talking about here. But then, you know,
Interledger, right, it doesn't have, like, a native money. It kind of depends on...
it assumes the existence of some other settlement mechanism. While
in Nick Szabo's vision, like, you know, it seems that, yes, this is good for coin issuance,
but at the end of the day, like, you know, maybe payments don't need to be on this.
So I think, you know, what I think is actually really interesting is that, like, the Lightning
Network seems like a combination of these two ideas, where you use a base,
redundant system for issuance.
And then you try to use a distributed system for payments.
And you can also use the base system, along with issuance, as a message board or, like, for this reputation, right?
One of the issues I always had with Interledger is, yes, it assumes the existence of reputation.
But where does this reputation live? Is there a bulletin board where I can go tell everyone that, hey, look, this guy screwed me over?
There isn't.
And so that's one of the things that a redundant blockchain also gives you.
That's kind of what Lightning does, where, like, you know, if you want to challenge someone, you can challenge them on the base chain.
So I think it's kind of cool to see that, like, you know, your vision and Nick Szabo's are kind of both partially correct.
Yeah.
It took me a long time to see that.
I think that's exactly correct.
I want to give a shout out to Jorge Lopez, who had studied both what was going on in blockchain as well as my old papers.
And he came to me with the integrated vision.
And then I saw that, oh, it's not that Nick's vision
and my vision are alternatives competing with each other. They actually fit together,
and they're actually about different layers of the system. And that very much inspired what Agoric
is now doing, my new company. So the way we see the combined vision is that you still want the
overall system to be a loosely coupled network of mutually suspicious machines hosting
mutually suspicious computation talking to each other. But now we can view a blockchain
as a way to build a computer out of agreement rather than building it out of hardware.
By building it out of agreement, you now have a logical computer that's much more trustworthy
than any one physical piece of hardware can be.
But now it's still, that logical computer is just one node on a much larger network.
And that larger network can include other communication, secure communication between chains,
secure communication between chains and not chains.
So all the kinds of coordination we were doing with cryptographic protocols in a loosely couple distributed system,
we can now do that as well on top of blockchain.
and include blockchains within that overall fabric.
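The idea of building a computer out of agreement can be illustrated with the standard replicated state machine pattern: if every replica applies the same agreed-upon log of transactions to the same deterministic state machine, all replicas compute the same state, yielding one logical computer. This sketch is generic, not Agoric's design, and it assumes consensus on log order, the hard part a blockchain solves, has already happened.

```javascript
// Deterministic state machine: given the same ordered log, every replica
// computes the same state, no matter which physical machine runs it.

function applyLog(log) {
  let state = 0; // deliberately trivial state for illustration
  for (const tx of log) {
    if (tx.op === "add") state += tx.n;
    if (tx.op === "mul") state *= tx.n;
  }
  return state;
}

// The "agreement" part: all replicas have consented to this exact order.
const agreedLog = [
  { op: "add", n: 2 },
  { op: "mul", n: 5 },
];

// Three mutually suspicious replicas each apply the log independently...
const replicaStates = [applyLog(agreedLog), applyLog(agreedLog), applyLog(agreedLog)];
// ...and converge on the same state, forming one logical computer.
```

The resulting logical machine is then just one node in the larger loosely coupled network Mark describes, talking to other chains and non-chains over ordinary cryptographic protocols.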
Yeah, no, I think it's really nice how you explain this.
And one of the things that kind of comes to my mind, the way you speak about it, is that,
you know, you can think of... you know, often one talks with blockchains about, you know,
removing a third party.
But in a way, the blockchain is a third party, right?
It's just a decentralized third party.
Right.
So in many ways, maybe the way the interactions,
economic interactions work is not that different from the existing world. It's just that instead
of the centralized third party, you have the decentralized third party. Whereas your work kind of goes
into, you know, more, in a way, it's a more radical direction in that it's actually
decentralized. You don't have the third party so much anymore. And then of course,
if you bring the two together, you have maybe some of these architectural differences in
terms of the way the interactions work. And then when you need a third party, you have a decentralized
third party. So yeah, I think it's super fascinating
how you have these kinds of different ideas and the different ways that they are playing out.
Yeah, I think that there's some small number of institutions like money,
like Augur is another great example, a worldwide prediction market where you need worldwide
credibility without prior negotiation.
But most contracts are local, and they don't need to run on a globally credible blockchain.
And the transactions that they do, they can do against local representations of remotely pegged money,
which is what several parties, including Cosmos, are doing, what we're doing, and what Lightning is doing,
where the transactions that don't need to themselves be on the blockchain can happen much faster and much more privately.
and then the outcomes of the transactions can roll up into net inflows and net outflows,
and the outcomes eventually roll up into public blockchains
without having to reveal what the contracts were that they rolled up from.
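The roll-up of private, off-chain activity into net inflows and outflows can be sketched as simple netting. This is an illustrative simplification, not Agoric's, Cosmos's, or Lightning's actual mechanism; the parties and amounts are invented.

```javascript
// Net a batch of private transfers down to per-party totals. Only these
// net positions would need to settle on a public chain; the individual
// contracts behind them stay private.

function netPositions(transfers) {
  // transfers: [{ from, to, amount }] executed privately, off-chain
  const net = {};
  for (const { from, to, amount } of transfers) {
    net[from] = (net[from] || 0) - amount;
    net[to] = (net[to] || 0) + amount;
  }
  return net;
}

const offChain = [
  { from: "alice", to: "bob", amount: 30 },
  { from: "bob", to: "alice", amount: 10 },
  { from: "bob", to: "carol", amount: 25 },
];

const settlement = netPositions(offChain);
// settlement: { alice: -20, bob: -5, carol: 25 }
// Three private transfers settle as three net positions summing to zero,
// without revealing the transfers themselves.
```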
So you wrote a set of papers called, like, Agoric Open Systems, I think, right?
And there were three different papers, and they were quite
widely read and had some impact, I think.
So do you mind walking us through, like, what were the core ideas that you were
exploring in these papers?
There are three papers.
The central paper is the one called Markets and Computation, Agoric Open Systems.
And that's the one where we really go through all of the layers of our vision and how each
layer builds on the previous layer and arguing for why our foundational layer was necessary to
support the higher layers. So at the lower layer, we talk about computational foundations,
distributed computational foundations, with encapsulation and communication of information,
access, and resources. And that's, encapsulation and communication is very much
sort of the centerpiece of object capabilities. Encapsulation is a form of property rights,
a form of ownership. Communication is a form of rights transfer. So together they form a core rights
theory. Information, access, and resources maps very cleanly to confidentiality, integrity,
and availability. Integrity turned out to be the core issue that most of our later work
through the decades has been on.
So object capabilities at the low level,
and then smart contracting and markets and auctions
for dynamic price discovery and adaptive price-based behavior,
including with regard to applying the invisible hand
to resource allocation issues,
things like auctioning off the next CPU time slice,
having markets in storage space and network bandwidth.
And then on top of that, a vision of how the coming
of distributed decentralized electronic markets
covering the world would be enmeshed with and part
of the human economy and really changed the nature
of the human economy.
So that was the central paper.
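The pairing of encapsulation as a property right with communication as a rights transfer is concretely visible in the mint-and-purse pattern from the object-capability literature, for example Mark's later "Capability-based Financial Instruments" paper, which used the E language. Here is a minimal JavaScript sketch of that pattern; it is simplified and not taken verbatim from the paper.

```javascript
// Mint/purse sketch: only code inside makeMint can reach the ledger, so
// holding a purse *is* the property right, and handing someone a purse
// reference is a rights transfer.

function makeMint() {
  const balances = new WeakMap(); // private ledger, reachable only here

  function makePurse(initial) {
    const purse = {
      getBalance: () => balances.get(purse),
      deposit(amount, src) {
        // only purses of this mint appear in `balances`, so forged or
        // alien purses are rejected automatically
        const srcBal = balances.get(src);
        if (srcBal === undefined || amount < 0 || amount > srcBal) {
          throw new Error("invalid transfer");
        }
        balances.set(src, srcBal - amount);
        balances.set(purse, balances.get(purse) + amount);
      },
    };
    balances.set(purse, initial);
    return purse;
  }
  return { mint: makePurse };
}

const { mint } = makeMint();
const alicePurse = mint(100);
const bobPurse = mint(0);
bobPurse.deposit(40, alicePurse); // Alice pays Bob 40
// alicePurse now holds 60, bobPurse holds 40
```

Note that the integrity property Mark emphasizes falls out of the encapsulation: no client of a purse can invent money, only move it.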
The incentive engineering paper, that's
the one where we actually sort of go into the detailed design of some core auction
mechanisms for doing this allocation, and some game-theoretic analysis of it.
And the term incentive engineering... we didn't know about the mechanism design literature,
but that's just our term for what has otherwise been called mechanism design. And then the
comparative ecology, a computational perspective, is another kind of big picture paper.
This one, taking a look at various complex adaptive systems that we see in the world,
systems in which coherence emerges from a process that you'd call some kind of evolutionary
ecosystem.
So we looked at real-world human marketplaces.
We looked at biological ecosystems.
We looked at some AI systems that were making internal use of evolutionary
adaptation, EURISKO in particular, and we were trying to compare and contrast them in order to
learn what is the framework that would best create the selective pressure from which distributed
problem solving would emerge. And we very much supported the use of market mechanisms as a robust
system of selective pressure to encourage this emergent growth of problem-solving ability.
So those were the three papers.
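The mechanism-design flavor of the incentive engineering work can be illustrated with the textbook sealed-bid second-price (Vickrey) auction, in which truthful bidding is a dominant strategy. This is a generic example of the genre, not the specific auction mechanism designed and analyzed in the paper.

```javascript
// Sealed-bid second-price auction: the highest bidder wins but pays the
// second-highest bid, which removes the incentive to shade bids below
// one's true value.

function vickreyAuction(bids) {
  // bids: [{ bidder, amount }], assumed to have at least two entries
  const sorted = [...bids].sort((a, b) => b.amount - a.amount);
  return { winner: sorted[0].bidder, price: sorted[1].amount };
}

// Hypothetical tasks bidding for the next CPU time slice:
const result = vickreyAuction([
  { bidder: "taskA", amount: 9 },
  { bidder: "taskB", amount: 7 },
  { bidder: "taskC", amount: 4 },
]);
// taskA wins and pays 7, the second-highest bid
```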
What was the context of these papers?
And so you co-authored these with Eric Drexler.
And so, you know, for people who don't know, he's often called like the father of nanotechnology.
And so, you know, that seems very far off from some of the stuff that you were
working on. And so I guess how did you meet Eric Drexler, and how did you guys decide to write these three papers together?
So Eric and I have very aligned visions of the future. And Eric's work, when I first met Eric, he was working
on light sails, on basically solar sails for propulsion in space. And he was
presenting at a space conference. I was working with Ted on Zanadu. I think this was the late
70s, 79 maybe, the Princeton Space Industrialization Conference. And I explained to him about
hypertext and about Zanadu, and his jaw kind of dropped open and he said, do you know how important
that is? And I actually learned to appreciate hypertext through his view of it. He saw value in
hypertext that none of the rest of us had and really deepened our view of what was so great about it.
So we were talking about all sorts of things, but we were thinking in terms of a much higher
tech future, a higher tech future that would have, for example, the scale of computation
that we would have with nanotech-based computers, which is still many orders of magnitude
beyond the scale of computation we have today. And it was clear to us that at that scale of
computation, the central planning approach to coordination would not work and that you needed something
decentralized where the overall goodness of the system emerged through loosely coupled decentralization
through a coherent framework of rules. And it was that future orientation and also our fascination.
There was another critical breakthrough, also, which came from... I was explaining to Eric my excitement
about object-oriented programming.
And when I explained to Eric about the power of encapsulation in object-oriented programming,
he said, oh, that's like Hayek's explanation of the utility of property rights.
And that was a big aha moment for me.
That aha moment, I think more than anything else, is what led to the Agoric work.
So there's many virtues of property rights, but the one that Hayek explained is in terms of
plan interference.
Hayek says that the central problem of economics is how is it that all of these separate
creatures, people, with all their various intentions and mostly ignorant of each other,
formulate plans, where these plans are to serve their interests and to unfold in a world
in which the plans of other agents, that have been formulated in mutual ignorance of each other,
are all unfolding together.
How do you keep these plans from interfering with each other?
And Hayek said, well, one element is that by dividing
up the resources into separately owned parcels where each planning agent knows that there are some
resources that he has exclusive access to, he can formulate some plans minimizing plan
interference with other agents. Well, that's exactly the object-oriented understanding of
encapsulation: it is a way to enable programs that are formulated separately to be able to operate
on their own encapsulated data free from interference by each other,
and that enables these separately formulated plans to be composed together
to realize cooperative opportunities from the composition
while still minimizing the dangers of destructive interference with each other.
So that understanding made both our understanding of Hayek's point
and our understanding of object orientation deeper
and led to the appreciation of object capabilities as a form of encapsulation and coordination
that is not just minimizing the dangers of accidental interference,
but also minimizing the dangers of purposeful interference.
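The step from minimizing accidental interference to minimizing purposeful interference shows up in how object capabilities let you hand out attenuated authority. Here is a minimal JavaScript sketch with invented names: a read-only facet of a counter lets separately written code observe state without any ability to disturb the owner's plan.

```javascript
// Encapsulated state as an "owned parcel": only holders of the full
// counter capability can change it. The read-only facet is a weaker
// right, deliberately attenuated before being shared.

function makeCounter() {
  let count = 0; // private: no way to reach this except via the facets below
  const counter = {
    increment: () => ++count,
    value: () => count,
  };
  // attenuated facet: grants observation, withholds the right to modify
  const readOnly = { value: () => count };
  return { counter, readOnly };
}

const { counter, readOnly } = makeCounter();
counter.increment();
counter.increment();
// readOnly.value() reports 2, yet readOnly carries no increment method,
// so code holding only readOnly cannot interfere, accidentally or on purpose
```

The design point is that which rights a component holds is decided at composition time by what references it is given, rather than by trusting the component's good behavior.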
Okay, so this is a very interesting concept.
So let me try to dive into this a little bit.
So you said that, okay, with all of these, you know, very powerful computers, let's say with nanocomputers and stuff, then the central planning approach wouldn't work anymore with computing.
But it seems like the way you're speaking about it is, let's say I have, or let's say a company, right, a company has various different employees and, you know, resources and stuff like that.
Now, within that company, obviously, there is a kind of a central planning approach, right?
That's sort of the nature of companies, right?
That you say, like, okay, there's markets between all the companies,
but then within each company, there is the central planning approach.
And then I guess there was the, you know, work by Ronald Coase and stuff
about, you know, what determines the size of these firms and the kind of transaction costs.
But are you basically saying that if you think of the different components of a computer program
or computer architecture, all of them should interact with some market mechanisms. And if that's the case, like, how does that align with property rights? Like, does it make
sense, let's say, for a company to own all of these computing resources? And then there's still
being some market where all of these computing resources, like, interact in, you know, sort of making
payments and trying to maximize their profits and stuff like that?
So that's a big question. It has many parts to it. First part of answering it is that I
think that prices and adaptive price behavior is not the important early step. I think the
important early step is a system of rights-based coordination so that things that are formulated
separately, mostly in ignorance of each other, can still be composed together, so that people can create reusable libraries,
where there's in the computational fabric
a notion of separately owned data and resources
so that we can compose reusable components
and get larger outcomes.
And the modern richness of software, I think,
has largely been based on kind of an informal,
hacky, imperfect, insecure rights-based coordination. This is the encapsulation of conventional object-oriented programming.
And within a company, you also have imperfect systems that are like prices. You have, for example,
on a single machine, you have various forms of priority. On a Google data center, you've also got
various priority and urgency knobs and resource allocation knobs. And all of these are self-reported.
You can think of it, if you want, as a central planning scheduler, or you can think of it as an analog of an auction mechanism. But it's not a
central planner in the sense of it making the decisions about what priority other things should have.
Rather, all of the other things self-report their priority very much the way players in a market express priority by using money and produce price information.
So this is kind of a cheap analog of prices.
And the reason why you can get along both with insecure encapsulation and imperfect price mechanisms within a company is because the company has various kinds of sanctions.
Everyone within the company is trying to cooperate with each other.
If someone is seen as too abusive, as taking advantage, then the company has other ways to react.
So companies have strong admission controls, whereas as soon as you expose this to the outside market, now you don't have those other forms of feedback. You need genuine protected
objects, protected boundaries. And, for example, Ethereum, with the gas system, has to have a genuinely robust system of selling resources, not so much in order to have efficient
resource allocation, but in order to have not terrible resource allocation, it's not so much
a question of optimizing. It's a question of de-pessimizing. It's a question of avoiding the
really terrible behavior. And companies internally have other ways to avoid the really terrible
behavior. This episode of Epicenter is brought to you by Cosmos, the internet of blockchains. Cosmos is live and we couldn't be more excited to see so many projects already building on it. Blockchain technologies are evolving fast, and development shouldn't be one-size-fits-all.
As a dapp developer, you need the tools that will allow your dapp to scale, grow, and evolve
over time.
The Cosmos SDK is a user-friendly, modular framework which allows you to customize your dapp to best suit your needs.
It's powered by Tendermint Core, an advanced implementation of the BFT proof-of-stake protocol.
Cosmos takes care of networking and consensus and allows you to focus on building your application
in your language of choice.
Ethereum smart contracts will be supported soon, and the SDK makes it simple for you to connect
to other blockchains in the Cosmos network.
If you have an idea for a dapp and would like to learn more about the Cosmos SDK, or if
you'd like to connect your existing app to Cosmos, visit cosmos.network slash Epicenter.
For Epicenter listeners, the Cosmos team will reach out to answer your questions and help you
get started.
We'd like to thank Cosmos for their support of Epicenter.
So I'm actually really glad you mentioned Google's data centers as an example here, because I read an article actually a few weeks ago talking about how Google's actually using their DeepMind AI to coordinate energy resources within its data centers, and that this experiment of theirs actually reduced their cooling costs by 40%.
And so, you know, do you think that maybe humans aren't the best way of doing central planning? And maybe this leads into a larger political question, but do you think AIs are on the, you know, brink of being better central planners than both markets and human central planners?
So first of all, I want to say, I don't know the particular system that you're talking about.
I know a lot about how Google operates more conventionally, before they started applying DeepMind technology to this issue. But I also just want to mention sort of a reasoning by analogy here.
Back in the 1940s and 1950s in the socialist planning debate,
when Hayek and Mises would talk about what unfortunately came to be known as the calculation problem, what should have been, and what came to be known in later years as, the knowledge problem. The calculation problem was: well, you can't centralize the knowledge needed for a central planner to act, that's the knowledge part of it, and then there's no possible way you can build a central planning institution that could act on that knowledge.
And back then, the advocates of central planning were pointing at, look at these newfangled computers.
Surely these computers will grow up into central planning agents.
and they can solve the calculation problem,
and now we can do central planning.
And there was a false asymmetry that was assumed there,
which is they were imagining the market of the day
with the complexity of the market that they knew
and imagining that the planners were much more capable
than planners of the day
because they were using computers. But they didn't imagine that the markets would also have players that were using computers
and therefore were all much more complex and interesting.
And in fact, the knowledge problem gets worse, not better, as the individual players get
more sophisticated and embody more knowledge that they're also not able to articulate.
You get almost a Turing problem there where, you know, the central planner computer can't simulate all of the millions of computers in today's economy?
Right.
So with regard to the deep mind thing, once again, I don't know that specifically, but what I'll
react to is the thing that it's planning is about temperature and power and such things.
And that's also not a set of resource allocation decisions that programmers have been
writing their programs to deal with. It just hasn't been on the radar traditionally so that there
is no local decision-making by programs to try to be adaptive on those regards. So it's essentially
a situation where we had no decentralized planning and very poor centralized planning. So it's a
situation where we're planning so badly that even a central planner can do better. Once you've got
that kind of sophistication in the agents that are subject to the plans and they are now also as
capable of reasoning about those issues, then you have to again ask, does the asymmetry go
away, where the central planner has gotten special technology ahead of all of the agents that are
subject to its control?
That's really nicely explained.
And I must say I find it kind of encouraging, knowing that if this is true and continues to hold true, then maybe it is something that will kind of work counter to some of the centralizing aspects that come with AI.
And so then one last question I have about the papers before we go back into talking about the Agoric company.
but, you know, what about like the fact that like, you know, when Hayek, he talks about, you know, part of the issue, I think, is that humans are very complex beings, that it's, you know, part of the measurement problem or information problem, as you phrase it, was how do you measure people's utility functions, right?
Like, you know, we don't have a way of doing that.
But when we're talking about bots here, right, like, you know, just computers, like it's pretty, I feel like, at least until we have very strong AI is.
Like, they don't seem very complex creatures.
And so I think it might be possible to model these simplistic bots rather than humans.
And so I don't know if some of Hayek's ideas around this, like, complexity of humans comes into play or not.
The notion of a utility function is, I think, very much like the notion of the perfectly spherical cow.
There's this complex real world, both for people and the programs, where what you've got is behavior that has been shaped over time to be adaptive and serve some interests.
And then you have outside the system using the concept of the utility function as one way of idealizing the behavior to reason about it.
But there is no representation in the person's head or in the program of a utility function.
Programs have complex behavior. They are written by programmers and modified by programmers over time to adapt to whatever the complex job is that the program is doing, both with respect to what the job is and with respect to how the program performs the job. And the programmers modify and change it in complex ways to just be more adapted.
And it's very hard to reason about programs. What we know is that it's impossible in general
to predict what a program will do other than by running it. So then our computer systems
run the programs, discover what they do by running them. But I wouldn't
call that central planning. I would call that just a distributed system of the running programs.
Cool. And so kind of to go back into, lead back into the blockchain stuff, one of the things that,
you know, kind of interested me about this property rights, you know, I think in the blockchain
space, we have two very prominent models of property rights and fees, transaction fees,
that are kind of dominating right now and are very different.
You have the first one, which is, you know, kind of done by Bitcoin and Ethereum,
where, you know, there's a limited amount of block space or gas limit,
and people use fees to bid; it's essentially an ongoing auction where there's a limited amount of block space, and if you want to get in, you have to put in a fee, and, you know, the highest bidders get their transactions included.
And, you know, there's a lot of innovation going around in that front.
Like, you know, Vitalik has a proposal for doing different types of auction mechanisms and whatnot.
But then there's a complete other end, which, you know, I think this is one of the few
interesting things that EOS actually did, was they proposed a more property rights-based model
of fees.
So the more EOS tokens you have, you get, you know, it's a little bit more complex, but for simplicity's sake, you could say that if you own 5% of the EOS tokens, you have the rights to use 5% of the EOS blockchain's resources.
You have 5% of the disk space, 5% of the computation power.
And so that takes almost a more property rights approach rather than this constant auction.
So what are your thoughts on these two approaches?
So I don't know the EOS approach.
I also don't know Vitalik's recent proposals. But right now we don't have good composable systems of electronic rights.
And I think that that's really sort of the prior issue.
So in that sense, I'm responding positively to what you said about EOS,
even though I don't know the actual EOS system,
having a foundation in rights and rights transfer is, I think,
the right conceptual starting point such that markets emerge from interaction between multiple
parties within a rules-based, rights-based framework. And obviously, auctions are one way to do that.
Proportional share ownership rights is another way to do that. All of these things are worth
exploring, I don't have a strong opinion that one is better than the other. I will say that
that Agoric is planning to implement the escalator algorithm for scheduling on the Agoric blockchain,
but we also want to encourage all sorts of different experiments there. Okay, this is perfect because that's kind of leading us exactly where I wanted to go. So, I mean, there were the papers many
years ago, which had the name Agoric in it, but then much more recently also, you know, you co-founded a new company that is also called Agoric. So can you tell us a little bit: what is the main vision of the company? What are you guys trying to accomplish?
So what we're trying to accomplish is to bring the world economy online. And right now,
there's a problem, which is the blockchain space, the world of smart contracting that we're seeing
has not been successful at penetrating the mainstream economy. That it's basically this separate
world and the business activities in the mainstream economy see a barrier there that they're not
getting over. So markets are all about network effect. We want to create a distributed system of objects and contracts on different platforms, blockchains, non-blockchains, permissioned quorum systems, individual machines, both publicly and within companies. We want to span that entire network of activity in a uniform framework of, at the low level, object capabilities,
and then at the high level,
the system of electronic rights and smart contracts
that we want to build on top of that.
And the result is that we want to enable the mainstream economy
to be able to take incremental steps towards adoption
of the technology
where all of the steps towards complete public participation
are as smooth as possible.
I want to make an analogy here,
which is on the web.
The web, as we think of it, is mostly a public thing, but the fact that companies inside their firewalls have their own internal private websites, and the content on those websites freely links into public pages, and people inside the company following the links navigate from internal pages to external pages in this completely seamless manner, that's good for the public web, and it's good for the spread of the technology to apply to things for which public visibility is not appropriate.
And so do you see a similar function that Agoric will have, in that, okay, people can kind of seamlessly go from traditional means of doing commerce to blockchain-based, and this kind of friction goes away?
Yes, there are several barriers.
One of the biggest ones is that smart contracting right now is too hard and too dangerous.
We've seen smart contracts constructed by experts in which hundreds of millions of dollars
have disappeared overnight with no recourse due to simple bugs.
And in order to open this world to the mainstream, you have to make it much more reasonable for programmers who are not experts on smart contracts, and programmers who are experts in their subject matter, to be able to create business arrangements, contracts, institutions with much greater confidence that their contracts mean what they think they mean.
And our approach with object capabilities and e-rights, which I'll get back to in a moment, helps tremendously in creating systems of compositional, reusable contract components
that enable that kind of construction with confidence.
We did a lot of this exploration, as I mentioned, in my E language. For the last 10 years, starting in 2007, I've been on the JavaScript Standards Committee,
getting the enablers of that into the JavaScript standard.
So JavaScript now supports a subset,
which is an object capability subset of JavaScript
that essentially includes most of JavaScript,
such that many old JavaScript programs run in SES, as we call it, Secure EcmaScript, which comes out of work we did at Google, and now it's work that Agoric has done in collaboration with Salesforce. So the result is that we're bringing this to programmers
not just as an extension of the object-oriented paradigm that people already know so that they can
extend the intuitions they already have about objects, but we're even bringing it to them
in a language that 20 million programmers are already familiar with.
How does this relate to the language you guys have been creating with this Jesse idea?
Is that related?
Yes, it is.
Yes, it is.
So there's two subsets of JavaScript that we've defined, a very large subset we call SES,
and a very small subset we call Jesse.
where Jesse itself is a subset of SES. In doing secure programming, there's sort of two fundamental stances you can take with regard to code.
There's, I want to protect myself from misbehavior by your code,
and I want to ensure that my code means what it thinks it means,
and in particular, when I express security policy in my code,
how my code should, let's say, enforce certain terms,
certain arrangements on your code,
that I want to know that my code is interacting with your code
in the way that I think my code was designed to.
So SES is designed to solve the first problem, which is that I can run your code that I don't trust inside an SES sandbox, if you want to call it that, under object capability rules, where I can be confident that your code has only gotten the authority that I intended to give it, that your code cannot escape the sandbox, cannot do things with more authority than it was given.
And because Jesse is a subset of SES, your code might be in Jesse,
but if I'm just protecting myself from your code, I don't care whether you stayed within Jesse
or whether you're using full SES.
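[The "only the authority I intended to give it" discipline can be illustrated in plain JavaScript, without the real SES machinery; this is a toy editorial sketch, not the actual SES API. Untrusted code is modeled as a function that receives only an attenuated facet of a capability.]

```javascript
// A full-powered capability held by my code:
function makeLog() {
  const entries = [];
  return {
    append(msg) { entries.push(msg); },
    clear() { entries.length = 0; },
    read() { return entries.slice(); },
  };
}

// Attenuation: hand untrusted code an append-only facet,
// so it can add entries but can never erase or inspect them.
function appendOnlyFacet(log) {
  return { append(msg) { log.append(msg); } };
}

const log = makeLog();
log.append('mine');

// "Your" code receives only the authority I choose to grant --
// it cannot reach clear() or read() through this facet.
function untrusted(logger) {
  logger.append('theirs');
}
untrusted(appendOnlyFacet(log));
console.log(log.read()); // ['mine', 'theirs'] -- nothing erased
```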
For my code,
JavaScript has many hazards.
You know, double equals, for example, is sort of the famous one that has crazy coercion rules.
So everybody's programming style for JavaScript says,
avoid double equals.
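[The coercion hazard is easy to demonstrate in a few lines of plain JavaScript:]

```javascript
// Double-equals applies surprising coercion rules:
console.log(0 == '');    // true  (empty string coerces to 0)
console.log(0 == '0');   // true  ('0' coerces to 0)
console.log('' == '0');  // false -- so == is not even transitive

// Triple-equals compares without coercion, the style Jesse keeps:
console.log(0 === '');   // false
console.log(0 === '0');  // false
```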
So in Jesse, we just define a subset that omits all of the unnecessarily dangerous things,
only includes the best parts.
And the wonderful thing is that the best parts of JavaScript
are a really good programming language.
So we've been essentially keeping our code in Jesse.
We've also been collaborating with academics on formal specification languages so that you can verify that object capability code means what you think it means. We think Jesse is the candidate to apply those tools to. That's how those things fit together.
Okay, great.
So, yeah, that's very interesting, right?
So all your work on JavaScript and secure JavaScript
and how that's kind of coming together.
So, yeah, you spoke a bit about JavaScript and how you guys enable smart contracts there,
but like, what is powerful about this approach and what are kind of the capabilities
that the approach you guys take to smart contracting has? So one of the things that makes our current world of software so rich and so composable is higher-order composition.
And what I mean by that is, we start with higher-order functions, where functions can operate on data, computations can operate on values, but the functions themselves are values.
So higher order functional programming is where functions operate on functions
with no limitation.
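[Higher-order composition in JavaScript, as a quick editorial illustration: a function that operates on functions, combining two separately written pieces into a new one.]

```javascript
// compose is a higher-order function: it takes functions as
// arguments and returns a new function as its value.
const compose = (f, g) => x => f(g(x));

const double = x => x * 2;
const inc = x => x + 1;

// Two separately written pieces compose into a new plan:
const doubleThenInc = compose(inc, double);
console.log(doubleThenInc(5)); // 11  (5 * 2, then + 1)
```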
Objects cause effects, take actions, and objects can also hold and manipulate other objects.
So a table can store any kind of object, but then, when you reify a concept like a table into an object, you enable the kinds of things that objects manipulate to also be the kind of thing that the reified manipulator is.
Likewise, in the marketplace, much of the richness of the market interactions we have comes from the reifying nature of property rights. Property rights started off very literal,
but then any time you create a contract that unfolds over time,
the continued participation in the contract is itself valuable,
and by labeling that continued participation a property right,
then any contract building block that's generically parameterizable
over anything that's described as a property right
can now operate on the rights created by other contracts,
and you can compose contracts together.
As an example, like, you know, so I can imagine an options contract
where, you know, an options contract is basically me making a contract with you saying,
hey, you know, I want the ability to buy this from you at a later date.
But then I can turn this contract into an asset itself and I can go resell my end of the options
contract.
And so you turn contracts into assets and you can make contract out of those assets.
And, you know, you can have this iterative approach where contracts and assets are kind of
interchangeable.
That's right.
So we talk about the duality of contracts and e-rights.
Contracts manipulate e-rights and contracts that unfold over time create e-rights.
And ERTP, the electronic rights transfer protocol, is the top protocol layer in our system.
And it's essentially a set of object interfaces and specifications for generically representing a wide range of kinds of rights: rights that are fungible and non-fungible, divisible. So money, non-fungible things, and the right to continue participating in contracts within our framework are all reified as rights described by ERTP.
And then, to the extent possible, we create contract components that assume, of the rights they're manipulating, only that they're described by ERTP. You can't always do that, but we can do that with exchange.
You can do that with options and futures.
You can do that with a variety of auctions, single auctions, continuous double auctions.
So we have this tremendous opportunity to create highly reusable, generically parameterizable contract components into which you can feed any ERTP-described right. And then, if that contract unfolds over time, it creates a new derivative right that in turn can be fed into other contracts.
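[A toy sketch of that duality in plain JavaScript, with entirely hypothetical names; ERTP's actual interfaces are different. A contract that unfolds over time hands back an object, and that object is itself a transferable right.]

```javascript
// Hypothetical sketch: an "option" is a right created by one
// contract that can itself be transferred and exercised.
function makeOption(underlyingAsset, strikePrice) {
  let exercised = false;
  return {
    describe() { return `option to buy at ${strikePrice}`; },
    exercise(payment) {
      if (exercised) throw new Error('already exercised');
      if (payment < strikePrice) throw new Error('insufficient payment');
      exercised = true;
      return underlyingAsset; // the right resolves into the underlying
    },
  };
}

// The option object is an asset: holding the reference IS holding
// the right, so reselling it is just handing the object to a buyer.
const option = makeOption({ name: '100 shares' }, 50);
const resoldTo = option;                 // transfer = passing the reference
console.log(resoldTo.exercise(50).name); // '100 shares'
```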
Right.
And so, you know, for our listeners who want to, like, get, you know, much better understanding
of this, I highly recommend one of Mark's papers he wrote called financial instruments as
capabilities.
And to me, like, you know, when I was first trying to understand this whole capability stuff,
it like didn't make sense.
But then, like, after reading that paper, I'm like, oh, like, you know, it had a little bit
bit of pseudocode in there.
And it's like, okay, reading that, I'm like, okay, now I see how this makes sense.
and I can visualize how to put these pieces together.
The paper is, the actual title is, Capability-based Financial Instruments.
It was published in Financial Cryptography 2000,
which, by the way, also occurred on Anguilla.
Anguilla became a little haven of crypto activity,
initially because of the export controls.
Yeah, so we'll definitely link to that in the show notes.
And so then another question I wanted to ask was,
now that you have this ERTP system and this Jesse smart contracting language, you know, you could have gone ahead and created a simple smart contracting platform like Ethereum or Tezos or, you know, any of these systems.
But it seems you guys are not just creating a single blockchain contracting platform. Could you talk briefly about what the goal there is with that?
So again, it's network effect.
And it goes back to the different early visions of hypertext. I hadn't thought to make this analogy before, but Doug Engelbart's
Augment system was kind of a single system for those who signed up to Augment, whereas Xanadu was a worldwide distributed, loosely coupled hypertext publishing system, where there's no one provider.
We want to enable contracts that span from the one extreme of completely permissionless
globally credible blockchains all the way to various systems that are more private.
But one of the things that I think is really important is most contracts are local,
most need for contracts are local, most actual real world contracting is local.
There's no need to create worldwide transparency into the internals of a contract that's done
by a small set of parties. And then there's a few arrangements, which I would call more institutions
than contracts, that do need that credibility. And we want to span that whole range. There's this
large trade-off space. We want one uniform mechanism that can sit on top of that diversity and span it
and enable contracts that started off being designed for one place in that fabric to be able to be
moved and continue execution in another place on that fabric.
Cool.
Well, thanks so much, Mark.
So I think that there's so much there to talk about and dive into.
And there's a lot of resources we talked about that.
We'll put in the episode links.
So if people want to dive in, there's definitely plenty to keep somebody busy for weeks or months.
And yeah, we're very much looking forward to also seeing kind of what comes out in terms of practical use cases of your work on Agoric,
and hopefully we can do another episode
at some point in the future.
So thanks so much for joining us today.
Yeah, you're welcome.
It was a real pleasure.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show
on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode
of the Epicenter podcast.
Go to Epicenter.tv slash subscribe
for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, the guest, or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
