a16z Podcast - What the Narrow Waist of the Internet Means for Innovation Today
Episode Date: April 24, 2020. Here is Ali's tweetstorm on the Narrow Waist of Blockchain Computing ...
Transcript
Hi, welcome to the A16Z podcast. I'm Zoran, crypto editor at A16Z. Today's episode is one of our intimate
hallway style conversations, or as intimate as remote work allows anyway. It's all about the history and
future of protocol development. A16Z crypto partner Ali Yahya, formerly a machine learning researcher
at Google Brain, wrote a tweetstorm earlier this year about the narrow waist of blockchain computing.
We linked to it in the show notes. Ali observed that the internet protocol, which emerged out of
research labs and government funding decades ago, has taken the world from zero devices to
more than 15 billion connected devices today. What was it about the internet protocol that
allowed building so many applications on top? Helping us answer this question is A16Z general
partner in enterprise, Martin Casado, who pioneered software-defined networking. He co-founded
Nicira, which was acquired by VMware, and then he led their networking and security business
unit, which he scaled to a hugely successful business, so he knows a thing or two about this topic.
The two debate the tension between bottom-up design and top-down architected approaches to internet applications, including the role of standards bodies.
More broadly, their discussion is about how innovation plays out in practice, and they end by sharing advice for entrepreneurs today.
But they begin with a quick history and description of the narrow waist and the conditions that create it.
When IP emerged, it acted as a kind of aggregator over that whole fragmented computer networking world, because a key design goal was to enable
any networking technology to support any application that might need computer networking,
which is in stark contrast to the fragmented world from before;
you really do collapse the entire networking world into one.
And one way that people tend to visualize this is with a famous hourglass,
with TCP and IP at the center,
and an infinite diversity of networking technologies below,
and an infinite diversity of applications that are built on top,
with everything going through TCP/IP in the middle,
which is why they are known as the quote-unquote narrow waist of the internet.
Now, what this did is it created a powerful economic flywheel,
a positive feedback that ended up taking over the world
because as more providers of bandwidth came in,
that led to more developers building applications on top,
whose users, via those apps, demanded more bandwidth,
which created more demand for new providers of bandwidth to come in,
and around the flywheel you go.
And I think the key insight here is that this was only possible because of the minimalist nature of TCP and IP.
And there were countless other kind of competing standards, like ATM and XMS, that had more features and were in some ways more powerful, but were thus less modular and less evolvable, and that ultimately lost out to IP's minimalism.
And so to use another word, IP was radically unopinionated about the tech below and the apps above.
and that's what enabled the network effects,
and that's what enabled them to remain useful
over the course of 40 years amid the rapid pace
of technological progress.
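A minimal sketch of that hourglass shape in code (hypothetical names, not any real networking API): an open-ended set of technologies below, an open-ended set of applications above, and one deliberately tiny, unopinionated interface in the middle that both sides code against.

```python
from abc import ABC, abstractmethod

class NarrowWaist(ABC):
    """The one shared contract: best-effort delivery of bytes to a destination."""
    @abstractmethod
    def send(self, dest: str, payload: bytes) -> None: ...

# An open-ended set of technologies below the waist...
class Ethernet(NarrowWaist):
    def send(self, dest, payload): print(f"ethernet frame -> {dest} ({len(payload)} bytes)")

class Satellite(NarrowWaist):
    def send(self, dest, payload): print(f"satellite burst -> {dest} ({len(payload)} bytes)")

class CarrierPigeon(NarrowWaist):
    def send(self, dest, payload): print(f"pigeon dispatched -> {dest} ({len(payload)} bytes)")

# ...and an open-ended set of applications above it, none of which know
# or care which technology is underneath.
def send_email(net: NarrowWaist):
    net.send("mail.example", b"hello")

def stream_video(net: NarrowWaist):
    net.send("cdn.example", b"\x00" * 1400)

for transport in (Ethernet(), Satellite(), CarrierPigeon()):
    send_email(transport)
    stream_video(transport)
```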
But I want to go back to something very interesting,
which is that it's worth talking about,
like, what are the initial conditions that create a narrow waist?
I think they tend to happen in one of two ways.
In one way, you have a concerted effort
from a number of normally experimenters, right,
and enthusiasts and academics, in the case of the Internet,
where enough different constituencies work together, and they create a standard, and they start the
adoption, and that creates, hopefully, something like maybe a network effect, or at least enough momentum
that this opens up. And the open standard is truly an open standard in the sense that anybody can use it,
anybody can adopt it; it's not driven by the single interest of a single company. And
I really believe so much of the internet was that. I mean, it really came from government research
and academics, and, you know, if you go through all the original papers, it was labs, etc., and that created
this. Now, what's interesting is another way that you can create these is simply through
technology monopoly, right? Whether that is a single company driving it, like in the case of
x86, which, again, that's not the only architecture. There are many hardware architectures out
there, but I think you could say that x86 has become a narrow waist, and I think you can justify
that. But another very interesting one is Linux. So Linux had openness. Linux just ended up
becoming a technology monopoly, not because there was necessarily some broad coordinated effort.
I mean, yes, Linux has been a coordinated effort,
but it was much more around making Linux conform to POSIX,
making Linux a standard, growing the Linux Foundation.
But now it's just become so prevalent.
And so many folks use it.
You can really run Linux on anything.
Like I have a Raspberry Pi in the next room over.
It costs $30 and it runs Linux.
And the most hardcore computations in the world run Linux.
And then on top of Linux, you can run anything as well.
And so I do think that whether it's through basically technology monopoly,
a company creating a monopoly or an open standard effort, I do think you can get these narrow
waist phenomena. And I think they're very interesting to observe and be students of, like we are now,
because they really do dictate areas where you see massive, massive explosion of innovation.
Yeah, absolutely. I think I liken the evolution of some of these narrow waists,
which is generally the emergence of standards, where it's necessary for the network effect of a standard
to get bootstrapped somehow, by like enough people have to use the standard, adopt a standard,
and try to push forward a standard in order for it to get the momentum that then actually
makes it take over the world because there's a strong network effect that people end up succumbing
to. They have to adopt the standard if they want to interact and interoperate with the people
who already have adopted the standard. And I think what you're saying with the two different
ways by which this can happen, where you can either have a concerted effort from a community,
from a very broad set of players that decide this is going to be the standard that we're going to
converge on or a monopoly company that makes it happen. Both of those are bootstrapping forces
that essentially make a standard emerge as the thing that people rally around and gets the
flywheel of network effects started. And I think this is where crypto becomes very interesting
because crypto might offer a new mechanism for bootstrapping this kind of network effect
via the token that exists at the heart of it.
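As a toy illustration of that bootstrapping mechanism (purely hypothetical token supply, reward schedule, and names, not any real protocol), a network could reward early participants on both sides of the market with a small ownership stake rather than cash subsidies:

```python
# Toy model: early supply-side (drivers) and demand-side (riders) participants
# earn a token, i.e. a small fractional ownership stake, for bootstrapping the network.
TOTAL_SUPPLY = 1_000_000      # hypothetical total token supply
reward_per_action = 100       # earliest participants earn the most

balances = {}                 # participant -> tokens earned

def participate(user, side):
    """Record one early contribution ('supply' or 'demand') and pay out the current reward."""
    global reward_per_action
    balances[user] = balances.get(user, 0) + reward_per_action
    reward_per_action = max(1, reward_per_action - 10)   # reward decays as the network matures
    print(f"{user} ({side}) now holds {balances[user] / TOTAL_SUPPLY:.4%} of the network")

for i in range(3):
    participate(f"driver_{i}", "supply")
    participate(f"rider_{i}", "demand")
```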
And so one way of summarizing that argument is, personally,
I like to think of the Internet protocols as having created a kind of two-sided market.
And so if we look at it that way, then we can compare it to other kinds of two-sided markets
that have succeeded.
So for example, like Lyft and Airbnb are also examples of two-sided markets.
And in the case of, say, the Internet, you have the supply side is bandwidth providers,
the demand side is developers building applications that need that bandwidth. In the case of Lyft,
you have drivers on one side and you have riders on the other side. And the key similarity between
all of these is that they all have this cold start problem. It's very hard to bootstrap them
because a ready and willing supply side is unlikely to just show up if the demand side is not
already present and vice versa. The same is true in the opposite direction. And so as we've been
discussing historically, this problem gets solved through enormous influxes of external financial
capital, be it venture capitalists in the case of Lyft and Airbnb, or, in the case of the internet,
the U.S. government, that subsidizes one side or both sides of the market to really get it going.
And now I think that this is the problem that crypto helps solve, because imagine if Lyft,
as an example, had raised less money than it did from venture capitalists, and instead of
subsidizing both sides of the market that it was trying to create with cash, it had
instead rewarded early riders and drivers with a small ownership stake in Lyft, the company.
Now, if some of those participants had been able to appreciate that that small ownership
stake would someday be worth a lot of money, potentially, then maybe they would have been
much more willing to participate. They would have been more loyal to Lyft over some of its
competitors, and they'd be maybe more willing to evangelize Lyft to yet other drivers and other
riders, and as a result help Lyft market itself. And so I would argue that this is a fundamentally
more efficient capital structure for multi-sided markets because the capital that's needed to get
it off the ground doesn't have to come from external financial capital, be it venture money or
government subsidies. It can come instead internally from the participants themselves who have
human and production capital to offer instead of financial capital. Great. And so I think to bring
this back to the internet and crypto, imagine if the internet had had a token that grants its
holder fractional ownership over the protocol itself and grants the holder a share
of all of the revenue that goes through the internet itself. And imagine if both the demand
side and the supply side could earn that token in exchange for helping bootstrap the internet.
So the question is, would that have made it possible for the internet to emerge without government
subsidies? And how much stronger would the network effects that the internet has be
if it had that additional vector for production capital to enter the system in exchange for
ownership in the network? Yeah. So I think these are great questions. I do think that it's going
to require us to kind of build them up from the bottom a little bit. The first one that's worth
saying, and listen, I know this is maybe being too pedantic, but a great thing about the IP architecture
is it allows for any arbitrary distributed system to be built on top of it, including crypto.
And so this is like clearly one of the outcomes of a very minimal architecture, which is great.
I also think there's an apples to apples question, which is a little bit different than your question.
I think we should get to your question in a second.
The apples-to-apples comparison was, there was a pretty rigorous decade of debate on what are the minimal requirements needed to build a network architecture.
And they came up with basically best effort destination forwarding.
That was the solution.
Like no flow state, that set of things.
I do think it's worth asking the mechanistic question
in crypto, which is that many of the crypto solutions are solving two problems, to your point. So I think a lot of
crypto solutions are trying to solve the distributed or the federated trust problem, like how do
random people that don't know each other develop trust. And that often reduces to the Sybil
problem, which is how do you prevent a Sybil attack, like one person pretending to be many people.
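A minimal sketch of one common answer to the Sybil problem, proof-of-work (toy difficulty, not a real consensus protocol): every identity has to pay a computational cost, so pretending to be many people stops being free.

```python
import hashlib

DIFFICULTY = 4  # toy setting: a valid proof's hash must start with this many zero hex digits

def verify(identity: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{identity}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def prove(identity: str) -> int:
    """Brute-force a nonce so that hash(identity:nonce) meets the difficulty target."""
    nonce = 0
    while not verify(identity, nonce):
        nonce += 1
    return nonce

# One identity costs real work to create; a thousand fake identities cost a thousand times more.
nonce = prove("alice")
print("alice admitted:", verify("alice", nonce), "after", nonce, "hashes")
```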
So there are a common set of problems that many of the crypto solutions
are tackling. And I do think it's worth asking the question of, is there a common set of mechanisms
that would be a technical skinny waist? And then there's the question that you asked, Ali, which I think
is a very interesting question, which is, is there a primitive that's provided by crypto, which is
the narrowest primitive you could have that allows you to overcome the bootstrap problem
of creating networks? That means networks can self-create without requiring
massive collective action, you know, or monopoly technology, or venture capital. Which is a great
question. It's a super, super great question. So I think when it comes to crypto, at least I feel
that both of these two questions are very relevant in light of the narrow waist, and we should explore
both of them. The first one is the mechanism one, and the second one is, is there a way
to overcome the bootstrap problem? And I think I'll just say it for those that haven't been
students of the internet and the web. The reality is one of the hardest things to do, even though
we talk about social networks and network effects, one of the hardest things to do is to create a
network. Actually, there's been very few in the history of the internet that have been created.
Yes, clearly the internet is a network. Yes, Facebook is a network. Yes, Amazon is a network.
But you soon run out of networks. Like how many companies or how many projects have actually
overcome the bootstrap problem of creating a network? And to your point, almost all of them
required a tremendous amount of infusion of cash or something else. And so if crypto is this
kind of magic way of overcoming that bootstrapping problem so you can self-create a network,
that's an enormous, enormous primitive to create any number of companies.
Yeah, that's exactly right.
And well, to your first point about there potentially being a primitive at the heart of the
stack of a crypto blockchain that may act as a narrow waist, I think that that also is
very much an interesting question that's being played out currently because there are countless
companies that are running every experiment imaginable. There's the company that's all the way
on one end of the spectrum, building an extremely vertically integrated blockchain that is
opinionated about everything from the peer-to-peer networking underneath to the consensus algorithm,
to the compute on top, even including the instruction set, and it's opinionated about the
programming languages that can run on top, and the SDKs and the user interfaces that then connect
to the VM level. And so those projects exist, but there's also then the other
end of the spectrum with projects that are much more lean and much more unopinionated and modular
about the whole thing. There's even projects that are just doing the bare minimum, which is
consensus and data availability, and then having other participants in the ecosystem build
everything below and above. So it'll be interesting to see, I think, if it may be a similar
story where having a narrow waist, having an unopinionated central building block that decouples
everything that happens on top at the application level from everything that happens below
at the infrastructure level, might emerge and might have a similar dynamic to what IP did for the
internet. Right. And maybe it turns out that the meta narrow waist, to your original point, is
not a mechanism. It's not a mechanistic thing. It literally is the notion of a store of value
that's purely distributed. So a crypto store of value. And so it's not any of the mechanisms to
create that because it doesn't matter what cryptocurrency you use. The fact that you have this
notion itself is the narrow waist, and we'll continue to see a massive proliferation of different
types of cryptocurrencies or projects that use crypto. Then you have the notion of this
federated distributed trust, and that's a narrow waist, and on top of that you can build whatever
you want. And it seems to me that that question is not yet answered. Yeah, exactly. I think an
important question is how expressive does the narrow waist have to be? Right. Of course. Absolutely.
People want to build applications on top, right? And there's this raging debate. Like, if you want to
build some of the applications that exist on top of Ethereum today, on top of Bitcoin, that's not
entirely possible. But then again, Ethereum itself has also certain limitations that other
blockchains are claiming will hold it back. And so the question is, like, how far do you go? How
expressive do you make this building block at the center of it all in order to enable as many
applications as possible on top, and also not limit the kinds of providers of infrastructure
that can come in, and really catalyze the network effects that can get this to work.
So it'll be interesting because there's this whole gamut of different approaches,
some more expressive than others,
and the jury is still kind of out as to what the right answer is.
Great.
So here's where I'm going to be kind of a little bit religious.
Sounds good.
Just because, like, from my experience dealing with standards bodies,
and I was very, very involved in a lot of standards efforts in the last couple of decades.
And it's very interesting to think, okay, so what happens at this point?
So you've got a bunch of competing views.
You have those that feel, okay, we're going to reproduce a Turing machine that's purely distributed.
You have others that, like, we're just going to do a store of value.
You've got many in between.
Some are very vertical.
Some are very open.
So you have all of these views and all of this effort and all this energy and all of this talent
going into this.
So what do you do now?
Like, how do you resolve this, right?
So I have a strong opinion, and this is strictly my opinion on this.
So one view is kind of this kumbaya view, which is, why don't we all get along and we'll create
an open standard and it's better for everybody,
and, like, we'll go and sit in rooms and we'll argue it out and we'll hash it out, and this should be
global coordination. And listen, in the case of the internet, maybe that worked, but generally, in my
experience, that doesn't work because, A, group consensus is a horrible way to make decisions, and, B,
you can't design something from whole cloth, meaning until you throw yourself at the problem for a
decade, you don't really understand the implications. So I am much more an advocate of,
instead of trying to solve these things by globalized standards
bodies that cut across all of these different efforts. I'm all about, you know what, listen, Darwin
works in the open market and the technology adoption cycle as well as Darwin works anywhere.
So why don't, I'm just so happy we have this kind of broad, explosive growth of all of these
things. And I think we should all just sit back and watch which ones are the most successful.
And if we believe in one, we help work on that one. If we believe in two, we help work on those
two. But we don't try and preordain the solution because I don't think anybody knows what it is.
And then, you know, after the Hunger Games plays out for, you know, another
three, four, five, six years, we'll start to understand the implications of them.
And at that point in time, we'll be like, oh, you know what, this group who really advocated
for this solution, they seem to be right because, oh, you know, they have the most development,
they have the most applications on top of it, they have the most success.
And then you get kind of more of a Darwinistic or free market consensus on the solution
rather than this preordained one.
And, you know, I think that, again, the Internet is a great example of this where you did
have a kumbaya movement early on, but it happened a long time ago.
I mean, it happened in the 70s and 80s.
And then they created all these standards bodies, which, from my experience, were effectively
useless for the next 20 years.
I mean, it was large companies arguing about things.
Very few of them got implemented.
I mean, it ended up just being like a distraction.
And then the things that really, really made a change were the ones that were a lot more chaotic,
a lot less organized, like Linux, that really took advantage.
And so I'm like, negative on standards bodies, very positive on massive experimentation,
and using the lab of the real world and industry to prove out these ideas.
I mean, the fact that you have an economic engine at the heart of every crypto protocol
enables that kind of experimentation to the nth power because all of these companies that are
experimenting now have a way of being economically sustainable.
Whereas previously, if you wanted to innovate in the world of protocols, you had to either
be part of some large company that's, for whatever reason, deciding to do research at a
protocol level, or you have to be funded by a university or directly by the government.
And so I think that, I mean, this could be a golden age of research and experimentation at the
protocol level because you do now have a native business model for protocols and a native
business model for open source that enables value capture such that you can have a startup
company go off and run that experiment.
Right.
That's great.
Where, you know, you do have this ability to do it in a distributed way.
However, that does not stop large companies
from wanting to create consortiums and independent projects trying to argue that, like, they're
maximal on whatever technology. And I think it's fine to have all of these arguments. But I would
far rather see the Hunger Games play out than some argument about what is better and what is not
better. I mean, ultimately, nobody can predict the future. And I think that the more people focus on
what they believe in, implementing, executing, getting applications on, and seeing how this plays out in
the market, I think that's where the effort should be spent, not trying to coordinate these
kinds of global things. And at least in the internet age, so much of it was trying to do this
coordination. And, you know, what's interesting also about this discussion is narrow waists are
very good for building global architectures that grow organically, because it decouples the evolution
of every layer, and it allows experimentation. It doesn't require a tremendous amount
of coordination, etc. However, it tends not to be the best way for, for example, a company
to architect something. Because it's probably, actually, we're talking about how unusual it is
for this to happen. Because if you were, let's say, IBM in the 80s and you wanted to provide
a solution for a customer, and the customer was like, listen, I'm going to have a wiring
closet. They didn't really call it data centers back then. I'm going to have a few wiring
closets and I need them to be connected and I need something to connect them. If you're IBM,
you're going to build, and you build the servers anyways, you're not going to go build some grand
unified architecture that solves the world's problems, right? And if you did, I mean, it's
very unlikely to be successful because you're very busy solving customers' problems. And I think
this also comes back to what's so exciting about what's going on with crypto and with blockchain
is you tend to get narrow waists only when you have either an ethos that moves above any
particular given company. We'll talk about instruction sets in a moment because I think
that it's a very interesting discussion. You actually have communities. And through a lot of
experimentation, you define a subset of that functionality. And that subset of functionality is something
a number of people adopt. And once they adopt it, then you get this. As opposed to, let's say,
you're starting company X tomorrow, it's very hard to win some narrow waist architectural
war just because you're so focused on solving a customer problem. So again, these tend to be
these much more organic kind of community-oriented things that happen. Completely. Well, I think
I mean, this is kind of related to the whole question of modularity versus integration,
even in the context of just traditional business, with Clayton Christensen arguing that
you want to be integrated, like vertically integrated, whenever the technology that
you're building is not quite sufficient to meet the customer's needs, but then at a point
where it does meet the customer's needs and starts to surpass the customer's needs, then it
becomes better to modularize to some extent, because then you can reduce costs by outsourcing
some of the modules, and you can create competitive landscapes that provide what you need for each
of those modules, and as a result, the cost offering of the whole thing becomes better.
And because you're already above and beyond what the customer needs, it's fine, whereas when
you're not, I think vertical integration takes you a long way to actually maybe get there and be able
to provide what the customer needs. So that's kind of one argument that I've been toying with
and trying to relate it to the development of protocols, and in particular development of the
internet and the development of crypto, because the internet seems to be a glaring counter example,
a glaring exception to that whole framework. Because back at the time when the internet emerged,
it was not the case that the internet was enough, that the technology was sufficient to meet
the needs of actual business use cases. But somehow, the vertically integrated solution,
like the information highway that a bunch of people were working on, ended up losing out
to the more modular, bottom-up, organic approach that was the internet. And I think my sense is
that the reason that the internet ended up winning is because in that particular case,
for something that is so global and so interconnected with everything,
the network effects are so powerful that you're able to turn this theory on its head
and actually have a better chance of success through a path that is bottom-up and very modular
than through a path that's vertically integrated,
even though you are not actually at the point where you're meeting all of the business use cases
that people want to build on top of the technology.
And I think maybe it makes sense to go through an example.
Like if you think of the evolution of mobile, it was preceded, like before the iPhone,
you had things like the BlackBerry, which was this vertically integrated device that had only a single application that ran on top of it, namely email.
And it was essentially an application-specific computer that ran only that one thing.
And it was kind of a tightly controlled ecosystem that was built by RIM just to provide a good enough user experience for email and make it useful to paying customers.
Because doing something that's more modular and something that is more general would have been too difficult at the time with the technology that
existed. And only once the technology became good enough, then modularizing things a little bit
and creating SDKs and interfaces the way that Apple did with its App Store became more viable
and became kind of possible. And that ultimately ended up winning as a model. But you had to have
the evolution of the technology happen before that could work. Yeah, I think that's right. I think
there's multiple ways you can also view this, which is, you know, customers don't really care
about architectures. They just don't. And if we're going to provide guidance to, say, entrepreneurs in
the space. You know, architectures change the course in the arc of technology. They change the
course in the arc of industry. They almost never change the course in the arc of a single
company, right? The goal of a company should be to sell a solution to the customer in a way that
best fits that customer's needs. And maybe that includes some sort of notion of extensibility,
but it rarely does. It's normally whatever they need. And so I've always said, if you're doing a
startup, you should sell a product. You should sell into a use case. So focus on a business
use case. You should sell a product, a specific product, not an architecture. You should
architect a platform. So as you're thinking about your architecture, you should make sure it's
sufficiently general and evolvable so that it can tackle multiple use cases and it can evolve
over time. And then hopefully, as you build a business, you can build an ecosystem around that
platform. Right. And the largest companies have been able to do this. But it almost always starts
with an actual use case. It is instructive to talk about other narrow waists, or maybe not-so-narrow
waists, that have also enabled massive innovation and architectural shifts.
And another one that I like to think about is the instruction set architecture.
And the best example for me is so the idea about building an instruction set architecture,
like, say, for example, x86 or RISC or whatever it is, is that you've got a set of primitives.
That's the minimal set of primitives required, or maybe in the case of CISC, maybe not so minimal,
but what is required for performance.
And that is standardized.
And on top of that, you can build any number of applications.
And then below that, you can have any sort of hardware architecture implement that.
Right.
And one of the great examples of where this has played out successfully is in the move to virtualization.
Hardware instruction sets were things that were run on processors, right?
And we over decades came up with an instruction set that was useful for building any application.
Like in the 80s or 90s, an Intel processor would run science, I mean, like it does today.
I mean, it ran everything.
It would run science, entertainment, you know, business,
etc. I mean, that's what it was for. And then, because it really was kind of a narrow waist,
it was this one set of primitives that was relied on by a bunch of applications and ran on a bunch of
different classes of processors, whether they're created from the same company or not,
that allowed virtualization to happen. And so VMware came, and this is based on the Disco paper
out of Stanford. And they're like, you know what we're going to do is we know that the instruction
set is something that's required. We're just going to go ahead and virtualize that and enable that to
run in software, you know, and there are a bunch of hardware tricks that they did for that, but you
could emulate the entire thing in software if you wanted to. And that way you were able to
basically disconnect all of the applications that have been created and move them onto software
because you had that narrow waist to do that. So there's a number of examples in computing
history where you have a finite set of guarantees, a finite set of primitives that you can build
against that are so stable, you can emulate them or evolve them or move them without changing
anything above.
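A toy sketch of that kind of emulation (a made-up three-instruction machine, nothing like real x86 and not how VMware actually works): because the program is written only against a small, stable instruction set, the same program runs unchanged whether the "processor" underneath is silicon or software.

```python
# A made-up, minimal instruction set: the stable "narrow waist" the program is written against.
PROGRAM = [
    ("LOAD", "a", 2),       # a = 2
    ("LOAD", "b", 40),      # b = 40
    ("ADD",  "a", "b"),     # a = a + b
    ("PRINT", "a", None),
]

def run(program):
    """A software 'processor': emulates the instruction set entirely in Python."""
    regs = {}
    for op, x, y in program:
        if op == "LOAD":
            regs[x] = y
        elif op == "ADD":
            regs[x] += regs[y]
        elif op == "PRINT":
            print(regs[x])
    return regs

# The application above the instruction set never changes; only the implementation
# below it does (real silicon, an emulator, a virtual machine, ...).
run(PROGRAM)   # prints 42
```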
And one more thing I'll say.
So if you compare that, an instruction set, say, to like a higher level set of abstractions
for a language, every time you go up the stack, you tend to have smaller and smaller
markets that it caters to.
So let's say instead of x86, you decided to virtualize, I don't know, let's say, Node.js.
Now, if you virtualized Node.js, you'd get all the Node.js ecosystem along with it, which is fantastic, and it's huge, but it's nothing compared to all of compute.
Completely. That's absolutely right. I think this reminds me a lot of a famous quote, I think by David Clark, which is that interfaces are constraints that deconstrain, in that if you have a well-defined interface, so in this case you have x86 or the instruction set of a particular computer architecture, or in the case of the internet, you have IP and TCP, and those are
interfaces that are very well specified and are generally unchanging, that constrain because now
both sides that bridge that interface have to conform to it, but it also de-constrains because
now both sides can evolve entirely independently from one another. So like what you're saying
now, the fact that you have x86 as a very clear standard that serves as a narrow waist allows
you to do everything that has been done with virtualization below,
and allows for everything on top to evolve independent of that
without ever having to worry about how the other layer works.
And so I think this applies at the narrow waist level,
but it applies also between the various different layers of a protocol.
And this is, I think, back to your layering argument.
If you have well-defined interfaces between each of the layers,
then the layers can evolve independently from one another
without being concerned at all about what the other layers are doing.
Whereas if you have leaky abstractions
and they're kind of interdependent with one another,
then that evolution becomes a little bit more difficult.
And if you actually look at the history of networking,
there aren't a lot of architectural principles, you know,
like you do in a lot of other systems.
But in IP, there are actually two.
And so if you want to go into one more level of detail below the narrow waist,
there are two principles which have really defined the architecture
and the ethos behind the Internet.
So the first one is called the end-to-end principle.
And the argument of the end-to-end principle
is that the only functionality you put in the network
is the minimum that is required
to make the network operate, and no more.
And so it implicitly says, if there's anything,
like something that runs on the internet,
that's specific to an application
or to a use of the internet,
that does not go in the network; it goes on the ends.
And they were so strict about this
in the original architecture, they decided the only thing that you put in the network is
basically best effort destination forwarding. So the only guarantee that the internet ever gave you
was we will try to get the packet from A to B. That's it. We won't guarantee it'll get there.
We don't provide any quality of service guarantees. We won't do any transformations on it.
We just will try our best to get there. And that was it. That was the beginning of the
end-to-end principle. And many of the things that you would think would be in the network, and were traditionally
in the network, like in the days of, say, ATM, were implemented at the end host. And that's where
you get things like congestion control protocols like TCP, and things like guaranteed reliability. And
so there's this very kind of extreme aesthetic that you only put the most minimal thing possible
in the network. So that's the end-to-end principle. It's definitely worth reading about.
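A toy sketch of that division of labor (deliberately oversimplified, hypothetical functions): the network only makes a best-effort attempt to move each packet, and reliability lives entirely at the end hosts, which retransmit until delivery succeeds, roughly the job TCP does.

```python
import random

random.seed(7)  # deterministic toy run

def best_effort_forward(packet: str) -> bool:
    """The network's only job: try to get the packet across. No guarantees, no retries."""
    return random.random() > 0.3   # roughly 30% of packets are simply dropped

def reliable_send(packet: str, max_tries: int = 10) -> bool:
    """Reliability lives at the end host: keep retransmitting until delivery succeeds."""
    for attempt in range(1, max_tries + 1):
        if best_effort_forward(packet):
            print(f"{packet!r} delivered on attempt {attempt}")
            return True
        print(f"{packet!r} lost, retransmitting")
    return False

reliable_send("hello, world")
```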
So there's a second concept, which is pervasive in the internet, and that's layering. And the way that
TCP/IP views the world is that protocols are implemented in
layers. And so communicating in a sub-network, like, say, Ethernet, that's one layer. And then on top
of it, you'll have IP, which allows you to communicate, say, between networks. And then on top of that,
you may have transport protocol like TCP. And on top of that, you may have like some session
layer and then some applications, et cetera. So here is the ethos of layering, which is also
incredibly important. So the principle of layering is you should never have dependencies
between layers.
That means you should be able to run IP on top of Ethernet,
but also you should be able to run it on top of DECnet
and on top of IBM token ring
and on top of some random protocol created by anybody.
In fact, there have been RFCs on TCP over carrier pigeon, right?
So actually using physical birds to send these things.
That's right.
So the property of layering allowed these pieces
that you could kind of disassemble and reassemble.
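A small sketch of that layering-by-encapsulation idea (toy headers, not real packet formats): each layer wraps the one above it and knows nothing about the others, so the link layer can be swapped, Ethernet, DECnet, even a carrier pigeon, without touching TCP or the application.

```python
def tcp_segment(app_data: str) -> str:
    return f"[TCP|{app_data}]"       # transport layer: knows nothing about links

def ip_packet(segment: str) -> str:
    return f"[IP|{segment}]"         # internetworking layer: the narrow waist

def ethernet_frame(packet: str) -> str:
    return f"[ETH|{packet}]"         # one possible link layer...

def pigeon_bundle(packet: str) -> str:
    return f"[PIGEON|{packet}]"      # ...or another; nothing above has to change

payload = tcp_segment("GET /index.html")
for link_layer in (ethernet_frame, pigeon_bundle):
    print(link_layer(ip_packet(payload)))
```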
And so taken together, the
end-to-end principle, which says only the minimal, minimal amount goes in the network, and
layering, which says you can put that on top of anything, you could argue created this narrow
waist, which means you have this minimal set that enabled everything above and all the networks
below.
That's exactly right.
And I think one thing that, as a general lesson, is now being very viscerally relearned in the
world of crypto is, I mean, the end-to-end principle in a sense is being re-manifested, even
though people don't reference that paper, because in any distributed system, whenever you want
to upgrade the system, the number of machines that you have to go to and actually update
is a key factor. And I think the end-to-end principle in the case of the internet made it so that
only the end hosts in many cases had to be upgraded, with most of the nodes that are inside
of the network not having to be touched at all, because all they do is, as you said,
they just very, very simply forward packets from point A to B, and they don't even guarantee it.
And then layering would also say, if you actually had any dependency between layers,
even if one of those layers was not providing some direct function,
it would also be very hard to update because you would have to update multiple layers
as opposed to just a higher layer.
Yeah, so both of these points get at evolvability.
The end-to-end principle saying you can now evolve the network more easily
because there are fewer nodes that you have to update,
and the layering principle enabling modularity,
which allows kind of recombination of different primitives for different uses
and not locking any one application to any one particular stack,
allowing them to maybe say not use TCP and instead use UDP
or instead use something else,
being kind of a key aspect of it being evolvable over the course of 40 years.
How do you translate, say, you're a founder, and how does this translate to advice?
Well, so I think if you're a founder, well, okay, let's go back to the beginning
and then I'll work my way forward. If you're a founder, I feel if you're not addressing a business problem,
like it's going to be very hard to start a company. And so, from my standpoint, and I think this is in
every aspect. I'm a core infrastructure investor, so I invest in core technologies. It's very common
that people get enamored with a mechanism, whatever that mechanism is, and they try and build a
business out of that. And I say, okay, you start with a business, and then you figure out the right
mechanism. So there's a lot of very interesting businesses that crypto enables that have been very
difficult with traditional technologies, because traditional technologies are not very good for
those use cases, right? They're just not very good for that. So, okay, so it helps with that. So let's say you
understand your business. And now you've got the question of what mechanism to use. And so you've
got a couple of options. You can create your own. You can join another one or you can try and
convince a bunch of parties and do this kind of group close where you're all better together.
So of those three, I would say the last one is the one that seems very enticing, but is the
hardest to do, particularly if you're somehow pre-baking the solution. So I would say probably
best to use something that's existing and successful. If those don't work, great to create your
own. Oh, wouldn't it be great if that was generalizable and fixed the problems that the other
ones had? Or maybe you can augment them. But this idea of, I'm starting something new, there's
10 other people starting something new, why don't we create our own fate-shared global thing, a
cryptocurrency or token or whatever it is, before we've created anything, and yet we're all
agreeing to it, I think is the worst of all options. Yeah. Interesting. So I think the way that I would build
on top of what you're saying is by first dissecting it into the various different layers
at which you could start a company in the world of crypto. If you're starting an application
or like, say you're starting a company at the application layer, then I think your arguments
are especially important here because you're building an application that ultimately will
touch end users, will touch people who actually need something to be useful from the thing
that you're providing them. And so in that case, very important that as a founder,
you're solving a real problem in the world, and then only as a secondary byproduct of solving that
problem, you make choices about what underlying infrastructure to use, what protocols to build on top
of, and you can be fairly agnostic about what those are and essentially just be guided by the logic
of what works. If something works, then you're guided by the problem. You're guided by the problem,
so you find the best solution for the problem, right. And then if you're an entrepreneur who is not
building at the app layer, so you're building at the protocol level, and that
could be either at the layer one blockchain level or at the smart contract level.
Your problem is a little bit different. I think your arguments still hold, but they hold in a
slightly different way. Because say you're an entrepreneur who's building a layer two,
meaning you're building a smart contract on top of which other people will build end user
applications. Then in that case, your customer is no longer an end user. It's a developer.
Correct. Then I think that kind of shifts the lens of who you're targeting, but the
logic of your argument still largely holds: you want to satisfy a developer to build an
application on top of your smart contract. And that should be the reigning factor that drives all of
your decisions. If you go down another level still, you're an entrepreneur building a layer one
blockchain. Then your customer at that point is also a developer, but it's a different kind of
developer. It's a developer that's going to build on top of your computing platform whose customers
are also developers, namely the developers who are building
end user applications. So there are multiple levels here.
And I think that in that case, again, it's important to kind of have in mind the needs
of the developer, but because of your argument about how it's so difficult to predict
where everything's going to land, and you're just one of the experiments that are being
run, and you want to maximize your chance of being the experiment that succeeds,
I think that your argument, the logical conclusion of your argument is that a modular
approach gives you the kind of flexibility that you might need at that level, like if you're
building a layer one blockchain, to be able to win even if you don't know how everything will
turn out later.
Because it gives you the flexibility to basically swap out the modules that don't work for
the modules that do.
And if you build something that's very vertically integrated, then you might end up being the
experiment that didn't work and you don't have enough room to maneuver really to pivot and to adapt
into the approaches that do work.
And that might end up being a little bit harder and less likely to succeed than one that
is kind of less opinionated, more like the original protocols of the internet, more like
IP, which is, like, kind of unopinionated.
We don't know what networking technology is going to work.
We don't know what applications are going to succeed.
We're not going to take a position on that.
We're just going to build something that very narrowly solves one problem very well.
Right.
Strongly agree.
You actually said it much better than I did.
Those are the implications.
But here is the pitfall that I think you should avoid, which is there is sometimes an inclination
of whatever I'm working on will be better if I've got five other people
to agree with me even before I've started.
And so what I'm going to do is I'm going to create the, let's call it, the Martin
consortium of whatever it is.
And I'm going to get five companies, big, small startups, whatever, to all agree to the
Martin consortium of whatever.
And that way, if I implement it, it's therefore stronger.
And so the seductive property of that is you think that you're defending yourself.
But what normally plays out in practice is nobody's really committed to the Martin
organization or architecture. You're trying to solve too many problems. You can't really predict the
future though. You need people to agree to the future. And therefore, you often have these very
stillborn projects, because rather than actually focusing on the problem at hand, which is, like,
whatever the developer needs in the cases that you're talking about for L1 and L2. So instead of
focusing on that, you're focusing on this kind of human group consensus problem and trying to predict
the future. And so I would say you're absolutely right. I mean, you need to be modular for your
own sake, you definitely want to focus on the problem at hand and choose the right architecture
for that. I would not try and pre-bake some global standard, which is so commonly done, because
you just do not focus on the core problems that way. Yeah, completely. And then you end up
designing things by committee, which has its own problems. It's the worst of all worlds.