a16z Podcast - Governing democracy, the internet, and boardrooms
Episode Date: September 2, 2024
With @NoahRFeldman, @ahall_research, @rhhackett

Welcome to web3 with a16z. I'm Robert Hackett and today we have a special episode about governance in many forms — from nation states to corporate boards to internet services and beyond.

Our special guests are Noah Feldman, constitutional law scholar at Harvard who also architected the Meta oversight board (among many other things); he is also the author of several books. And our other special guest is Andy Hall, professor of political science at Stanford who is an advisor to a16z crypto research — and who also co-authored several papers and posts about web3 as a laboratory for designing and testing new political systems, including new work we'll link to in the shownotes.

Our hallway-style conversation covers technologies and approaches to governance, from constitutions to crypto/blockchains and DAOs. As such we also discuss content moderation and community standards; best practices for citizens' assemblies; courts vs. legislatures; and much more where governance comes up. Throughout, we reference the history and evolution of democracy — from Ancient Greece to the present day — as well as examples of governance from big companies like Meta, to startups like Anthropic.

Resources for references in this episode:
On the U.S. Supreme Court case NetChoice, LLC v. Paxton (SCOTUSblog)
On Meta's oversight board (Oversightboard.com)
On Anthropic's long-term benefit trust (Anthropic, September 2023)
On "Boaty McBoatface" winning a boat-naming poll (Guardian, April 2016)
On Athenian democracy (World History Encyclopedia, April 2018)
The Three Lives of James Madison: Genius, Partisan, President by Noah Feldman (Random House, October 2017)

A selection of recent posts and papers by Andrew Hall:
The web3 governance lab: Using DAOs to study political institutions and behavior at scale by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
DAO research: A roadmap for experimenting with governance by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
The effects of retroactive rewards on participating in online governance by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
Lightspeed Democracy: What web3 organizations can learn from the history of governance by Andrew Hall and Porter Smith (a16z crypto, June 2023)
What Kinds of Incentives Encourage Participation in Democracy? Evidence from a Massive Online Governance Experiment by Andrew Hall and Eliza Oak (working paper, November 2023)
Bringing decentralized governance to tech platforms with Andrew Hall (a16z crypto YouTube, July 2022)
The evolution of decentralized governance with Andrew Hall (a16z crypto YouTube, July 2022)
Toppling the Internet's Accidental Monarchs: How to Design web3 Platform Governance by Porter Smith and Andrew Hall (a16z crypto, October 2022)
Paying People to Participate in Governance by Ethan Bueno de Mesquita and Andrew Hall (a16z crypto, November 2022)

As a reminder: none of the following should be taken as tax, business, legal, or investment advice. See a16zcrypto.com/disclosures for more important information, including a link to a list of our investments.
Transcript
2024 is an important year for democracy.
This year, over 60 countries have and will head to the polls, representing just about
half of the global population.
But governance extends past the leaders of nation states.
In fact, as the revenues of some companies dwarf the economies of some nations, the way these
entities govern themselves is both more interesting and more important than ever.
Listen in to this episode, originally published on our sister podcast, web3
with a16z, as they explore topics ranging from content moderation to incentivizing participation
to new governance experiments, from DAOs to oversight boards. I hope you enjoy.
Hello and welcome to Web3 with A16Z. I'm Robert Hackett, and today we have a special episode
about governance in many forms, from nation states to corporate boards to internet services and beyond.
Our special guests are Noah Feldman, constitutional law professor at Harvard, who also
architected the Meta Oversight Board, among many other things.
He is also the author of several books.
And our other special guest is Andy Hall, Professor of Political Science at Stanford, who is
an advisor to a16z crypto research, and who also co-authored several papers and posts about Web3
as a laboratory for designing and testing new political systems, including new work,
we'll link to in the show notes.
Our hallway-style conversation
covers technologies and approaches to governance
from constitutions to crypto, blockchains, and DAOs.
As such, we also discuss content moderation
and community standards,
best practices for citizens' assemblies,
courts versus legislatures,
and much more where governance comes up.
Throughout, we reference the history and evolution of democracy,
from ancient Greece to the present day,
as well as examples of governance from big companies like meta to startups like Anthropic.
As a reminder, none of the following should be taken as tax, business, legal, or investment advice.
See a16zcrypto.com/disclosures for more important information, including a link to a list of our investments.
So I want to start with, like, a really broad question. Maybe it's too broad.
But what is the right model for internet governance?
We have all of these companies that host platforms that people participate in,
that they build on top of, that they connect and communicate on.
And so a question is like, what is the right way to govern those platforms?
Right now, the most popular ones at least are owned by corporations
and they sort of get to call the shots there.
Is that the way that it should be?
Realistically, in a world that we all live and work and play in? May I push back
a little bit on your formulation? Corporations don't exist in the ether, much as we
sometimes like to fantasize, either positively or negatively, that they do. They're products of law,
usually Delaware law, because of the weirdness of how the American federalist system works. And they're
governed by a plethora of obligations and regulations that come not only from their place of
incorporation but also from state law everywhere, and federal law. And a lot of the duties that they have,
they can't get rid of, even if they tried to. So if you make and sell a product,
you're liable for the bad stuff that happens to people as a result of that product, even if you
pretend you're not and even if you don't want to be. That's just to give an example of a duty you
can't get away from. That means that all corporations are intensely regulated all the time already,
even before we get to the specific regulations that are relevant, say, to a user-generated-content
social media platform. So the first thing is you're going to be governed by state and
national laws. And in a more tenuous way, you're going to be governed by those international
laws that your government bothers to apply to you. And, by the way, there are plenty of
governments that are democratically constituted. And, you know, a bunch of these laws are
presumably coming from the will of the people. Yeah. And some of them are undemocratic and lousy,
but they still affect you if you want to do business in a country.
So, you know, all of these things are in place.
That's the first thing I would say.
So there already is a lot of governance that comes from governments, like the good old-fashioned
kind of governance.
Then there's a further and really important question of whether there should be more governance
coming from governments, and if so, of what kind.
And that's a very hot topic right now.
The U.S. Supreme Court this year is deciding three different sets of issues all related
to what are the rights of platforms, what are the rights of users, what is the role of government
with respect to social media. And until now, there's been exactly zero Supreme Court doctrine
on social media companies, and now we're going to have a whole bunch of it. So this is a major
transformational era. Unfortunately, we don't know the answers to any of those things yet,
definitively. We have sort of tea leaves from the oral arguments. So that's another really huge
issue. Like, how far should it go? How much regulation? Should the states of Florida and Texas
be able to weigh in on content moderation? Yeah. So you're referring to
the NetChoice case that the Supreme Court is reviewing. And what's at issue there is these
states would like these social media platforms not to censor or moderate certain types of speech.
I mean, that's how the states would put it. The companies would say that the states want to
force them to carry content that they may not want to carry or want to limit the ways that they
can moderate the content on what they call their platforms. And so that's, you know, it's like any
Supreme Court case, you've got two sides.
And those are the two main perspectives.
And then beyond that, there's, to me, a super fascinating question.
I know Andy's thought very deeply about this, too, of modes of regulation that platforms can
choose, first of all, for themselves and use internally.
Then there's potential collaborations between different platforms on collective
self-regulation, which is a form of governance.
And then there's the weird kind of hybrid structure that
the Facebook Oversight Board represents, and then, in a different way, the Anthropic long-term
benefit trust represents.
And those are methods whereby a company creates a kind of hybrid structure where there's some
independent actors whom the company agrees to be governed by for some limited set of purposes.
And that's another really interesting, and to my mind, kind of innovative and cutting-edge
way of doing this, which, like any innovative cutting-edge thing, has its pros and its cons,
and is also very much in an experimental phase right now.
I think we're already getting into something very deep and interesting,
which is like where are the boundaries,
where do we think the real world governance ends,
and when or why would any type of organization want to go beyond it?
It seems like there's at least a few reasons,
but Noah, I'd be curious what you think.
I mean, there's obviously just a sense as a company
or as an organization that you want to do more than government has told you to do.
That's like one possible motivation.
It seems like there's also something about
the global reach of some of these platforms. We know that global coordination around regulation
is very challenging. And so in the absence of that coordination, you might feel the
obligation or the strategic need to do something to coordinate that yourself. And then there's
also something, this hasn't come up yet, but I think a third explanation is something to do
with competition. And I think going back to your original question, Robert, you know, in a world
where we had perfect competition in a very classical sense
between all different types of Internet services,
I think a lot of the governance beyond traditional regulation
would be done by users voting with their feet.
And the challenge we get into is the scale of many Internet services
and the network effects put them in this kind of interesting position
where they're at a very, very large scale,
which makes it in some ways hard for users to vote with their feet
because they like being where their friends are.
But at the same time, there's still a lot of economic competition across services.
So it's not obvious that these companies are monopolies in the traditional antitrust sense.
And it's put them in this weird zone where there is a sense that users are unhappy being locked in to any particular platform.
But at the same time, traditional antitrust tools don't really seem to apply.
And in that weird gray area, it might make sense for large platforms to play with
additional modes of giving their users the ability to decide together how the platform's going
to work in a world where they can't freely, individually move between platforms for the same
service.
And Andy, I don't think you would disagree with this.
There's also the advertisers who, in the perfect economic picture, would just go where the people
were, where the users were.
So in that sense, they seem less important.
But as we've seen in what you might call the X-Files, the advertisers have turned out to be
major players because they don't wait to see what the users will do.
I mean, they also do that, but they take preemptive steps based on their perceptions of what might happen and the reputational costs and all of those sorts of things.
And so they're big players too.
And they're another reason why a company, a platform, might want to have its own self-regulatory mechanisms.
And here I just want to add, in the super polarized world, there are no neutral decisions that you could say, oh, well, everybody should leave me alone because I made a neutral decision.
Whether you take content down or leave it up, you're non-neutral.
Whether you amplify it or don't amplify it, you're non-neutral.
Depending on how much you amplify it, you're non-neutral.
Depending on your algorithm, you're non-neutral.
Everyone has sort of realized by this point, certainly in this ecosystem, that there's nothing
neutral.
And where a lot of people are mad at you and there is no neutral position, you might have
an incentive to offload some of the decisions, just so people will be mad at someone
else, and also because users might not trust you as the platform. And then you might
think: it's better if people trust somebody else more than they trust me, even though they
may not trust them fully either. I totally agree. I think it's an exercise in trying to rebuild trust.
And I think something Noah said is really important, which is that trust in tech companies is falling
in a lot of parts of the world, not in all parts, but in many parts of the world. But at the same
time faith in government is falling in a lot of those same places. And so there is no obvious
actor to make some of these very hard calls in a neutral, procedurally fair way, where the key is
that I think you, as a user, you'd like to be in a position where a platform decides on a
piece of content or what app is allowed or what transactions are allowed in a way such that
even if you disagree with the particular decision, you still buy into the process
by which the decision was taken.
And today, I think there's a lot of people who question both a company's ability to build such a process
and a government's ability to build such a process.
And in the absence of trust in either of those processes, it makes sense from a business perspective
to try to improve your trust in society as a company by finding a third option for a fair way to make that decision.
This sounds like a really good segue into the oversight board, because that is an attempt to satisfy
these conditions that you're talking about, to kind of offload some of that responsibility
onto a third party, to enhance transparency and trust, hopefully, and create a process that
is reasonable and rule-bound for moderation and all sorts of decision-making behind the scenes.
Let's talk about the formation of that and the needs that led to its creation.
I think the core background situation that then-Facebook, now Meta, was facing
when Mark Zuckerberg decided to create the oversight board was the recognition that there were
some very, very hard content moderation questions.
They were still pretty simple at the time.
They were sort of, do you leave up content?
Do you take it down?
They were not as fine-grained as they now have become.
But these were questions on which he might have a strong intuition about what
the right answer was, but reasonable people could differ. And I think the insight that he was able
to see is that actually there might be a right and a wrong decision. I'm not a relativist. I don't
think that there are like always two decisions. No, there might be a right and a wrong decision
on this, but reasonable people could differ about what the right decision is. And it probably
didn't matter that much, from the standpoint of platform governance, which of those two decisions was
taken. But if he made it, it was going to take him a lot of time. It was going to take him a lot of
effort. He had no special expertise in any of the underlying questions. And people were going to be
really angry at him and distrust him so that no matter what he decided, it would decrease the
legitimacy of Facebook as it then was. And so the idea that I proposed to him and that I think
resonated for him was maybe this decision doesn't have to be made by you. In the first instance,
the company is going to make a decision, they're going to have a policy, and they're going to try
to follow that policy. But if it's controversial, why not create a group of independent experts
and bring those questions, the hardest questions, to them, have them answer the question
in light of Facebook's principles, international principles, common sense, good judgment,
have them explain their decision and see if it worked. And I would say that the secret sauce there
is the explanation of their decision.
The secret sauce is that if you're answering a really hard question
where you admit there are different possible answers,
you should explain why you're doing it.
And that explanation, that practice of reason-giving
has the capacity to confer a lot of legitimacy
when it comes to hard decisions.
Because I think it reassures people
that thoughtful people gave thought to this,
that they had some degree of expertise,
that they deployed that expertise,
and that they tried their best.
It's not a magic solvent.
Some people will still be really mad about the decision, but everyone will be able to say,
I hope, this is the closest thing to a fair decision-making process that we could have come up with.
And my last point on this is, obviously, this idea of legitimacy and fairness did not come out of the blue.
It's borrowed from real-world institutions, especially courts, that often are not elected.
Often their decisions aren't appealable.
If it's like a Supreme Court or a high court, they're the last
word. They're not always right. They're often controversial. People get mad about them. But the institutions
on the whole have a fair amount of legitimacy. I mean, it's a funny moment to be saying it now,
and the Supreme Court has done a lot of self-harm on the legitimacy front. But the Supreme Court still
has much greater legitimacy in the eyes of Americans. And that's even after a range of other things
that the Supreme Court has done that have
undermined its legitimacy, and the majority doesn't seem to care. So the idea that you can get legitimacy
from reasoned decision-making was borrowed from that realm and applied experimentally to the oversight
board. And I think on the whole it's working: not perfectly, but it is working. But that's a topic
for further discussion. You mentioned that this is a funny moment to be talking about the legitimacy
of the courts. Now, of course, it's an institution that's hundreds of years old, but based on
the way things have shaken out, is there anything you would have done differently in your conception
of this sort of analog body
to a court system for...
Well, actually, if I can answer it in the reverse way,
there is something that I'm actually kind of proud of,
which differentiates us from the Supreme Court.
The oversight board is a pretty big body of people
who decide cases,
sometimes all of them, but sometimes in subsets.
And it isn't partisan.
So it's not set up so that when you change
the composition by a vote or two, everything flips on you, which is what happens at the U.S.
Supreme Court. And then, you know, with retirements and deaths, it can all
change in a second. It's highly partisan, and it's very responsive, some of the time, to political
changes. Those are all features that we intentionally did not put in the oversight board, to lower the
temperature around its decision-making process. Is this something that governments should
take into account? Like, if you were setting up a new government and a new court system, would you make
changes of this sort? Yes, absolutely. I mean, it's crazy that the Supreme Court, I mean, you have to go
back to 1787. People died a lot younger. Being on the Supreme Court was not that good a job. A bunch of people
who had that job would quit and go back to private law practice because they couldn't stand the job
and it didn't pay enough, and you had to get on a horse and ride around and hear cases and all that. It just wasn't
that pleasant. Now, people live longer. They get into that job and they never leave. You know,
they leave the court when they die, a lot of them, which is very, I mean, it's unhealthy. It's not good.
And we also have a much more polarized politics than they certainly did at the beginning,
although there have been periods of intense polarization in U.S. history. So, yeah, if you were
designing it from scratch now, you would have staggered, time-bound terms. It wouldn't be, you know,
the luck of, you know, Justice Scalia died, Barack Obama
was president, but Mitch McConnell was able to block the confirmation hearings. I mean, that kind
of crazy town, hardball politics should not affect the composition of the Supreme Court. And since
one of my jobs is that I watch the court closely, part of my job shouldn't be having to have
a not-quite-expert but pretty good amateur's knowledge of the health of the individual
justices. If they're diagnosed with a certain cancer, what are the probabilities that they'll live,
and for how long? That's distasteful, and also absurd, that that should be
part of the way we decide things. We shouldn't be thinking in those terms. I want to raise one thing
that I want to make sure we spend more time on how the board is doing because I think there's a lot
of interesting different aspects of that to unpack. One thing I just want to raise, that Noah and
I have talked a lot about in the past and lots of other people have thought about, that I think
is important, is one important difference between, say, the Supreme Court, or U.S. courts, or
courts in general, and the oversight board, which is whose laws they're interpreting.
So Noah referenced this earlier: they're basically considering cases on the basis of Facebook's principles, or Meta's principles, and I think that's the logical way to start. But one thing people obviously then point out is, well, you know, the Supreme Court isn't referencing a single corporation's rules. It's referencing democratically written rules that the legislature writes.
Nevertheless, it's an interesting, I think, thing to consider in the long run.
Can you get a sufficient amount of legitimacy, as Noah called it, from a court online that's interpreting the rules of the company when part of the concern is whether the company is setting the rules in the right place or not?
And so one thing people have talked quite a bit about, but I don't think we've cracked, is how would you democratize the legislative process as well, so that something like the Oversight
Board would be making decisions with respect to democratically written policies.
Yeah, this is super interesting.
And let me add one more twist to the way things have been playing out, because this is
something that has not been broadly discussed or broadly covered, except in a very narrow
group of specialists, mostly academics and, you know, NGOs who follow the oversight
board.
The oversight board has been really worried about the thought that they're just following Meta's
rules.
They're more ambitious than that.
They want to follow something more democratic.
but they can't use the laws of any particular country
because there isn't one particular country that's binding.
So what they've been doing is they're claiming to apply
international law, the international law of free speech,
which is not anywhere near as democratic
as the laws of a democratic country,
but it is sort of second order democratic
because these are treaties that were enacted by countries,
some of which or many of which were democratic,
and which all countries recognize as legally binding.
Now, the upside of that is that it sounds a lot more legitimate
than we're deciding these important questions
based on meta's rules, and it's also a little more democratic.
The downside of it is international law actually isn't that well set up
to answer these questions because it's not designed to govern what a platform should do.
It's designed to govern what countries should do.
That's why it's international.
It's the law of stuff between countries.
So the Oversight Board has had to be really creative.
Relying on some work
by scholars, they've basically created a whole new body of what they call international law
that they say binds Meta and should, in principle, bind other platforms.
And they're out there making this law and there's no one to tell them no.
So I just supervised a student who wrote a dissertation saying this is unjustified.
But my response to that was always, that might be true, but who's going to stop them?
And they're trying to do that.
So that's a bid that is out there, and it's interesting to watch, and as I say, it hasn't been very well covered.
But I also, I do want to get back to the point that Andy's talking about, which is what are the experimental, creative things that a big platform like META could use to bring democratic input to bear on some of the hardest decisions that it makes?
Well, it might be good to start with the history, which is that this is something apparently Mark Zuckerberg has thought about for a long time because back in the 2000s, when the platform was still pretty young, they had a sort of a disagreement among users about what the new community standards should be.
And Mark held a global referendum on the platform.
I mean, it's super interesting.
You can actually go on the Internet Archive and see the video
he posted encouraging all the users on Facebook (there were about 300 million at the time)
to vote. And what they learned through that process was that it's actually really hard to get people
to vote. So out of the 300 million users, I think something like 60,000 of them voted. And the
reason for that is probably because, you know, most people are not logging on to any part of
the internet to do the hard work of governance. They're logging on to have fun or see their
friends or whatever. And in addition, the decisions being considered were pretty
abstruse. You had to read these, like, 100-page documents and stuff. So I think it was a really
innovative idea that suggested early on this idea that there's a way to potentially make
decisions about a platform democratically, but it turned out to be challenging to make work in
practice. And so now, if we fast forward to today, yeah, I think lots of people are interested in
ways to set at least some kind of broad, simple policies where the values that users hold are relevant
to the decision through some kind of democratic process. But no one has really figured out how to
solve this core problem of participation, as well as informed participation, as well as, if
it's a platform that's global in scope, how do you actually get people in different countries
who speak different languages to even work together to figure it out?
What if they have radically contrasting values or goals?
There's a lot of really hard questions, but I think there's, yeah,
there's at least two veins of experiments worth going deep on.
One, which I think Noah...
By the way, voter participation rates are a problem, not just in online voting,
but in, you know, IRL voting too.
That's true. That's true.
Although the participation...
It's a giant thing, and you could make it mandatory.
You know, you can make it so you open Instagram and you can't use Instagram unless you vote on
these issues.
But then you're not going to get highly informed voting, right?
I mean, these problems that he's describing, just to make them concrete, because I think
they're two kind of good examples.
One is the problem that, I don't know if people who are listening all have heard of this
or not, because it may be a little dated, but they call it the Boaty McBoatface problem.
Oh, yeah.
This was Reddit, right?
No, it wasn't Reddit.
It was the British government.
The British government decided they were going to use, you know, this new amazing thing,
the internet, to gather votes on what they should name a new battleship.
And, you know, the winning vote-getter was Boaty McBoatface. Which I think we can all agree
is a fantastic name for a battleship. I don't see the problem. Yeah. Her Majesty's ship,
I guess it was then Her Majesty's ship Boaty McBoatface, somehow it didn't land for the Royal
Navy. And then the other example that I like to use, and Andy mentioned this, is let's imagine
you're setting a nudity policy. Well, there's one policy that works in the south of France,
on a beach, which is more permissive than what we have in the U.S.
And then there's Saudi Arabia, which is a lot less permissive than we have in the U.S.
And it's not obvious that either of those should control the entirety of the platform.
It's not obvious that our model should control the platform either.
It's mostly a default that, you know, a lot of these platforms are U.S.-based companies,
and so U.S.-based standards end up controlling.
Americans think, oh, those are the reasonable standards.
But, of course, there's no inherent reasonableness answer to the question of, like, how much skin you should show.
That's a classic example of something that different cultures are equally confident that their way of doing it is the only right way to do it.
So that's a genuine challenge.
And then that leads to thinking, well, maybe we should have regions with different rules in different regions.
And it turns out it's harder to do that on a global platform than you'd think.
And most of the global platforms have tried as much as they can to avoid that.
They don't want this kind of extremely balkanized platform where it all depends on where you
logged in from. And, you know, if you have a VPN, you can get different standards. I mean,
you can imagine what a mess it can quickly become. So those are part of the challenges.
And then the last part of the challenge is all of our examples of democracy, Andy, you can,
you can correct me if this is wrong because you're the political scientist. But at least the ones that
come to my mind. All of our examples of democratic governance assume a denominator that is the
relevant group of people. It's the people who live in your town are going to vote on town policy or
your state, your country. We don't really have true democracy at the global level. And so that
raises the question of who's the denominator? Who's the relevant community to vote on these issues?
And maybe it's the users, but a lot of issues affect non-users. And so, you know,
Is it everybody, including non-users?
Is it a subset of the users?
That problem is, I think, the very first problem to grapple with,
and in a lot of ways, it's the most challenging.
Let's unpack the Boaty McBoatface thing for a second,
because I think it's really important.
It's a great example of two, or really three,
deeply related problems in the use of voting to make collective decisions.
The first is just the lack of participation.
So very few people actually voted on the,
the Boating McBow face decision, because why would most people pay attention to even know about it?
Second, which is closely related to the lack of participation, is that who then are the weirdos
who choose to vote when not very many other people are voting? It's people who have very extreme, weird preferences, like the kind of people who think it would be hilarious if there is a boat named Boaty McBoatface. And so those two things go together quite tightly. Low participation,
and then selection for unusual,
unrepresentative extreme views
or preferences or trolling.
So I think that's like an important thing to understand.
And then the third, which is also deeply related,
is just the general problem
that you're asking people to vote on decisions
and they have no skin in the game.
And that, again, lowers the incentive to participate
and lowers the incentive for people
with representative views to participate.
And I think Noah's...
What if you had to serve on that ship? You might care a little bit more.
Well, that kind of gets to Noah's denominator point.
I think there's a very broad challenge.
It's much broader than just democracy,
which is how do we govern things that are not economic in nature
in ways that are smart
when the people we're asking to do the governance don't have skin in the game.
And that was the problem with OpenAI's governance.
It's been a challenge with university governance.
The trustees, in some sense, don't have skin in the game.
If you measure the skin by how much money they put in, they would say, I mean, not that I'm always sympathetic to them, but the donors would say: we're the only ones with skin in the game.
You know, we're donating the money.
All you guys are doing is taking a salary.
That's a good point.
That's a good point.
That's what they say.
I mean, I'm not saying I'm on board with it.
Anyway, go on, yeah.
Yeah, and when we look at all these voting problems, if you're asking people to vote on things that don't clearly affect them, that's a reason you might not get very informed participation.
Furthermore, when you ask large groups of people to vote,
you have this problem we call the paradox of voting,
which is that even if you do care a lot,
even if you do feel like you have skin in the game,
if lots of other people are voting,
then you know your vote doesn't matter.
It's not going to swing the outcome.
And so you can care infinitely much about the decision,
and yet your incentive to actually pay attention to vote is very, very small.
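The paradox of voting that Andy describes can be made concrete with a toy calculation (our illustration, not something from the conversation): if every other voter flips a fair coin, your vote only changes the outcome when the rest split exactly evenly, and that probability shrinks on the order of one over the square root of the electorate's size.

```python
from math import comb

def pivotal_probability(n_other_voters: int) -> float:
    """Chance your single vote breaks a tie, assuming every other voter
    independently votes yes/no with probability 0.5 (a toy model)."""
    if n_other_voters % 2 != 0:
        raise ValueError("use an even count so an exact tie is possible")
    half = n_other_voters // 2
    # Your vote is decisive only when the others split exactly half and half.
    return comb(n_other_voters, half) * 0.5 ** n_other_voters

# The incentive collapses as the group grows:
for n in (10, 100, 1000):
    print(f"{n:>5} other voters -> pivotal with probability {pivotal_probability(n):.4f}")
```

At ten voters the chance of being decisive is about one in four; at a thousand it is a couple of percent; at national scale it is effectively zero, which is why caring about the outcome doesn't by itself translate into an incentive to vote.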
So we put those things together, and we ask, okay, how are you going to do something democratic to make a big decision on a big online platform? I think you have to start with this fundamental set of problems: you want to get people to participate, you want them to pay attention, you want them to feel like it matters. And how on earth do you do that? I think there have been several directions of experiments that are pretty fascinating. One, which Noah alluded to earlier, is what Anthropic, as well as Meta and others, have been playing with, where you don't use voting.
You use these things called citizens assemblies.
And then the other is what's going on in crypto,
where you're experimenting with different types of voting
that try to draw in only the people who have
some sense of skin in the game
through the tokens that they hold.
So those are, I think, two sets of experiments
we can learn a lot from.
And I would just add one more framing thing here.
most forms of democratic government in the history of the world have tried to address these questions
by saying we're not going to have
direct governance where every single person
has to vote on every issue
because that's also like a time-consuming thing
and so one way to solve both the
how much time do you put in
and the how much do you care
and the skin in the game is to have
representative decision-making.
This is the move from, like, ancient Athens, where everyone can show up and vote (to be clear, every free man can show up as a citizen and vote in the assembly), to a system where you elect people and your
representatives are professionals or quasi professionals and they do the decision making and the voting.
They have skin in the game. They have the job of caring about getting reelected based on whatever
incentives they have. And that gives them the incentive to try to guess or figure out what you're
thinking. And one of the really interesting things about the internet, and I think it's an important
background to when we dive into both crypto and citizen assemblies, is that from its early days,
the internet has been, broadly speaking, the ideology of the internet or the ideologies of the
internet have been really skeptical of the idea of elected representatives. There's this impulse,
and it's really interesting to explore why. Is it because of hacker culture? Is it because of coder culture?
Is it the personality type of the people who first got really good at using the internet
before everyone used it? I mean, there's a lot of possible explanations. But there's been a deep
impulse to get around the idea of elected representatives. And both of these experiments,
the citizen assemblies and the crypto or DAO organizations, are attempts to, like, say, we are going
to reinvent the wheel. Like we're not going to do the thing that all democracies have done,
namely rely on elected representatives. We're going to do something different and better.
And that's kind of appealing because especially if you have some skepticism in our elected
representatives, it's a good time to try to reinvent what they're doing. But there's also a reason
to be modest when all of the countries in the world that use a certain technology, namely democracy, have converged to a first approximation on one solution to this problem, namely elected representatives. And you're like, nah, they don't know what they're talking about. We can do better.
Maybe you can. It's definitely worth trying. But you want to have, like, some degree of modesty about the probability of coming up with something that, in 2,500 years of thinking about this, no one has yet really managed to solve in any other way.
I want to build on that quickly because I think it's really important, and I completely agree with Noah, 1,000%. There's something about technologists and people online that makes them really
crave direct democracy. And one of the key claims they make is that technology is going
to allow for direct democracy, that it didn't work in the physical world because it was too burdensome. This was going to be my question. Like, does the internet, is it a step change? Like,
have we entered a new era where, based on the tech that's available now, this is conceivable?
There are no certainties in the social sciences, but this is as close to a certainty as I'm willing to go: the technology is going to make no difference to this problem. The nature of it is just the way it is.
It goes back to the paradox of voting I was talking about. It's very burdensome to be asked to
understand and become informed on every possible decision that a group is going to make.
And you might think, and this is, I think, the fallacy that a lot of people in tech have committed,
you might think that having more, quote unquote, democracy by putting more votes to the people
is going to get you, you know, a more empowered user base.
But the opposite is in some sense true.
The more things you ask them to do, the more burdensome it is for them to do it, the less that they'll choose to do it. And that then creates a really important and
dangerous vacuum because now you have very few people voting. And then interest groups can come in
and capture the decision-making process at a relatively low cost because there's not that many
other votes out there for them to compete with. And this is, I think, fundamentally why
Yeah, no successful society on Earth has stuck with direct democracy for very long.
And it's why technology is not going to solve the problem, because it's actually not a problem about voting in person being too difficult or anything like that.
It's a problem of information acquisition and analysis.
And I'll just note on that point, people have also claimed that we'll be able to provide voters with all the information they need to make every decision through some kind of app.
I've had a lot of undergrads in my office pitching me on the next app that's going to inform us all about politics so we can vote on everything ourselves.
And the problem is that information or that data is not enough.
You actually need to analyze it and then decide what to do.
And that's actually quite difficult.
And so being buried in data on what your community is voting on is just not enough to help you do this.
And so I think it's a completely fundamental problem that goes way beyond technology.
that has to be solved through other models besides direct democracy.
And I'll just note, almost all of the big voting DAOs in crypto now have some form of representative democracy as a result of this.
If you're listening to this and you think, no, you know, Noah and Andy are wrong, you know, like, technology...
Yeah, you guys are just elitists. Come on. Let the mob rule.
What I would say is, if you're an aspiring technologist, ask yourself: what's your vision for the company you're going to found? And in 99.9% of cases, people are like, oh, I want to found the company, and then I want to have founder shares, and I want to control the decision making. And then you say, well, why? And they say, well, because otherwise, like, my VCs will first get involved, then they'll be shareholders, they're going to dilute my votes, and they're going to take the company and do all kinds of stuff I don't want done with it. And I'm the one who knows, because it's my company. I spend the most time on it. I care about it. That's exactly the problem that Andy's describing. So I always find it amazing when, you know, those students come in and say that, because I say to them, oh, are you imagining a company where you will have no special input into how the company runs? You know, and every decision will be made collectively. And they're like, are you crazy? Of course not. And I'm like, well, then there's a contradiction in what you're proposing. So the intuition, you know, that technology will change this, you can easily fix that intuition by just asking yourself: when you get rich with your own company, do you want to be in charge of it? And in my experience, almost everybody thinks the answer to that is yes.
although there are some tweaks, right?
I mean, so another thing that Anthropic did,
and I know about this because I was involved
in advising them on it,
is they've created a long-term benefit trust,
which is a trust that appoints at first, just a few,
but eventually will appoint a majority of members
of the board of directors of Anthropic,
and the people on the trust
who are going to appoint those board members
are not themselves shareholders.
So what Anthropic is doing there is, it's not the full OpenAI thing, where the parent board was completely made up of people who had no stake in the company, which blew up, as we know. This is a more modest, in-between version
where there will still be shareholders on the board of directors. But the idea is to have some
external check. So it's a little bit better. But that's very rare. So let's talk about the example of Anthropic. You're saying that in this case, there are actual shareholders that are able to serve on the board, and that this is different from Meta's Oversight Board, and it's also different from OpenAI, where people did not have a stake in the company. And so maybe there will be a more representative collection of views in there for keeping the concern going and also mitigating risks that could arise. Yeah, it's designed to avoid problems on both sides.
So take the OpenAI problem. You want to have some, you know, break-glass measure. And so the break-glass measure was they created this nonprofit entity, and that was the overarching body that the board of directors belonged to. And then underneath that board of directors was the for-profit entity that we think of as OpenAI, the actual company that does stuff. If the people on the board of directors of the nonprofit, who had no financial stake at all, believed the company was going the wrong way, they could break the glass and fire the management of the company. And they did that. They believed that things were going terribly awry, that this was bad for the world, and they broke the glass, and they fired the directors of the company, the management of the company. Until it backfired, and the glass blew back in their face.
Exactly. They didn't realize that what would then happen would be that Sam Altman would then
say, well, I'll just go to Microsoft, and I'll take every single employee in the company with me,
and then the employees all went online and signed up that they would go with him, which is maybe
not so shocking because of course you're going to agree to jump ship. And so then it backfired.
And then the members of that board directors had to resign. So what you need in the real world
is something that doesn't go that far where there are people who are not financially incentivized
who serve on the board. But there are also people on the board who understand the real world
and who do have financial incentives. And they develop a relationship with each other
so that it is possible for the people who are worried about a problem and want to break the glass
to do so in a thoughtful and responsible way where it won't blow up in their faces.
So that's what the Long-Term Benefit Trust is trying to do.
And it does give some reasonable responsibility to people who don't work for the company.
I mean, in that sense, it's a kind of cousin of the Oversight Board.
But it's different from the Oversight Board because those folks don't have day-to-day decision-making responsibility. They're there at the level of oversight, to break glass in an emergency, whereas the Oversight Board is meant to weigh in every day; you know, there are hard problems that Meta is dealing with, and the Oversight Board weighs in on them. So I would call them cousins. They have some conceptual similarities, but they're not siblings. They're definitely not twins. It seems a little bit interesting that you have
a certain set of people who have this opportunity working side by side with people who are, I assume, just getting a regular salary.
And then how you get those people to really care about, you know, the decisions that are being
made.
They have to care based on their reputation.
And this was relevant at the Oversight Board, too.
So how do you get the Oversight Board members actually to care that they do a good job?
It's not so simple.
But the short answer is it's nobody's full-time job.
And the jobs that they have are as activists or as scholars or as people who care about
free speech.
And so the idea was that they will care a lot about their own reputations, and that will give them the incentive to do a good job.
It's not perfect, but it was the best we could come up with. If it was their full-time job,
their incentives might be too closely allied, you know, with the power of their institution,
and they still probably want to do a good job, but it wouldn't be that they had a reputation
to preserve elsewhere. So that was a kind of complicated, compromised decision, because it turns out that it's a hard problem. You know, how do you get people to care about their jobs?
Usually it's they get paid more if they do it well, and they get fired if they do it badly.
And on an independent body, you can't really do either of those things.
You know, you can't, like, reward them for making good decisions.
And if you can punish them by firing them for making the wrong decisions, they're not independent.
So then you need something somewhere in between.
And reputation, I think, is the most powerful motivator.
And there, I would just say, you know, you might think, oh, no one's motivated by reputation.
But it turns out there's a whole bunch of people in the world who take the jobs,
that they have, a lot of them are academics who take the jobs that they have rather than some
job that would pay a lot better because they care about the thing they're doing. They like doing it.
And then so you ask, like, okay, now you have tenure. I remember, you know, getting tenure and thinking, oh, this is so great. And then someone said to me, this is so great, you'll never have to do a day's work again. And I thought to myself, like, I never thought of it that way.
Like, I'm so much of a neurotic. I'm going to keep on working hard. But rationally,
if I were a real rational actor, maybe I would stop. And the main reason not to is then my reputation
would be in tatters. Then people would say, as they do say about some academics,
oh, there goes Feldman. He's one of those people who, the day he got tenure, never did anything ever again. And, you know, that hurts. So I care about it not hurting. It's impossible to picture you, Noah, stopping working after tenure. Such a layabout.
So Anthropic is also one of the examples of these citizens' assemblies. So it might be... Yeah, describe the citizens' assemblies, Andy. Let's go into that.
I think the general idea here is you have cases where you're either worried that too many people won't participate if you hold a vote, or that they won't actually know enough about the issue, prior to having some kind of deliberation together and/or briefings on the issue, to make an informed decision. And so, instead of holding a vote, you try to randomly sample a representative group of your citizens, in the case of a citizens' assembly, or users, in the case of one of these online, I'll call them user assemblies; other people call them different things. And then you bring those people together. The idea is, because you've randomly sampled, they'll be representative of the user base of your platform. You get them together, you have them debate the issue, you have them hear briefings from experts so that they're informed about the issue, and then you have them make a decision through voting or some other collective process on whatever the difficult issue is. And this is something that's had some history
in the real world going way back. And it's closely related to this idea from ancient Greece of
sortition. What is sortition? Sortition was this idea that you would randomly choose people to be in charge of various issues. It's election by lottery. Yeah, some people call them lottocracies. Also, the idea is basically: we need someone to have the full-time job of running some part of the government. We don't want to vote for them, because that creates all kinds of weird incentives. So every year, or twice a year, we have a lottery, and we pick one or two people who are going to have this job. And they only do it for one term; they don't have to worry about getting reelected. But for reputational reasons, they try to do a good job, because, you know, they'll look bad if they don't do a good job, and people will think well of them if they do a good job. So you pick them randomly, and then you do it again the next year.
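The selection step behind both sortition and these user assemblies is, mechanically, random sampling, often stratified so the panel mirrors the population. Here is a minimal sketch, with a made-up user base and quota rule (the function and field names are our assumptions, not anything Meta or the Athenians actually used):

```python
import random
from collections import defaultdict

def sample_assembly(users, stratum_of, n_seats, seed=42):
    """Draw an assembly by lot, stratified so each group's share of seats
    roughly matches its share of the user base (a toy sketch)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for user in users:
        groups[stratum_of(user)].append(user)
    panel = []
    for members in groups.values():
        # Seats proportional to group size, chosen uniformly at random within it.
        quota = round(n_seats * len(members) / len(users))
        panel.extend(rng.sample(members, min(quota, len(members))))
    return panel

# Hypothetical user base: two regions of equal size.
users = [{"id": i, "region": "EU" if i % 2 == 0 else "NA"} for i in range(1000)]
panel = sample_assembly(users, stratum_of=lambda u: u["region"], n_seats=20)
```

Pure sortition is just the degenerate case with a single stratum; real-world citizens' assemblies typically stratify on several attributes at once so the panel resembles the population they're drawn from.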
It's sort of like jury duty, but, like, heightened to the absolute utmost level.
Well, the Athenians had very aggressive jury duty.
They had much larger juries, and they paid them quite well, which is interesting, too.
I think the main justification for sortition was this idea, I'm going to paraphrase this,
but there's this great quote from Douglas Adams, the author of The Hitchhiker's Guide to the Galaxy and several other great books,
that's basically like any person who wants to be in charge is ipso facto unqualified to be in charge.
And I think that was the idea of sortition is we don't want these, you know, untrustworthy, overly ambitious, potentially corrupt or sociopathic people who desire power to be the ones in charge.
We want, you know, the average, well-meaning patriotic citizens.
to make decisions instead.
And so the way we're going to make sure we don't allow that sociopath to insert himself
into power is we're going to randomly choose who's in charge.
And I just want to note, it's an interesting solution to that problem.
It also raises a bunch of its own challenges.
If you think that a big part of what makes democracy work is what we call accountability, the idea that the person whose job it is to pay attention, learn the issues, figure out what maps to your preferences and make decisions on that basis, if it's the fact
that they're worried about whether they're going to get to keep their job is what makes them do
all those things well, then sortition is a terrible, terrible method because I have no incentive
to do anything. I've been randomly chosen to do this job, and now I just have to do it no matter
what? I don't have those
strong incentives that are coupled
with my desire to win re-election,
so there's no accountability in some sense
unless, like Noah was saying,
considerations like reputation,
patriotism, genuinely just
caring about the issue, are dominant.
So that's one of the key trade-offs.
If you do a bad job, maybe you'll get ostracized and exiled. They used to do that. They would actually do that, too. But there's another key trade-off, which is that the background assumption of sortition is that you don't need very much: all the expertise you need to do the job, you already have
or can pick up really fast. So, you know, imagine you're running your startup and, you know,
you're going to choose someone to be your CTO. And you're like, oh, I'll do it by sortition.
Like, bad luck for you if your lots fall on me because I don't code well enough to be able to do a good
job of reviewing the other people's code that's necessary in the startup for the CTO. And so,
and there's no chance that in the six-month or one-year period I could get up to speed fast enough. Like, maybe if you gave me a year, I'd make some progress, maybe,
but, you know, I don't have the relevant expertise. So it's also partly based on a very amateur,
in a positive sense, amateur idea of government. Kind of, you know, you can imagine 19th century
British aristocrats who think that like anyone in their club, if called upon, could sit
in government and figure it all out, you know. But life is very complicated today. And the things that government does today are infinitely more complex than they were in, you know, an ancient society like Athens. And so even if you assume that people could have done a good job then, it's not so clear they could do it now. And even the Athenians didn't do it for certain jobs. Like, they didn't pick their generals by sortition. They weren't dumb enough to think that
they could go out and win wars by picking a random person to be the general. That required some
expertise. So that's a second problem. And then a third problem is, is this job the kind of job
you get better at over time? Because if you get better at it over time, you're not going to get a
chance to because you're in for six months or a year and then you're out. And so if it's the kind of
thing where there are upsides to doing it for longer, you're going to lose the upsides. Obviously, there are also downsides to keeping someone in the job for too long. But, you know, it's sort of like term limits on steroids. I completely agree with all that. And so here's where this has gone: I was one of a number of people who helped design this user assembly for Meta. It's called the Community Forum. Based on these concerns, but also knowing that having straight-up user voting would be very challenging, we really wanted to focus on really
value-laden decisions where it's not really a matter so much of expertise, but really it's
about trying to capture in this community,
what are your core values that affect this decision?
And that might be a place where this is a more workable model,
but I do think it's quite limited.
And going forward, I mean, my personal view, I don't know what Noah thinks, but my personal view is, I think these things have to be expanded in some way
to bring in more expertise and to allow for delegation, as we call it in crypto,
because of the things Noah was just saying.
I think it's particularly hard that someone serves on one of these things,
learns a bunch, and then disappears.
And so developing ongoing expertise and the types of decisions
that are socially fraught, going back to the beginning of this whole conversation,
the kinds of decisions a platform might want to give over to its users,
we're going to want people to develop ongoing expertise
and a sense of accountability for that, I think.
And as Andy says, there could also be a lot of upside. So, I mean, take a hard kind of values-based,
you know, ethical or moral question. You know, let's go back to the, what should the nudity
policy be on a social media platform, right? It's a hard question. And you could imagine that
if you take a thousand people and they're genuinely representative of the users, which is a tricky
thing, as we've talked about, but imagine that they are. And, you know, maybe they all have
an impulse one way or another way when they start, but then you give them enough information
for them to have some thoughtful conversations and then see where they land after, say,
you know, three longish conversations over three days. They may all land in a place that's a lot
more thoughtful. In fact, they will land in a place that's a lot more thoughtful than where
they started. And they may have a different perspective than, say, the people who work inside the
company or the outsiders who start off very committed to one point of view or another, either because
they're like, I don't know, they're pro-sex and they want to have as few clothes as possible or, you know, they're deeply religious and they want to have as many clothes as possible.
And so, you know, you could imagine that you get, and I think it's plausible, and I think, you know, in some of the experiments that Andy's run with Meta, you get a more thoughtful, nuanced, balanced answer.
And I think for those purposes, it's great.
And it's better than a focus group because there's maybe a little less control.
You know, in focus grouping, the problem is if you're good at running a focus group,
you can make the focus group say almost anything.
And here, there are more people, there's more space,
there are more protocols to stop the people who are presenting the question
from driving the answer in a particular way.
And it just makes sense that, in general, we would get a better result.
Not every time.
I mean, because there's some scenarios
where deliberation produces perverse results,
you know, where people get into a cascade where, you know,
this is like how people burn witches, you know.
Group think and herd mentality.
Yeah, you get into that.
and then everyone goes towards some crazy extreme thing; that can happen. But there are a lot of techniques that Andy's design builds in that are designed to stop that from happening.
Now, part of this community forum is you're trying to provide information so people can make an educated decision about something and maybe change their mind about something, so they arrive at a, you know, workable solution. But my question there is, how do you ensure that the information and education that people are receiving is non-biased or objective? Because, you know, whoever's presenting you the menu of options has a great amount of control in steering you toward a certain outcome.
Good question for Andy. And then the other tricky problem is, how do you make decisions about what's the right information? Imagine what we're discussing is precisely the misinformation problem. What's our baseline? You know, where do we even start on what is reliable? It's a super challenging problem. I think there's a
couple different answers, none of which are perfect. And this is a broad problem in all democracy,
not just the online world and not just these assemblies, though it's perhaps sharpest in these
assemblies since the entity organizing the assembly makes decisions about what information to present.
I would say a couple things. One is you try not to have a monopoly on information. And so,
a big part of these assemblies is that people get to discuss and deliberate, and they're
completely free to bring in any information they want, gathered from whatever source they want.
So that's, I mean, that's the first, most important thing, probably.
Then, of course, you try to do as good a job as you can of bringing in experts that are
credible to represent the different sides of the debate.
I think it's become in vogue a little bit in a certain part of the Twitter community, or X community, I guess it would be these days, to say that there are lots of fraught debates in society where it's obvious that one side is factually correct, and so we shouldn't show both sides, that both-sidesism is itself a problem. Of course, there are instances and places where that might
well be true. But I don't think you can get people to make an informed decision about any issue
if you don't let them hear both sides or more than two sides of the issue, even if you think
the preponderance of evidence rests more on one side than the other. You really, I think, can't get to an informed decision unless you've heard all the arguments.
And so I think that's a big effort in this: to get a diverse set of experts to present all sides.
But, you know, in the long run, if you really can build out a democratic system that goes beyond these assemblies, you'd want to foster a competitive ecosystem in which different information providers crop up
and provide their different analyses and views.
And that's something I've talked quite a bit about with DAOs, because I think that's an issue in crypto, too.
On that point, there's also a body of literature about, you know, people in brainstorming settings, and how, when there are bad ideas proposed, it actually enables the group to arrive at better decisions over time.
Yeah, I mean, that idea, which, I mean,
the classic modern formulation of that is John Stuart Mill.
You know, his argument for free speech was,
we actually need the bad arguments, because working through why they're bad and wrong and false
will help us get the right arguments.
And it sounds kind of like a trite argument,
but if you really think about it,
it's pretty deep because it's not obvious.
And it's interesting in that way.
I think another really interesting kind of aspect
of this whole challenge to me is jumping out
to the bigger political world of democracy.
You know, so you were asking before, Robert, about, you know, it's a funny time to be talking about the U.S. Supreme Court as a model, and I think the same, in a way, is kind of true about these ideas of democratizing. But one of the aspirations of the citizens' assemblies or community forums that Andy's talking about is to get real disagreement
that is thoughtful that doesn't turn into polarized yelling at each other. And one of the ways to do
that is by narrowing down the issue. So let me point to one weird feature of our polarization.
One weird feature of polarization
is that people turn out to have really strong views
about things they don't know anything about
and they have them instantaneously.
So you're like, how? Like, how do you know? You know, this happens to me too, sometimes.
I'm like talking to somebody and they're like,
oh, I'm sure I think X.
And I'm like, I've known you for 30 years.
You don't know anything about that.
Why do you have this strong position?
And the answer is: as a time-saving device
in a world where there are so many issues
and they're so complicated,
once we've picked a team
(we're blue or we're red,
or we're libertarian or anarchist, or whatever we happen to be),
there's now a list of positions associated with that team.
And so we just default, as a defensible time-saving heuristic, to siding with our team on those
issues, because we kind of think that someone else has thought about it, or a whole bunch of
people have thought about it,
and probably this is where I'm going to end up, because I generally agree with this group of
people on various things.
And one of the things about doing a citizens' assembly or a community discussion about a narrower topic is that you can sometimes avoid this habit of defaulting to a polarized position as a time-saving device, because you're going to be told about it.
Like you're going to get the information in front of you.
And so it's not only that we're closed-minded and don't want to listen to the other side, although that does happen.
We're also partly closed-minded because we're barraged with so much information
that we don't have time to consider every perspective on every issue.
We just don't have the cognitive bandwidth to do it.
I actually think that's one of the problems with our current polarization.
It's not the only cause of it.
There have been polarizations in many societies that do not have anywhere near as much
information as we're getting.
So I'm not making some claim that this is the main causal factor.
But it is one reason that we do better in a well-designed citizens' assembly,
often, than we do out in the wild of politics.
I think that's been one of the most interesting
and, to me, surprising learnings from them, both in the physical world and online: they're
generally organized around the most fraught, culture-wars-type issues, and when they've been
run, one of the big learnings has been that if you get a relatively small group
of people together and you structure the conversation well, people generally are pretty reasonable.
It turns out they don't want to be so partisan or so hostile when they're put into the right environment.
And that connects to other things about our polarization, right, which is that it's much larger than it seems because we consume so much of it through these online platforms where we hear the loudest voices, but we don't see the in-person interactions that turn out to be usually less polarized.
I had a funny example of that for my own life.
I was moderating a panel at Harvard and some students decided they were going to leaflet against me
and they were handing out leaflets to people as they came in.
And the leaflet was like a four-page leaflet, but its text was entirely a download
from a Twitter thread that someone had created attacking something that I had written.
And it was just so weird, because it didn't translate well
to the format of the leaflet.
But also, I just thought to myself: the extremity of the formulations that this person was using
against me online didn't really translate to the forum where
we were having a thoughtful conversation.
And I sort of felt bad for the people who were leafleting, because it's one thing to
call what I said, I think, vapid horseshit, which is what they were describing me as saying.
In context it seemed weird and rude to be saying that, because everyone in the room was
having a rational, reasonable conversation, whereas on Twitter,
I'm sure it seemed great.
And then the punchline was: in the thing I'd written that made them angry,
there was a substantive issue where I'd made an argument, possibly wrong, but I'd made an
argument, and I gave evidence for it, and I supported it.
And their response to this argument was: it's obvious that this is wrong, so we're not even
going to entertain it.
And I was like, that just sounded so absurd in the context of a university.
On Twitter, it sounds great, right?
Like, Feldman is full of vapid horseshit, so why should we even bother to refute him?
All you need to know is that we think he's stupid.
And then that's enough to reach the conclusion.
But in personal conversation, very hard to carry that off without looking silly, I think.
Since you brought up Harvard, are you feeling this intense tension between the administration,
the student body?
It just seems like such a powder keg over there.
It was a very, very, very intense fall.
And I mean very intense,
as intense as I have experienced on a university campus, and I've been on and off of this campus
since 1988. So it was extreme. Now things are sort of slowly getting back to, maybe normal is too
strong a word, but towards a calmer way of being. And I will say, during all of this,
our classes were normal. School went on. People learned stuff. They studied
for exams. We did have a normal campus life, but all of the intensity that was being felt
was felt. So it was an experience, and I think it's fair to say that for
most people, it was not a good experience. Have things subsided a little bit?
Yeah, I think so. You know, so what are universities bad at, and what are they good
at? Universities are really bad at making fast decisions about anything. They don't like to do it,
and if they do it, they make bad decisions. Because if your personality were such that you could really
react in real time, you'd be a trader; you'd have some job where you could
really take advantage of that. And of course, there are some people in universities who are really
intellectually quick, but the best of them use that intellectual quickness to fill in their
depth, and then make the decision over time. So universities are bad at the fast stuff,
and the fall was all fast stuff: responding to a news cycle, issuing statements and
declarations. Now we're entering a phase where, certainly here, the university has two
task forces already set up to study these issues, and there are at least two more coming.
And this is going to be the part the universities do pretty well. Like take a deep breath,
figure out what we've done wrong, figure what we can do better, give reasoned explanations
of how we should do better in the future. And so the nervous system of the university will
return closer to what it's best at. And again, I want to be clear. There are some people
whose nervous system is always like, go, go, go. And that's really valuable in a whole bunch of
dimensions of life, but a university is not really one of them. And so we do our best when we
breathe a little bit. And that's now what's happening. And that's better. So in that sense,
we're heading directionally in the right way, though by no means are we there yet.
I think it's a really good and important example of just the general topic we've been discussing,
because a lot of it has to do with governance and the governance of these super important, very, in some
cases very old, long-running institutions that are not straightforward businesses and therefore
face quite complex governance issues. And certainly some of the ones, you know, I've observed
over the last, let's say, 10 years that make it so challenging is you have the exact same
uneven participation problem. So not every faculty member leans in and participates in the
governance of their university evenly. The ones who choose to do that may have different views than
the ones who don't. And then on top of that, you have, I don't necessarily mean this in a negative
way, but like the mission creep of the university, which is a lot less focused on protecting
academic freedom and developing the very best research in the world and a lot more focused on
a lot of other issues that extend way beyond research. And I think it's hard, it's hard to figure
out how to fix those governance problems because the institutions are so big, entrenched in so many
different areas, and trying to spread their attention across so many different things.
One thing, at least in my view, is to step back from thinking that by making
declarations, the institutional part of the university contributes to our understanding of the truth
and of knowledge. I don't think that's true. If the deans get together, hold a meeting, and
announce that the second law of thermodynamics is true, I don't think that gives us much special
insight into whether the second law of thermodynamics is in fact true. However, you know,
when the people who are at the cutting edge of some area of inquiry published their research in a
peer-reviewed journal, there's reason to take that seriously. It's not always correct. It's always subject
to revision, but I would stop and listen closely to what they had to say and give it some, you know,
some epistemological, you know, benefit of the doubt because they're expert and they're
speaking on their area of expertise in a thoughtful way. And I think one of the things that's
happened is that our universities, and not only universities,
have kind of fallen into this part of the mission creep that Andy is describing: thinking
that they have to express a public statement on every matter of public importance. And I understand
the moral impulse to say the right thing, but that has to be balanced against, are you good
at that? And are you contributing to the university's mission, which is to pursue the truth,
in the broadest sense, you know, through reading, writing, and teaching.
But that's different from, you know, being in the announcement and declaration game.
And I don't think universities are very good at that.
And stepping back from that is by no means an overall solution, but it's the first step.
This is happening all across corporate America and beyond, you know, these calls for activism
and for institutions to take a position on various matters.
I have clients coming to me, corporate clients, every day, saying, we're in so much trouble about this.
You know, and I do try to tell them: there's no such thing as being neutral.
That's the first point.
Genuine neutrality is not possible in the world.
But within the framework of realizing that you can't be neutral, you can sometimes step back
and say, look, you know, we're not going to take a view on this or that important thing.
And the companies find themselves in these positions because they're lobbied.
And they're lobbied by people who are trying to affect consumer behavior often.
And sometimes they're able to pull that off.
I mean, meta's been subject to a boycott.
You know, I mean, other companies, big companies are also subject to various boycotts.
So you are living in a real world.
If you have customers, you have to worry about your customers.
But you also have to be aware that your customers could be all over the place.
And that's one of the reasons to have stepped back policies, including policies of
referring something to some other group of people and saying, this is a really hard one.
we're not qualified to decide this, and we're handing it off to somebody who is qualified
to weigh in on it. I agree, and I think the key value is in tying your hands. You really have to
make what we would call in social science a credible commitment. You have to get to the
point where the people who would try to force you to take a position on an issue that's
irrelevant to your core mission, yet important to them, believe that there's nothing they can do
to make you take the position. And I think the critical mistake
that a lot of the most important universities made,
as well as a lot of corporations made,
was to give in to those demands
and then create common knowledge
that they can be forced to make those statements
and that they aren't committed to not making those statements.
And people often point to it,
but one of the few universities that had less of a problem with this stuff
has been the University of Chicago
because they had made a pre-existing written commitment
to not take such positions.
And that has turned out to be a huge luxury
for them that other universities haven't afforded themselves, because they hadn't made that
commitment, or the commitment wasn't credible. And I think, you know, the central
problem is that over a ten-plus-year period, a lot of the top universities demonstrated that
they did not have a credible commitment to not making those statements. And now they're
trying to walk that back, but it's hard to create that commitment on nothing. I'd be remiss not to
bring the conversation back to some of the subjects we were discussing at the very outset around
internet governance. And you're mentioning this concept of credible commitments in the context
of the University of Chicago. But when it comes to making commitments with internet services
and binding people to rules, I wonder what the path forward is there. When you look at something
like the meta oversight board, they have made some decisions that meta doesn't necessarily
have to adhere to. Now, it adds a lot of transparency to the process. But I wonder if there
are other systems that would, you know, bind services and corporations and applications
to some set of, you know, agreed-upon rules. Of course,
the thing that comes to mind is DAOs, decentralized autonomous organizations, and how you can
sort of create these commitments in blockchain code that bind all participants. I'll just say, I mean,
my view on this: I think there are two things that are interesting about blockchains and
DAOs with respect to this topic. The first is this trust problem and being able to write and
commit to a process. So one of the first things projects do when they start in
crypto is essentially write a constitution, i.e., write down in code how they're going to make
decisions over different issues. And, you know, exactly how binding those are is up for debate. It's a
little bit complicated, but in the long run with blockchain, I think it really is true that the
constitution you commit to through that process in code is a lot harder to change without resort
to some kind of democratic process than if it's not committed in that way, especially if it goes
way beyond what can be protected through normal, real-world legal processes. And so I think there
is something really interesting about being able to build an online community where you can
make a sort of promise into the future that anytime this type of decision comes up, this is the
process by which we're going to make that decision. I think there's something quite interesting
about that. The second is more economic in nature. It has to do with, you know, one of the most
important types of trust for online platforms is the trust between the different sides of the
platform's market. And so, you know, almost all online platforms have this characteristic that
they're trying to bring together two sides of a market: drivers and riders,
or developers and users for the App Store, and so forth.
And you really need the producer side of that two-sided platform
to believe that into the future,
they're going to have an economically beneficial relationship with the platform.
And I think one of the biggest challenges we're seeing in the space right now,
look at like the huge dispute between Epic Games and Apple, for example,
is that developers are starting to feel like it's still a tremendous opportunity
to develop on top of these extraordinarily good,
global platforms. But at the same time, the taxes they pay to the platforms are going up,
the decisions made around their services are changing in unpredictable ways. They would like to have
a much longer-term promise about how the platform's going to treat them. And that's another
place where I think being able to make a long-term, in some sense, immutable promise over the
economic relationship between a platform and its producers, I think could be important.
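The kind of commitment-in-code being described can be sketched in a few lines. This is a toy illustration (not any real platform's or protocol's contract): an economic parameter, here a fee, that has no admin backdoor and can only change through the supermajority vote written into the object at creation.

```python
# Toy sketch of "commitment in code": a platform fee that can only be
# changed by the pre-committed voting process, with no admin override.
# All names and thresholds here are invented for illustration.

class CommittedFee:
    """Hypothetical governance object: the change rule is fixed at creation."""

    def __init__(self, fee_bps: int, supermajority: float = 0.67):
        self.fee_bps = fee_bps          # fee in basis points, e.g. 250 = 2.5%
        self.supermajority = supermajority

    def propose_change(self, new_fee_bps: int, yes_votes: int, total_votes: int) -> bool:
        # The fee changes only if the pre-committed threshold is met.
        if total_votes > 0 and yes_votes / total_votes >= self.supermajority:
            self.fee_bps = new_fee_bps
            return True
        return False

fee = CommittedFee(fee_bps=250)
assert not fee.propose_change(1000, yes_votes=60, total_votes=100)  # 60% < 67%: rejected
assert fee.fee_bps == 250
assert fee.propose_change(200, yes_votes=70, total_votes=100)       # 70% >= 67%: passes
assert fee.fee_bps == 200
```

The point of the sketch is the absence of any other path to changing `fee_bps`: the producer side of the market can verify that promise by reading the code, rather than trusting the platform's future management.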
In a broad sense, that's what a constitution is. I mean, the best metaphor for it is
Odysseus in the Odyssey, when he's on his boat and he's going to the island of the sirens,
and he knows their song is going to be so appealing to him, so beautiful, that he's
going to jump ship. And so he ties himself to the mast and tells his crew members,
don't let me go. That's one of the leading metaphors for what a constitution is.
The idea is sort of like: we know that when the chips are down and we're in a panic,
we're going to take away people's rights. We're going to silence them or we're going to take
away their right against arbitrary arrest. And so we try to bind ourselves from doing that,
and we try to create institutional mechanisms to bind ourselves. Notably, in government,
there are two parts to that. There's the written rules. And then there's the human ability
to interpret and maybe even override those rules. And there's a productive tension between those
things. And it goes all the way back to a debate (believe it or not, I hate to be so ancient
Greek, but we've done a lot of ancient Greeks today) between Plato and Aristotle
about which is better: whether it's better to have the rules in the end in charge, because
no one's perfect, which is the Aristotle view, or whether it's better to have the wisest person
you can get your hands on in charge. The philosopher king. Yeah, because the rules won't
give you the best outcome all the time, which is broadly the Plato view. And they're both right.
You know, you need some back and forth or productive tension between those things.
And that's true in DAOs, where, you know, you don't want the thing to lead to a total spiral down.
You need to have some break-glass measure.
And it's also going to be true in something like the oversight board, where the company in some cases has committed itself absolutely to following its rulings.
In other cases, it's asked for an advisory opinion.
But in the end, if Meta chooses to stop listening to the oversight board altogether, it could.
It would just have to pay a reputational cost for doing it.
So, I mean, I think, you know, this really is in the realm of art rather than science.
You always need to have some of each.
You need to have some meaningful constraint, and then you have to have some capacity for flexibility.
And so it would be nice to say there's like a magic solution to this, but there isn't.
And there's no purely technological solution, but there's no solution that has no technology
because rules and the following of rules are a technology.
So you need both.
And that may not be the most thrilling conclusion, that rules always win or people always win.
Actually, sometimes one wins and sometimes the other wins.
And that's real life.
So maybe that's not a terrible place to end what I have to say on the topic.
I want to build on that in one particular way. One of the things we study a lot in the history of democracy is this idea of constitutions, and what we call in political science and political economy self-enforcing constitutions.
There's no external authority that can bind a country to its constitution.
And so the constitution only has power in the long run to force people to follow the rules
if there's some track record of everyone having agreed to it for long enough
that it accrues some special power over time,
because there's nothing to stop someone from tearing it up and ignoring it.
And so that's why we call them self-enforcing.
They only really bind in the long run
when everyone has a long enough track record of agreeing to be bound by them.
This goes back to the Magna Carta, where King John was forced to sign it, right?
And then he was sort of like, well, actually, I'm not going to pay that much attention to it,
and then there was a battle over the legitimacy of the crown and all that.
There's tons of history of this, and the U.S. Constitution is somewhat unusual
in the length of time for which it's proven to be somewhat self-enforcing.
That same problem exists online.
And I think Noah is right, he was hinting at this:
blockchains, or ways of writing immutable agreements into code,
do not in a comprehensive way solve this problem of self-enforcement,
because people still always have the option to fork and to leave
and to do whatever they want, or to change the code as long as they agree to it.
And so there's still something deep, like he was saying, art rather than science, about it.
However, I do think that for the online world,
blockchain provides something pretty fascinating that I was really blown away by
as a political scientist when I discovered it, and I think it's best highlighted through an example.
So there's a DAO called Lido, and Lido has been discussing publicly for a while its desire
to build in a veto for certain types of decisions.
And what they want to do is essentially update their constitution to say: when there's
this one set of decisions, maybe the DAO just makes the decision,
but when it's this other type of decision, it goes to an external veto.
And when I was talking to them and to other people about how that would work in practice,
one of my biggest concerns was that in the real world, if you try to set some procedural rule
that only applies to one set of legislative votes and not another,
then it's just obvious that strategic actors will channel votes into whichever category is favorable to them.
So if they don't want the veto to apply,
they'll just claim this is one of the votes that the veto doesn't count on.
And because legislatures get to make all their own rules, you really can't stop that.
We've seen that in the Senate.
Occasionally, the Senate parliamentarian tries to stop the Senate from doing stuff.
There's a weird equilibrium where sometimes the Senate defers to the parliamentarian,
but in the long run, the Senate has no real deep obligation to do that,
and can always change its rules if enough people want to.
And so I kind of thought this veto is not going to work, because you can just redefine
Lido's votes into one category or the other.
But the really interesting thing I learned is: no.
Because the topic of the vote is defined by the smart contracts
that the vote actually touches or doesn't touch,
you can define in a very deep and immutable way
whether this is one of the votes that the veto is going to apply to or not.
And that allows for a form of commitment that I've never seen before in a legislature.
I don't think it's a panacea.
I think there are still these broader issues of self-enforcement that matter, for all the reasons that Noah was saying.
But I do think it offers something pretty fascinating in the vein of binding commitments to legislative procedures.
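The mechanism being described, that a vote's category is determined by which contracts it actually touches rather than by how it is labeled, can be sketched roughly as follows. This is a hypothetical illustration, not Lido's actual design; the contract names and addresses are invented.

```python
# Sketch of scoping a veto by the contracts a proposal touches.
# Whether the veto applies is derived from the proposal's targets,
# so a proposer can't opt out by relabeling the vote.
# Contract names below are invented placeholders.

VETO_SCOPED_CONTRACTS = {"0xStakingModule", "0xWithdrawalQueue"}

def veto_applies(proposal_targets: set[str]) -> bool:
    # The veto applies if the proposal touches any contract in the
    # pre-committed scope (set intersection is non-empty).
    return bool(proposal_targets & VETO_SCOPED_CONTRACTS)

assert veto_applies({"0xStakingModule", "0xTreasury"})   # touches a scoped contract
assert not veto_applies({"0xTreasury"})                  # entirely outside the scope
```

The contrast with a legislature is that here the categorization function itself lives in code the voters can't unilaterally reinterpret, which is what makes the commitment credible.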
Isn't this the miracle of the U.S. Constitution, Noah?
I mean, you've written a biography of James Madison, the fact that you can set these rules up front and yet make them flexible enough that they can endure.
Well, if we had Madison here, he would agree that it was his perfect design.
You have to ignore the Civil War and some other blips.
But yeah, I mean, the aspiration was to create something that would be able to be both rule-based
and also responsive to change over time.
Some of the design elements didn't work well.
I mean, the amendment provision, we use it very, very rarely.
He probably thought that was generally okay, but there have been some circumstances
where we really need reforms and we can't really get them.
More fundamentally, it's a system that's designed to enable compromise,
to enable compromise in the middle, and to push politicians back towards the center.
And that's why, you know, right now if you ask about sort of the big question, the grand
questions, you know, plaguing politics and political science, I think it's fair to say that
if you have some confidence that the laws of political science are still true, then you think
that we will migrate back towards the middle because our system is designed to push us back
towards the middle. And if, on the other hand, you're really panicked about where things are going
and the possibility of a breakdown, you might still believe in
the rules, but think they might take too long to operate, and then in the short term you
could get a breakdown in overall faith in the system's capacity. And the other part of it is
looking at the world around us and seeing that for all of our polarization, lots of elements
of our society are still functioning, and functioning really reasonably well.
Amazing. This has been a fantastic conversation. Thank you both so much for all your time.
Thank you so much. Really fun. Thank you. Yeah, this is great.
We sort of took it by fiat that direct democracy has problems. We mentioned Boaty McBoatface, which is a pretty humorous example.
But are there more serious examples from history, from the physical world of direct democracy in action and going awry?
I mean, the most classic example people point to,
which America's founders pointed to, was Athens.
So Athens had a pretty robust direct democracy.
The reality of how it functioned was quite complicated.
It wasn't as simple as everyone showing up and making every decision.
There were actually pretty complex layers of decision-makers and so on.
But there was a significant component of direct democracy among landowning men.
And one way that people say it went awry is that during the war with Sparta
(this is just one telling), the people got swept away with passion
and were uninformed about the strategic decisions that had to be made,
and essentially forced Athens to open a second front in the war
by invading Sicily.
And that sort of turned into Athens' Vietnam.
It was a disaster.
And it massively sapped Athens' power.
It was super, super costly.
And that is often held up as an example of mob rule.
This is because the decision to open a second front and invade Sicily was...
Now, you said there are complex layers of decision-making,
but in this respect, it was kind of bottom-up.
Well, the claim is that the mob was manipulated by ego-driven generals who wanted to burnish
their reputations by opening a new invasion and that they manipulated the mob into supporting
them.
And if you look in the classical era and, you know, you could question the motives of all these
authors, that's sort of the stereotype that arises.
And so, post Athens' glory days, a lot of Roman writing refers to mob rule and the ability of demagogues to manipulate the mob as reasons to be very skeptical of direct democracy.
There's a really famous passage in Virgil's Aeneid where he goes on at length about how cynical leaders can whip up the mob, and what we really need are seasoned statesmen
who will, you know, make decisions carefully on behalf of the people.
You know, more modern examples are hard to name.
It's so dysfunctional that it's really not even tried.
Yeah, I mean, you can point to Switzerland, where there's a pretty aggressive
local referendum system.
I think there's general agreement that it's kind of crazy, but it doesn't work
as badly as some other things.
I mean, at the canton level in Switzerland, you literally
vote on who receives passports, which is really quite wild. Yeah, there's some great research on this.
Bizarre. You know, in the U.S., people point to California as pretty extreme on the end towards direct
democracy, and that can go both ways. In some ways, I think it's an important and potentially
valuable institution, because it allows voters to surface issues that the legislature might want
to ignore. And so if you think your legislators aren't
doing a good job, or are captured by special interests, or for whatever reason are not sufficiently
accountable to voters, then giving voters this alternative mechanism to force issues could be
quite valuable. The downside is that you end up with tremendous voter fatigue, because you have
lots of these votes, and the California ballot is ridiculous. It's many, many pages long. Most people,
including myself, can't understand most of the issues being voted on.
And then the second problem, and this is often a problem in direct democracy, is
interest group capture of the agenda.
And so how ballot initiatives get onto the ballot is complicated, but it's actually not that
hard for a well-resourced, committed interest group to force carve-outs for themselves
onto the ballot.
And so one thing that happens in California is that every cycle, we have to vote on this extremely
abstruse ballot initiative that has to do with whether
doctors should be required to sit in on all dialysis.
And you might think it's doctors who are pushing this,
because you might think, oh, they make money from it.
But no, the doctors do not want to do this, and they think it's crazy,
because there's no medical reason why a doctor
needs to be present for the administration of dialysis.
It's just a ton of time wasted.
And, yeah, it's crazy.
It's being pushed by certain interest groups
that are kind of just more generally in a conflict with doctors,
and they have stated publicly that the reason they put it on the ballot
every two years is to force the doctors' interest groups
to spend money convincing everyone not to vote for it.
So it's a complete waste of everyone's time and resources.
And so that, I think, is a great example of how these processes get messed up.
By the way, the Irish referendum just happened.
My brother-in-law was just visiting from Ireland
and was interested in this going on. It was interesting to see people reject so strongly these proposed
changes to the Constitution. I don't know if you're tracking it. No, I haven't followed that very
carefully, but I do think that's a good example where referendums, if they're embedded into a broader
process in a healthy way, can be a really good way to get more signal for voters. And this happened
in California. I think it was 2018. We had a bunch
of pretty culture-war-related referendums, or ballot initiatives.
Yeah.
And quite consistently, the voters really signaled through them
that they were in a more centrist position than California's elected officials.
And I think that had a big impact on how the elected officials proceeded from then on,
after they lost those big ones.
So I do think it can be valuable, but it's complicated in how you enact in practice.
And I think that's kind of where DAO governance is heading.
There will still be some direct token holder voting,
but there will also be a lot more delegation
to professional experts on these issues.
You need to protect against mob rule,
but also plutocracy and find some sort of, you know, middle way.
Yeah, and I think that the delegation stuff is very valuable,
but it's not going to solve the participation problem fundamentally.
You're still going to need the token holders or the other voters
to pay attention and make sure their delegates don't go rogue.
That's going to require some pretty careful planning, because there's definitely
a temptation to set it and forget it in terms of delegating your tokens.
But we have lots of reasons to suspect that if that behavior manifests regularly,
the delegates won't have the incentives we want them to have to do a good job.
So I think a lot of the most interesting work right now in Dow governance is around how do you build delegation programs that, first of all, recruit and give good incentives to delegates, while at the same time, second of all, still encourage token holders or other voters to pay attention and to think about re-delegating their votes on a regular basis so that delegates feel like they're being watched.
What are people putting in place? What sorts of new rules or experiments are happening?
First thing that's happened, which I think is really interesting, is a bunch of the largest DAOs have instituted what I would call delegate programs; they each have a different name.
These often combine paying the delegates, sometimes as a function of how many votes they accrue, with an online web interface that makes it easy for token holders to delegate to different delegates.
So you're creating incentives for there to be delegates,
and you're helping token holders find delegates.
And we've actually done some research.
There's some pretty interesting evidence that rolling out those programs
actually does increase token holder voting participation.
Probably because you're asking the token holder to do a much easier task.
Instead of voting on, like, literally changes to the underlying protocol's code,
you're just asking them to find one of these delegates who you like
and give them your voting power.
And part of these programs is also having the delegates write platform statements and or post videos about what they want to accomplish as a delegate that, again, helps the token holders find, like, oh, you know, that's a delegate that shares my views. I'm going to delegate to them for now.
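The delegate-program mechanics described above can be sketched in a few lines of code. This is a toy model, not any specific DAO's implementation; all the class and method names, and the flat per-vote pay formula, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Delegate:
    name: str
    statement: str          # the platform statement shown to token holders
    votes_accrued: int = 0  # total voting power currently delegated to this delegate

class DelegateProgram:
    """A toy delegate registry: delegates post statements, holders delegate tokens."""

    def __init__(self, pay_per_vote: float):
        self.pay_per_vote = pay_per_vote          # pay scales with votes accrued
        self.delegates: dict[str, Delegate] = {}
        self.delegations: dict[str, tuple[str, int]] = {}  # holder -> (delegate, tokens)

    def register(self, name: str, statement: str) -> None:
        self.delegates[name] = Delegate(name, statement)

    def delegate(self, holder: str, delegate_name: str, tokens: int) -> None:
        # Re-delegation moves the holder's full balance to the new delegate,
        # modeling the "re-delegate on a regular basis" behavior discussed above.
        if holder in self.delegations:
            old_name, old_tokens = self.delegations[holder]
            self.delegates[old_name].votes_accrued -= old_tokens
        self.delegates[delegate_name].votes_accrued += tokens
        self.delegations[holder] = (delegate_name, tokens)

    def payout(self, delegate_name: str) -> float:
        # Compensation as a function of votes accrued, per the description above.
        return self.delegates[delegate_name].votes_accrued * self.pay_per_vote
```

The point of the sketch is the incentive loop: delegates accrue pay only while holders keep tokens pointed at them, so a holder re-delegating away is a direct, legible sanction.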
Fascinating. So this is a paper that you're working on currently?
Yeah, yeah. I'm hoping to have it done in the next month or so. We'll see.
Very cool. And so basically getting people to delegate their votes more, making it low friction, making the information available for people to make reasonable decisions about who they want to delegate to and the directions that they'll take a DAO.
Exactly. Yeah. No, and I think it's going to be a really important model, because this came up in the conversation with Noah: a lot of people in other parts of online governance are coming to the same conclusion, that we need representatives because we need this kind of expertise and accountability, and we can't rely on direct democracy. But web3 is years ahead, literally years ahead, in experimenting with how you actually set up representative democracy. So I think it's going to be a really important development.
I mean, we've got this intense, like, Darwinian combat going on,
you know, all these experiments just let loose, and seeing which ones will flourish. How far away are we from,
you know, actually determining what works, what doesn't work,
and seeing, you know, the kind of best practices shake out from all this?
I don't know. I think there's probably two aspects to it that we need to have happen before we'll know
for sure. One is to get DAOs to a place where they have, you know,
broader, killer use cases for society, which will in turn make the governance decisions
higher stakes for society. And then we'll see
which governance structures can stand up to that pressure.
In some ways, they're already involved in high-stakes decisions,
certainly in these, like, DeFi protocols and stuff,
but they don't have the same kind of global public pressure on them
that other online platforms have faced
because other online platforms are much more mature,
have many more regular users and so forth.
So I think that's going to be a really big, interesting shift as it occurs,
one that's going to bring DAO governance more together with what we talked about with Noah in terms of, like, Web 2.0 governance.
And then the second is we need, over time, to have more DAOs with broader distributions of voting power, which is happening over time.
You know, a big criticism of DAOs historically has been, you know, they're clothed in the rhetoric of democracy, but the voting power is very unevenly distributed.
And that may make some of these, like, delegation experiments sort of
unrepresentative of what would happen in a broader democratic system
where there's more conflict among users with roughly equal amounts of power.
And so that is the trend, I think, in token holding over time.
And again, as these become larger, as they touch on more killer use cases for society,
I think we'll see that broader distribution.
And that will be another kind of pressure test for these systems.
So those are the two things I'm keeping my eye on.
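The "unevenly distributed voting power" criticism above can be made concrete by measuring concentration of token balances, for instance with a Gini coefficient. This is a generic illustration with made-up balances, not a claim about any particular DAO's distribution.

```python
def gini(balances: list[float]) -> float:
    """Gini coefficient of token balances: 0 = perfectly equal, approaching 1 = highly concentrated."""
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    # Standard formula for sorted, non-negative values.
    weighted_cumsum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_cumsum) / (n * total) - (n + 1) / n

# Hypothetical example: four holders with equal stakes vs. one whale holding everything.
equal = [10, 10, 10, 10]        # gini ~ 0.0
whale = [0, 0, 0, 100]          # gini ~ 0.75
```

Tracking a statistic like this over time is one simple way to check whether the broader distribution of voting power the speaker anticipates is actually materializing.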