Bankless - The Very Serious Impact of Misinformation | Nico Perrino
Episode Date: October 7, 2024. In this episode, we sit down with Nico Perrino, a leading advocate for free speech and individual rights, currently serving at the Foundation for Individual Rights and Expression (FIRE). Nico shares his views on the increasing government control over digital speech, the role of social media, and how misinformation and disinformation are shaping our society. We dive deep into the debate around censorship, the dangers of having a "Ministry of Truth," and the future implications of AI on free speech. Whether it's the growing concerns about misinformation or the battle between government intervention and freedom of expression, Nico gives us a fresh perspective on navigating these complex issues. ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 ⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 🗣️TOKU | CRYPTO EMPLOYMENT https://bankless.cc/toku 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🦄UNISWAP | BROWSER EXTENSION https://bankless.cc/uniswap ⚡️CARTESI | LINUX-POWERED ROLLUPS https://bankless.cc/CartesiGovernance ------ ✨ Mint the episode on Zora ✨ https://zora.co/collect/zora:0x0c294913a7596b427add7dcbd6d7bbfc7338d53f/74?referrer=0x077Fe9e96Aa9b20Bd36F1C6290f54F8717C5674E ------ TIMESTAMPS 0:00 Intro 2:46 Nico Perrino: Free Speech Advocate Introduction 3:17 Social Media Misinformation & AI Concerns 7:14 Debating 1984 & Ministry of Truth 11:49 Misinformation, Fractured Sources, & Consensus 16:36 John Kerry on Governance & Free Speech 21:56 Fractured Truth: Media Referees & COVID 23:26 Moloch's Trap: Conspiracy Theories and Control 24:23 Misinformation Amplified by Algorithms & Bias 32:56 AI as a Tool for Finding Truth 36:59 Misinformation vs.
Disinformation 39:58 The Pavel Durov Case 46:50 Comparing Telegram & Apple Encryption Cases 48:05 Brazil's Ban on X & VPN Penalties 52:58 Balkanization of the Internet and Free Speech 56:10 UK Online Safety Act 59:46 Global Misinformation Laws: Australia to U.S. 1:08:00 Government's Role in Protecting Free Speech 1:12:50 Bottom-Up Solutions for Truth & Speech 1:20:23 Inevitability of Censorship's Failures 1:22:33 Advancing Free Speech 1:23:01 Closing & Disclaimers ------ RESOURCES Nico https://x.com/NicoPerrino ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
I do wonder, though, whether the crisis of truth really stems from our fractured media
environment or from the institutions that we entrusted with the power to sort truth from
falsity failing us.
Welcome to Bankless, where today we explore the frontier of the problem of misinformation.
This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become
more bankless. The question today: is misinformation a problem that governments really have to address?
They certainly seem to think so.
A few current events that raised eyebrows on the bankless podcast here.
People getting arrested in the UK for social media posts.
Like what's going on there?
Brazil banning X and the kerfuffle with Elon Musk.
California speech bills requiring the labeling of satire that's here in the United States.
Pavel Durov arrested.
This is adjacently related, but he was arrested for not producing chat logs for various governments around the world.
Could other tech CEOs get arrested for breaches of misinformation laws?
All of this today prompted the conversation with Nico Perrino.
He's our guest.
And he argues that the misinformation problem is massively overstated.
And if our governments aren't careful, they're all going to speed run us into totalitarian surveillance states through speech regulation.
I think David and I tend to agree with him.
You can listen and decide for yourselves.
I think the big takeaway I have from Nico from this podcast is the idea that when there are
massive technological shifts in society, society tends to think that these are unprecedented times.
We've never seen this before, all rules need to be thrown out the door, and, you know, we just need
to figure out things in this present moment. When in reality, you know, times do rhyme. We have seen
this before. These are not unprecedented situations. And actually, first principles are first principles
for a reason. They are timeless. And I think that is kind of the position.
I think that you and I probably have. There's ancient wisdom that we've learned. There are things
that we know are unequivocally good. Free speech is unequivocally good. And let's not be so hasty to
throw away things that we've learned in the past just because we think AI is materially different.
And I thought that perspective was pretty useful, along with all the other perspectives that Nico
brings to this podcast today. So before we get into the episode with Nico, actually, we have a
favor to ask you, Bankless listener: if you think that we deserve a five-star review on whatever podcast
player that you are listening to this on, Spotify, Apple, wherever. We would appreciate that five-star
review. We don't really typically ask for these often, but we haven't done one in a while. We try
really hard. If you guys agree, please give us that five-star review. Now let's go ahead and get
right into the episode with Nico. But first, I'm going to talk about some of these fantastic
sponsors that make this show possible, especially Kraken, our preferred exchange for crypto in
2024. If you do not have an account with Kraken, consider clicking the link in the show
notes to get started with Kraken today. If you want a crypto trading experience backed by
world-class security and award-winning support teams, then head over to Kraken, one of the longest-standing
and most secure crypto platforms in the world. Kraken is on a journey to build a more
accessible, inclusive, and fair financial system, making it simple and secure for everyone,
everywhere to trade crypto. Kraken's intuitive trading tools are designed to grow with you,
empowering you to make your first or your hundredth trade in just a few clicks. And there's
an award-winning client support team available 24-7 to help you along the way, along with a whole
range of educational guides, articles, and videos. With products and features like Kraken Pro and
Kraken NFT Marketplace and a seamless app to bring it all together, it's really the perfect
place to get your complete crypto experience. So check out the simple, secure, and powerful
way for everyone to trade crypto, whether you're a complete beginner or a seasoned pro. Go to
kraken.com slash bankless to see what crypto can be. Not investment advice. Crypto trading involves
risk of loss. The Arbitrum portal is your one-stop hub to entering the Ethereum ecosystem.
With over 800 apps, Arbitrum offers something for everyone.
DeFi, where advanced trading, lending, and staking platforms are redefining how we interact with money.
Explore Arbitrum's rapidly growing gaming hub, from immersive role-playing games and fast-paced fantasy MMOs to casual luck-battle mobile games.
Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and on-ramps.
Step into Arbitrum's flourishing NFT and creator space, where artists, collectors, and social communities converge, and support your favorite streamers all on-chain.
Find new and trending apps and learn how to earn rewards across the Arbitrum ecosystem with limited-time campaigns from your favorite projects.
Empower your future with Arbitrum. Visit portal.arbitrum.com to find out what's next on your Web3 journey.
Uniswap wallet is officially the preferred wallet of Bankless, and it's the one we use any time we want to transact on-chain.
Whether you're on your browser or on the go, Uniswap wallet makes it easier than ever to swap anytime, anywhere.
Use your wallet to transfer funds directly from a top centralized exchange and tap into
thousands of tokens across Ethereum and over 10 other chains like Base, Arbitrum, and Optimism.
Uniswap wallet delivers deep liquidity, fast execution, and reliable quotes with zero gas swaps
through Uniswap X. And when it comes to security, you can rest easy knowing it's backed
by Uniswap Labs, one of the most trusted teams in DeFi. Their code is open source and independently
reviewed, so you know it's protected. So why wait? Download the Uniswap wallet today on Chrome, iOS,
and Android. And don't forget to claim your free uni.eth username directly in the mobile wallet.
Start swapping smarter with Uniswap.
We're pleased to introduce you to Nico Perrino. He's a prominent advocate for free speech and individual rights.
Of course, things that we regard quite highly on Bankless. He's currently serving at FIRE,
which you guys know is the Foundation for Individual Rights and Expression. That's the F-I-R-E.
He's also the creator and host of a podcast that just talks about free speech. It's called
So to Speak: The Free Speech Podcast. We've got a fellow podcaster here, all in one package.
Nico, welcome to Bankless.
It's good to be here, Ryan and David.
Thanks for having me.
All right.
So a topic that's been on our minds, Nico, that we want to like tap your brain into here and just like have a conversation about it.
There's this drumbeat, I think right now, almost like a memetic drumbeat, let's say, in Western liberal democracies.
And it goes something like this.
Social media misinformation is bad.
AI is going to make it worse.
The market has no way of solving this.
So we need misinformation laws to stop it.
Or we'll fracture into chaos and elect strongmen, authoritarians, and descend into madness. I think it's a pretty
broadly accepted meme. I guess I want, at the highest level, your take on what is wrong
with this meme? Well, someone has to decide what is mis and disinformation, right? So you need to
have essentially established some sort of ministry of truth to determine what is true and false.
And we all read, or at least I hope we've all read, George Orwell's 1984. The main character in that book, Winston Smith, worked at the Ministry of Truth of Oceania.
And his job was to essentially review the news and ensure that it followed the party line and where there were inconvenient facts, excise them from the news and from the narrative.
So I don't know how you get to a place where you have a government ministry of truth that everyone accepts.
And then we build consensus that way.
People are concerned about mis and disinformation.
There was a study done by the World Economic Forum of 1,500 experts.
And they were asking these experts what the biggest global risk is.
And the first thing that they listed was mis and disinformation.
This is above cyber insecurity, societal polarization, extreme weather risks.
Mis and disinformation was the thing that they were most concerned about.
And there's pretty broad support within America for policing mis and disinformation.
55% of American adults believe that the government should restrict false information online.
Wow.
That's up from 39%.
And they say specifically the government should restrict.
The government.
That's up from 39% in 2018.
And there are 65% of Americans who believe that tech companies should restrict false information online, up from 56% in 2018.
So you see this trend whereby Americans are increasingly concerned about miss and disinformation online.
But if you ask them about the types of misinformation Americans are most concerned about, the number one leader is politicians spreading misinformation.
51% say that's the most concerning form of misinformation above social media companies failing to stop misinformation.
That's at 35%.
So you have a 16-point delta there.
So politicians still lie the most.
That's at least consistent.
And we know that, right?
The weapons of mass destruction, right?
The CIA torture.
So we've seen politicians spreading misinformation throughout American history.
So what do you do about misinformation?
Americans and experts are concerned about it, right?
Well, you could have the government step in.
Like, that seems to be the natural solution.
But Americans also don't trust the government to police misinformation.
Indeed, they think they're the biggest spreader of misinformation.
And then you could have tech companies do it, I guess.
but would you trust them to do it correctly?
So this is always the question when you're debating free speech topics is who decides,
who is that angel out there with whom you would entrust the authority to decide for you
what speech you can hear, for you what music you can listen to, for you what movies you can watch.
There's not anybody in the world with whom I would trust that responsibility.
So it's better to let me just access the information and decide for myself.
And I think that's how most people approach the issue.
It's always the other guy who's easily duped.
It's never myself, right?
It's never me who's duped.
I really want to drill into this point, Nico.
You brought up 1984, and that's a very common thing to use as an analogy as like an
illustrative idea when we are talking about free speech.
And I think everyone listening to this podcast will be like, oh, yeah, like Ministry of Truth,
like automatically bad.
But I really want to double tap on that because, you know, the 1984 book, which is so famous,
really painted that in a terrible light.
But, you know, what if that is actually the best solution?
Maybe it's just like democracy. It's just the least worst solution that we have, right? And so I think
that people understand as a meme that 1984 is bad, a ministry of truth is bad. But that's because
that's kind of just built in culturally into our reaction to these words, ministry of truth. Yeah.
Like, I really just want to double down on, like, how do we actually know that this is actually just
a negative outcome that we should avoid? Like, governments organize other things. Governments
aren't a complete failure across everything that they do. Why is this one different? Yeah. Well, the appeals
to 1984 can become a cliche in my world. So point taken on that front. So there are a number of
ways you can approach this. And I think the first is to just ask ourselves whether misinformation
is more prevalent now than it has been in the past. I mean, you need to accept that premise,
right, in order to move on to the next question that is then what do we do about it? So the average
United States internet user spends less than 10 minutes a day consuming news online. And misinformation
according to a 2021 study represents 0.15% of the average American's media diet. So it's minuscule. Now,
that assumes that you can actually identify what is miss and disinformation, which is hard itself, right?
Not everything is black and white. But if that's the amount of information that Americans are consuming,
then you have to ask yourself, is that a problem? And is it worse than it has been historically?
I'm not sure it is. Americans throughout history have been easily duped, particularly
at moments of technological revolution.
So as society was becoming more literate, for example, people were easily duped by anything
that was put in writing.
There's that famous anecdote when people were put in front of a television screen or a movie
screen for the first time and saw a train barreling toward them.
They jumped out of their seats because they thought there was an actual train barreling
toward them.
You have Orson Welles's radio broadcast, War of the Worlds, in 1938 that created a panic
in the streets of New York.
Then you have the internet, right? We had a famous Time magazine cover where you had a child in front of
a screen screaming, and the headline was just "Cyberporn," and it discussed the sort of parade
of horribles that were going to come with an unfettered and open internet. I think what you're
ultimately going to need, and society is going to need to develop, is a sort of media literacy
and understanding that not everything that you read on the internet is true, and that
you are going to need to determine for yourself where you go and find your truth.
The problem is, right, when you take that next step, you determine that this is a problem,
take for granted it is a problem.
How do you actually solve it?
I think there are some technological solutions and some companies that are doing some
interesting things like X's community notes, right, which even applies to Elon Musk.
I've seen it on a couple of his tweets in the past.
But when you get into the government, you have to be really confident that this is a problem, because look at authoritarian dictatorships throughout history, right?
Hitler censored because of what he called Jewish lies and Marxist propaganda.
You have Kim Jong-un in North Korea and Xi Jinping in China, for example, who are censoring based on what they say
are false international narratives that undermine society's consensus and ability to move forward.
You see it all across the world with Vladimir Putin, for example, after the invasion of Ukraine.
So this is a tool, if you give the government the power to censor misinformation, that will be wielded by those in power against political dissidents and those who dissent from society's broader perspectives.
So there's two interesting points there.
One is just, like, questioning the premise that we have more misinformation now than we've had previously.
And I think that's an interesting premise to sort of question.
Maybe we'll dig into that later.
But one point that I might make, too, is that maybe we don't have more misinformation, but it sort of
feels like we have more fracturing in information sources. So it's very interesting
in the early 2000s with weapons of mass destruction thing, right? That turned out to be misinformation.
But we had societal consensus that that was reality and truth, almost because we didn't have
fringe social media internet platforms saying, like, are you sure they have WMDs? And here's a paper
that I read. And they may not have it. So maybe what we're struggling
with is not so much misinformation, but a difficulty in coming to consensus on what reality is.
And it might be that all these information sources are actually healthy because we sort of
question the mainstream narrative or the government narrative in a way that we didn't previously.
I'm kind of unresolved on that in my own mind. But the broader point that you're making,
I think your thesis here, is basically that when it comes to government regulation, some sort
of ministry of truth, the cure is worse than the disease.
So maybe there is some misinformation problem out there that's been amplified by the
internet. But if you're going to solve that with the government, look at all of the ways
we've seen throughout history of how it can and inevitably will go wrong. I think that's
the broad point that you're making. Yeah, that is the broad point I'm making. But I take the
consensus point seriously. There was a recent video going around X. I think it got something like
35 million impressions, of John Kerry speaking at some World Economic Forum event.
And he was asked about mis and disinformation and the challenges with governing in the social media
age.
And he said it's harder to build consensus today than it has been in the past when you had something
like four or five television stations and a couple of national newspapers.
And he said authoritarian countries can move faster than governing democracies because they can
build consensus easier, right?
They do it at the point of a gun.
Here we have to do it at the ballot box.
And he suggested in the latter part of that clip that, you know, the way to win is to win at the ballot box.
And some people interpret that to mean amending the First Amendment to the United States Constitution.
Oh, really?
Yeah.
Nico, I saw that clip, too.
So let's maybe start by steel manning the case that people make for misinformation.
Why don't we just play that clip?
Let's play it now.
Sure.
And I think the dislike of and anguish over social media is just growing and growing and growing.
and it's part of our problem, particularly in democracies,
in terms of building consensus around any issue.
It's really hard to govern today.
You can't, you know, there's no... the referees we used to have to determine what's a fact
and what isn't a fact have kind of, you know, been eviscerated to a certain degree.
And people self-select where they go for their news or for their information,
and then you just get into a vicious cycle.
So it's really, really hard, much harder to build consensus today than at any time in the 45, 50 years I've been involved in this.
And, you know, there's a lot of discussion now about how you curb those entities in order to guarantee that you're going to have, you know, some accountability on facts, etc.
But look, if people go to only one source and the source they go to is sick,
and, you know, has an agenda and they're putting out disinformation,
our First Amendment stands as a major block to the ability to be able to just, you know, hammer it out of existence.
So what you need, what we need is to win the ground, win the right to govern by hopefully having, you know, winning enough votes that you're free to be able to implement change.
Obviously, there are some people in our country who are prepared to implement change in other ways.
And that's really the question of whether democracy can survive unregulated social media.
I think democracies are very challenged right now and have not proven they can move fast enough or big enough to deal with the challenges that we are facing.
And to me, that is part of what this race, this election is all about.
will we break the fever in the United States?
Wow, so that was John Kerry speaking on a panel at the World Economic Forum, the WEF, which is interesting, and some phrases that I caught, Nico.
I'd like your commentary on what you think he was saying there, because that could be interpreted multiple ways, and there were parts that were very explicit, and there were parts that were vague.
But he's saying it's hard to govern right now.
I think all politicians sort of feel that.
The referees aren't here to kind of build consensus, right?
We don't have the media truth, the consensus machines that we used to have.
People don't trust the media, and, like, you know, they're fracturing in terms of their information
sources.
So the question, he said, is how to curb these entities.
It's a little scary, how to curb these entities.
And the First Amendment is a block to curbing these entities.
It's interesting because, like, I, and I know you do, hold the First Amendment in very high esteem.
We wouldn't say it's a block.
We'd say it's a guarantee that we actually are able to preserve a free society.
And he said that those that win the right to govern might be able to implement change. So some vague statements there. I mean, to me, my interpretation is that kind of points towards, hey, if we win, if our side wins, then we're going to have to do things, you know, to curb misinformation in some ways, whether through executive order, through legislation, etc. But we'll run into some blocks with the First Amendment. We'll just have to navigate that. And we can, because we have won the right to govern. So we'll have some power here. But how do you interpret it?
I'm not sure.
Christopher Hitchens has this saying. The late, great Christopher Hitchens once said that one of the worst tactics in public argumentation is to assume that your opponent's worst possible motive is their only possible motive.
So I always try and be charitable when I'm interpreting what people say, especially when it's unclear.
For me, it was a Rorschach test on X, right?
You had a certain segment politically aligned, presumably with Kerry, who interpreted
him as saying you need to build governing coalitions through the ballot box. You need to appeal
to a wide swath of people, not just to your political pole. And then you had the other segment,
not inclined to interpret him charitably, saying that what he's asking for is a constitutional amendment
to curb the First Amendment because he's totally right about the First Amendment. The First Amendment
doesn't have any categorical exceptions to false information. In fact, there was a case in 2011
called United States versus Alvarez that involved the Stolen Valor Act, which essentially said you can't lie about winning military medals.
And there was this man, Alvarez, who lied about winning the Medal of Honor, America's highest military honor.
Wow.
And the Supreme Court said the law is unconstitutional.
You have the right to lie.
You have the right to lie.
His First Amendment protected right to lie that you won a Medal of Honor.
Yes, yes.
So Justice Kennedy, in his plurality decision, wrote: permitting the government to decree this speech to be a criminal offense, whether shouted
from the rooftops or made in barely audible whisper, would endorse government authority to compile
a list of subjects about which false statements are punishable. That governmental power has no clear
limiting principle. Our constitutional tradition stands against the idea that we need Oceania's
Ministry of Truth. Here we go back to George Orwell. It's deeply embedded socially.
Yeah, but he's right. It was a block. The First Amendment in this case was a block on the
government saying you can't, you know, pretend or lie about being a winner of some of our most
prestigious military medals. Do you guys know that I've actually won a Medal of Honor?
You're wearing it right now. I'm not actually wearing it right now. True story. True story.
I do wonder, though, whether the crisis of truth really stems from our fractured media
environment or from the institutions that we entrusted with the power to sort truth from falsity
failing us.
You just think back to COVID, for example, right?
And there was a huge discussion around the science.
Some of the things we thought were true in 2020 turned out to not be true in 2024, whether it's masking or children in schools or whether the virus leaked from a lab or from a wet market.
And tech companies and the government to a certain extent policed this information.
And I tend to see free speech as a way of developing trust, not deteriorating trust.
Because think about it like a criminal trial, for example.
You can trust the outcome of that trial if you saw the trial itself as fair.
It's not a star chamber.
It's not a show trial.
And you can think about that in the same way with our public debate.
If the public debate you feel like is fair, you can trust the outcome of that public debate, whether it be public policy or policy implemented by tech companies.
But what people saw during COVID and during other periods is that censorship happened.
And people weren't able to try and poke holes in the scientific consensus.
And therefore, they just didn't trust the outcome.
And then it deteriorated the institutions.
I mean, there's an element of they tried to do the WMD thing of the early 2000s with COVID information.
where all of traditional institutional media, also backed by the government, said, this is the actual truth here. And now they couldn't do that as effectively in 2020, 2021, because we had the internet, because we had social media, because we had this fracturing of truth. So you're sort of making the case that, like, maybe the problem isn't the fracturing of media. Maybe the problem is that our referees, as John Kerry said, are actually bad referees. And, like, you can't get away with that any longer.
Did you guys ever see that documentary by Andrew Callaghan, This Place Rules?
It's an HBO documentary.
I didn't, but I'm a big fan.
He's the All Gas No Brakes guy, right?
I believe so.
He became famous by making social media videos interviewing people who aren't traditionally
interviewed, you know, sort of people at the margins of society.
Yeah, and so he created this documentary about January 6th and kind of the events leading
up to it and surrounding it.
And there was this amazing quote that he kind of interjected into the documentary where
he said, when you take someone who talks about a deep state conspiracy to silence him and his followers,
and then you silence him and his followers, it only really adds to his credibility.
So, you know, the long and the short of it is, when you're
dealing with people who believe there's a conspiracy to shut them up, don't try and shut them up, right?
It just sort of supercharges this lack of trust that exists within that community.
But there's that weird Moloch trap, though. Moloch, I don't know if you're familiar with this idea of Moloch, Nico. It's a quip that we use on Bankless a lot. It just kind of means a trap of coordination failures, where you could make any conspiracy about anything that you want, so long as it also has this underpinning of, oh, and also the government wants to hide this. You can make a conspiracy about anything. It's like, the government doesn't want you to know about this, and it kind of turns into a self-fulfilling prophecy. I mean, for instance, David Hoffman does have a Medal of Honor that the government does not want anyone to know about. And they've been trying to silence this on Bankless for a very long time. We're just coming out with this in today's podcast.
Well, you do need a sort of scientific method whereby the claims are falsifiable.
Right.
Right.
And if you're essentially positing a theory that's impossible to falsify, then we should look at that with extreme skepticism.
Yeah, this is interesting.
An interview that we recently did with somebody that David and I respect a lot from his previous books and literature, Yuval Noah Harari.
And he's got this book called Nexus.
And an underpinning of the book is basically like social media was sort of our first act
in AI, these AI algorithms, essentially engagement AIs, that sort of thing.
And our next act is going to be all of this amplified. And it's resulted in a massive, chaotic
misinformation, disinformation problem. And it's like maybe point taken to some degree, but his
solution to that is basically government regulation, which is the thing that gets pointed to.
But can we go back for a second and actually just, like, continue to steel man the case before we
sort of apply it to some of the events taking place? Because there are some out there, Nico,
who would say, look, misinformation is not a problem at all. Like,
There's no such thing as like, say, Russian bot farms, you know, instilling propaganda.
Like, that's not a real force. That's something that, like, X political party would say.
You know, they would just downplay misinformation so much that they would deny its existence, let's say.
And I want to kind of, like, steel man the case because this is sort of the Yuval Noah Harari case.
He's basically, like, algorithms are optimizing for engagement.
That's kind of what they do.
They're paid to suck out, like, your eyeballs.
and so they're not optimized in a capitalist tech economy for truth-seeking or for self-correction,
kind of the virtues we said. They're not optimized for the scientific method. They're just optimized
to give you kind of like the tabloid thing that triggers the dopamine. That's going to be
outrage and stuff like that. And so that's how misinformation kind of like propagates.
And then we have these chaos agents, maybe foreign actors. We can't tell online in any of the
social media platforms who's a human and who's a bot. So we have all of this interference
by like bots, maybe some of these are foreign adversaries, for instance. And then we have like
extreme ownership bias. So, like, now we have, if the platform is owned by somebody more right-leaning,
like X and Elon Musk, right, he's going to trigger the algorithm to amplify his ideology. And if
it's more left-leaning, maybe something like Threads or some of the, you know, Reddit, for example,
then they're going to amplify kind of, like, you know, something that's more left-leaning. And then
you also have audience capture where you're stuck in these silos of, you know,
of just people want to consume the information that fits with their existing ideology, right?
And so, you know, you go to Fox News or you go to MSNBC, depending on your political affiliation.
And then we have this other effect of the Internet, which is like the fringe is now able to find one another.
Right. So like in the early days, if you're consuming media from kind of like your regional newspaper, right, it's going to be, your fringe ideas are going to be kind of diluted by the populace around you.
But now you can go into like a telegram group and find all of the fringe crazies that subscribe to your belief system and communicate with them and spread your ideology.
And like these are basically problems, they would say, in steel manning the case, that are unique that did not exist in, you know, the first part of the 20th century.
And these are internet problems amplified by social media.
Do you acknowledge that these kind of problems exist and are the roots of what we're dealing with from a misinformation side?
To a certain extent, right?
It's difficult, right, when you say that the outcome of accepting that premise is that the government needs to be involved and regulate.
And, like, I just will never accept that outcome or that solution, right?
But if you look at the premise, I guess I would just need to see more evidence that mis- and disinformation is the problem.
Like, what are the best examples we have of mis- and disinformation creating societal discord?
Or are the problems of mis- and disinformation so diffuse, because they're created by bots and they shape
cultural narratives, that you could just never really identify whether mis- and disinformation is creating
a problem within society? It's creating that polarization. But then you get into the falsification
problem, right? So people cite as examples of mis- and disinformation that incident where you had
the QAnon conspiracy theorist arrive at Comet Ping Pong in Washington, D.C. with a gun in 2016.
People talk about the election in 2020 as well. You have the Russian bot farms, but it's unclear what
effect that actually had. So, like, where is the mis- and disinformation creating these problems that
are so acute and so challenging that we then need to go to the government to help determine
what is true and false and then censor accordingly? So I don't know. I mean, what do you guys think
are the best examples of mis- and disinformation? Best examples of misinformation... I was thinking about it
before I came out here. And I mean, I just really have a hard time thinking of the examples.
Well, so there are probably, if I could spend a few minutes on kind of research like this,
best examples that the whole entire world knows about, like, examples of this. I think in the
crypto world, however, we are in this, like, constant battle for attention everywhere. Like,
projects need attention. Meme coins need attention. Like, everyone is vying for attention on Twitter,
on X. And this has turned into, like, this just massive Darwinian competition space for attention
at all costs. And there are some projects that are super honorable and legit. And in the
crypto space, there are many, many projects where, like, the incentivization here just, like,
collapses down to just, like, the worst possible behaviors. And you get the full kind of spectrum.
So I think in the crypto space, the media literacy is actually pretty strong, because if you
are media illiterate, like, you end up buying a scam, like, really quickly. Yeah. The way I would
phrase this, like, there's lots of different citations you could kind of make. I guess maybe the
COVID example is a good one, but that kind of came from the mainstream and was more, like, disproven by the
internet, which is probably counter-evidence to this.
Success story, yeah.
But I would tell you that from my perspective, it just seems so difficult to find out what's
true today.
Yeah.
Like, we are sort of thrown headlong into kind of like the consensus finding meat grinder
online so that it's very difficult for me to tell what's true and what's not, at least in
the moment.
I feel very alone when I navigate the internet.
and it's up to me to determine what is true.
And I'm in chat rooms with like-minded friends
who have disagreements and people that are like
I would spend all day with.
And then there's this one fact that we just cannot get over.
Yes.
And when we get to that point,
I just feel alone because there's no university for me to go to.
There's no government ministry of truth for me to rely on.
And so like, you know, honestly,
I'm very grateful for my partner here, Ryan,
because, I mean, I think very similarly.
Yeah, you have to do this, like, with a small tribe
of, like, truth-seekers.
And you can only do it with one narrow set, right?
So like David and I can try to ascertain what's true in crypto and what's not.
But if you go outside of our zone of expertise, we're automatically relying on someone else.
Right.
So it creates this feeling of chaos.
And I think this is part of the explanation for like the growing nihilism that we sort of see all around us.
It's just like people are like, well, who knows what's true?
Nothing is true.
And so like I don't believe anything from anyone.
Right.
And so there is no truth.
And like, why bother?
is kind of an outcome of this.
Well, are you guys coming to that conclusion, too?
It sounds like David and Ryan, when you have a truth claim that needs to be analyzed,
you're going out there and trying to find more information about it.
I just don't think in the past people had that ability, right?
You could pick up the morning newspaper and that was it.
And that's all the information you had.
That was it.
Or you could look at your encyclopedia if you had one of those on your shelf.
But now we do have access to the world's information, at least for now.
We can get into that topic a little bit later.
And so things aren't always as clear or as black and white as you might think they are.
There are a lot of claims that are somewhat true.
Anyway, so we can question things more easily because we have access to more information by which to question them.
You know what?
I think you're right here because another thing in the case that I was making earlier is the statement that AI will make it worse,
which seems to be sort of part of the memetic thesis, like social media misinformation is bad, but AI will make it worse.
And I can see why they say that.
They point to kind of this massive flood of content that seems human but is not produced by humans.
Anyone can produce a scientific-seeming white paper and post that.
You could generate that at scale.
And, you know, AI has really passed the Turing test.
So again, back to the statement that you don't know who's a human and who's a bot online.
And yet, there's the other side.
That's kind of the glass half empty side of AI.
On the other side of the equation, I find myself using AI models as a truth-finding
mechanism. And it's been so helpful for me in trying to parse. More than asking my friends.
And so you can type into Perplexity. And you just type in any kind of statement. We'll probably, at
times, string this episode with, like, real-world events. And you get the best digestion I've seen
of, like, actual facts. Like, they surface, to the individual consumer who's, like, quote-unquote doing their own
research. Not perfect facts. We don't know if they're perfect facts. But they're cited. Yes. But they do
have citations. Yes. But you need to go look at the citations yourself. Right. I don't think either of
us uses Perplexity as, like, tell me the truth. It's more like, give me the top of the funnel. Yes. Like,
show me the rabbit hole, and, like, spit me out some version of events that's probably, hopefully,
approximate to the truth, and then show me the eight different, like, sources that you've got that from.
Exactly. And then you can even prompt it and ask, like, you know, what are the sources that disagree with
this? Right. So you could ask it to steel-man the other side, and you can kind of, like... Right, you could do
your own research, I would say, at a level that was previously unprecedented for kind of, like,
any topic, if you want to go deep on it. So I guess that's an example of AI actually helping,
and that's kind of what you're saying. Well, it assumes that the data that the intelligence is
built on is good, right? And so you could have a problem whereby misinformation gets solidified
within these models if it's built on bad information or it's built on other AI generated
information that is only like 98% correct. And that's when you get toward that model
collapse scenario that some people talk about. But we're in the early phases of
artificial intelligence. And these are problems that are solvable, presumably. What's the model
collapse scenario? Can you illustrate that? Well, it's the idea that you have artificial intelligence
creating content that's posted online. Then you have the models that are consuming that content.
And if that original content wasn't correct, you just kind of get this vicious cycle where you have all
these models training themselves on content produced by artificial intelligence that was incorrect.
And it just keeps going. Right. So we kind of have like a chicken and egg problem here, right? So we have like
AI trying to parse truth and then we have AI trying to produce bullshit and maybe some AI trying
to produce truth, but more AI trying to produce bullshit. And it kind of goes back to there's like
there's no actual truth in the world. There's only interpretations of facts. And so, you know,
AI can help you both solve the problem and produce the problem like almost equivalently.
Yeah, it's an arrow in the quiver, right? Or it's a tool in our toolbox that we can use to help
arrive at truth. And it's just another kind of tool in the ever-evolving internet toolbox.
Again, getting back to that original point, if you and I were having a conversation about world events or anything else, the only source we would have had 50 years ago was the newspaper in front of us, right?
Or whatever the four channels on the television, if we happen to have access to the television or the radio, we're telling us.
Now we just have so many more tools, and it can be overwhelming.
I give credence to that.
But for me, I would rather have access to more tools to decipher the truth than fewer.
Yeah, I would 100% agree with that.
One of my worries, Nico, is that there's too few of us that actually do that deep dive in kind of like research.
But maybe that goes back to kind of your solution of media literacy and redefining sort of what that means.
We'll get to that in the solution point.
One thing definitionally I want to ask you, though, Nico: I've mostly used the term misinformation in this podcast.
And when you've said it, you've said misinformation and disinformation.
And there is actually a distinction that I've learned between these two things.
What's the diff between misinformation and disinformation?
Misinformation is false information that's not put out in the world with any malevolent motive.
Like people aren't doing it to deceive, right?
They just believe a wrong thing.
Disinformation is a concerted effort to deceive and put out false information, whether it's to sow distrust or to malign someone.
We refer to that in the free speech First Amendment world as defamation.
And then there's this third category that's becoming more popular called
malinformation. Oh yeah. What's that one? Which is true but, I guess, inconvenient or troubling
information. Inconvenient for who? For the people who are accusing someone else of putting out
malinformation. Huh. I hate that category most. Isn't that just like actual information that people
don't like? It's called bad facts in the legal case, right? Yeah, exactly. It's essentially just
information that people don't like. And there was actually this institution, I believe it still exists,
called the Global Disinformation Index.
It was founded in the UK in 2018.
And its idea was to disrupt the business models of publications that publish, in their opinion, misinformation.
And I think they had some partnerships with advertisers and whatnot.
And they labeled different publications as being truthful and not truthful.
And initially, they defined misinformation on their website or disinformation as deliberately
false content designed to deceive.
So that would be disinformation.
But then they kind of broadened their work to a definition that encompasses anything that has, quote, an adversarial narrative.
And you actually have the Global Disinformation Index founder, Claire Melford, explaining that something can be factually accurate but still extremely harmful, which I think on its own is probably true.
There are certain information that can be extremely harmful but is nevertheless true.
But when you're using that as the basis to determine for advertisers where they should or shouldn't advertise, you can see how it could be wielded as a political weapon.
So, you know, it categorized the least dangerous outlets as NPR, AP News (the Associated Press), and the New York Times, and the most dangerous as the New York Post, Reason Magazine, and the Daily Wire, for example.
So you can see how that then becomes.
That is not very objective, is it, right?
And that's the trouble with these terms, like misinformation and disinformation.
and, like, most of all, wow, malinformation.
I didn't even... like, that sounds like such a stretching of the misinformation term
that it's just, like, I could see how that could go wrong in all sorts of ways.
Have you ever felt that the tools for developing decentralized applications
are too restrictive and fail to leverage advancements from traditional software programming?
There's a wide range of expressive building blocks beyond conventional smart contracts
and Solidity development.
Don't waste your time building the basics from scratch and don't limit the potential of your vision.
Cartesi provides powerful and scalable solutions for developers that supercharge app development.
With the Cartesi virtual machine, you can run a full Linux OS and access decades of rich code
libraries and open-source tooling for building in Web3.
And with Cartesi's unique rollup framework, you'll get real-world scaling and computation.
No more competing for Blockspace.
So if you're a developer looking to push the boundaries of what's possible in Web 3,
Cartesi is now offering up to $50,000 in grants.
Head over to Cartesi's grant application page to apply today.
And if you're not a developer, those with staked CTSI can take part in the governance process and vote on whether or not a proposal should be funded.
Make sure you're vote-ready by staking your CTSI before the votes open.
Launching a token?
Don't let complex legal and tax issues slow you down.
Toku provides specialized support to optimize your launch and ensure that you as a founder and your team and your investors get the most tax-efficient outcomes.
The Toku team understands the crypto space inside and out and will ensure your token launch is fully compliant while maximizing tax efficiency.
Toku can connect you with the best attorneys if you need them to make sure that you have the best advice,
and Toku can help to optimize your taxes so you pay the least possible amount of taxes while still maintaining legal compliance.
With Toku's guidance, you can concentrate on building your company while Toku handles the logistics.
Token launches don't have to be complicated.
Talk to Toku today to get a free initial token valuation.
New projects are coming online to the Mantle Layer 2 every single week.
Why is this happening?
Maybe it's because Mantle has been on the frontier of Layer 2 design architecture since it first started building
Mantle DA, powered by technology from EigenDA. Maybe it's because users are coming onto the
Mantle Layer 2 to capture some of the highest yields available in DeFi and to automatically receive
the points and tokens being accrued by the $3 billion Mantle treasury in the Mantle Reward
Station. Maybe it's because the Mantle team is one of those helpful teams to build with,
giving you grants, liquidity support, and venture partners to help bootstrap your mantle application.
Maybe it's all of these reasons all put together. So if you're a dev and you want to build
on one of the best foundations in crypto, or you're a user looking to claim some
ownership on Mantle's DeFi apps, click the link in the show notes to get started with Mantle.
You know, so maybe the last part is we pick apart this thesis, right, is that the government is
the answer to this. So the memetic idea that's spreading is misinformation is a problem.
Social media makes it worse. AI will make it even like more, like worse. And the solution
to this, because the free market can't and won't solve it, you know, it's just greedy capitalists
out there. And so they're doing things for their own motives and it won't solve it. So what do
you do when the free market doesn't solve it? You have to turn to the government. So the government
needs to step in. Let's talk about how the government is stepping in, because they most certainly
are, and increasingly so, they're stepping into the digital speech landscape.
Like, I think, you know, in the 1990s and the early 2000s, in most Western liberal democracies anyway,
if you had to give it kind of a title, it was a laissez-faire approach, you know, the internet's a great,
you know, communication tool. And we are a free speech country. We are a free society. So let's
export it to the world. It's great. It's laissez-faire. It has turned. It has turned in a massive way.
I think toward the end of the 2010s, but certainly into the 2020s. And I want to talk about
some particular notable cases that have actually happened this year and are part of the genesis
and the reason we're having this podcast. So can we talk for a second about the Pavel Durov case?
Yeah. So this is a case where, again, we don't know all of the
facts. But I guess from my perspective, you know, Telegram is a social media messenger tool. So it doesn't
have an algorithmic-based, you know, like social feed as X would. It's basically about, you know,
group chats where people meet. They can be big groups, I believe, like up to 200,000 people, for
example. So are you familiar with that case? Like, the high level is France arrested Pavel Durov. He is
the CEO of Telegram. They said he, his company, was not doing a good job monitoring
all of the group chats, and there was illegal activity going on, you know, terrorism, foreign-adversary-type activity, child pornography, all of the usual suspects, and that he needs to essentially allow government surveillance for these channels. Anyway, they arrested him when he landed in France, of course. He's now out. We'll talk about the outcomes later, but what do you get from this case?
So the thing about Pavel Durov is that he's had run-ins with authoritarian governments in the past, namely Vladimir Putin and Russia; the Russian government tried to get access to information about users of a previous social media company that he had and was seemingly trying to target political dissidents.
It's not like there were allegations of some of the things that he's accused of or the platforms.
Like AKA misinformation, Putin's version of misinformation.
Yes, which is political dissent, essentially.
And so Pavel Durov left and fired up Telegram, which became super popular.
And then at some point, I believe, became a French citizen.
So he lands in France and is arrested.
And the allegations were that he was complicit in, I think the suggestion is trafficking,
drugs, terrorism, all these sorts of things.
Now, I haven't read anywhere that he's actually alleged to have engaged in any of this activity.
And I think the suggestion is that because he didn't turn over user information for people who were accused of doing these things, he was therefore complicit in the activity.
And I just, again, need to level set that we have very little information as to what's actually going on here.
But that's the suggestion.
And for governments across the world, it is standard practice that if illegal things are happening, the owners of the platforms where those illegal things are alleged to have happened have to turn over information about what's going
on. And generally, this is done through subpoenas and whatnot. There is a court process to ensure
that those in power are not abusing their authority to go after, for example, in political
dissidents. Now, you can imagine how in authoritarian countries this power would be used to go
after political dissidents, but in places like the United States and maybe even France, I don't
know much about its legal system, there are checks and balances for these sorts of things.
Now, that isn't to say that the governments never try and abuse their authority. And organizations
like the Electronic Frontier Foundation
urge social media companies
where there are allegations of abuse
to fight these subpoenas.
But Pavel Durov, I guess,
didn't even respond to French requests.
That's the allegation, right?
That's the allegation that he didn't respond
to even requests for this information
and therefore they decided to arrest him.
And now we stand at a point
where I guess he's going to do more moderation.
He is going to turn over this information.
Yeah, what I'm most interested in is like,
so again,
don't have all of the facts of the case. In, you know, free societies, one nice thing is, you know,
our apparatus for finding the truth in cases like this is, like, a court.
Yes. Right? That has to prove beyond a reasonable doubt that the defendant is guilty
of the charges. And they present evidence on either side. That is a truth-finding mechanism. Okay.
So that hasn't happened so we don't have all the facts. But what we can know at this point are
kind of the outcomes. So Pavel is released. He has now agreed to turn over as much information as the
authorities want about everyone in these chat rooms, including IP address, you know, any other
identifying information that they have. So it feels like effectively, if the government asks,
what's going on in this chat room, can you, you know, turn over all of the data, then Telegram
will from this point on. So you have effective government surveillance in all telegram chat rooms,
at least the ones that aren't fully end-to-end encrypted. And that's kind of the outcome. Now Pavel gets
out of jail and you can understand why he wouldn't want to resist this to the extent of becoming
a martyr and going to jail himself. But that's kind of the outcome that we are left with.
And whether you think that is good for free society or not, that is maybe another question
that we could get into later. I don't know if you have any other comments on this,
but there are some other cases I want to run by you too, Nico.
Well, it's just a thought. Can you have a truly private messaging system? Now, not all conversations
on Telegram are end-to-end encrypted, but are we going to get to a place where governments prohibit
end-to-end encryption? I don't know. We still have Signal, right? Yeah. That's fully end-to-end encrypted.
I mean, like, for how long is kind of the question. Nico, how does the Telegram case differ from the
Apple case from, like, years and years ago? I can't remember what the case was, but there was, I think,
a terrorist-type person, and the FBI or CIA wanted to get into this individual's phone,
and Apple was just like, no, uh-uh, not helping you. Do you remember the details around that one? How does that
differ? I don't remember the specific details of that one. FIRE, the organization I help lead,
was just a campus organization for 22 years. And we expanded off campus in June of 2022 to fight for
free speech across America. And so some of those earlier cases I'm less familiar with.
And privacy has some weird nexus with free speech. Like, it's a separate right, but often if
your privacy rights are violated, your free speech rights are violated as well. And if your privacy is
violated, it chills speech, right? If you're being monitored and watched, you're less inclined to
speak freely. So there is some nexus there. But I remember that Apple made a big show of its
commitment to privacy and even put out ads, for example, lauding its own commitment to privacy at
the time. So that's the value proposition that they've been creating over the years for some of
their users. Let's talk about another case. So Brazil banned X inside of Brazil. There was,
like, some activity going on there; some, you know, posts
to the platform were in breach of Brazil's local legislation. I believe a court justice kind of ruled
this way. And the result, the outcome, was a ban of the X platform across all of Brazil,
which is interesting because it was, like, done at the ISP level. And so if a citizen in Brazil
wanted to circumvent this ban, they could use a VPN. But they also blocked that door too,
so that if you use a VPN to access the restricted X platform, you're subject to a fine in the
thousands of dollars, right? So, like, you know, penalizing usage of a VPN to circumvent these
blocks, too. What's your take on that case? Well, that's a complicated case. So it begins
with Brazil essentially appointing this justice to navigate the situation after you had a mob storm
one of Brazil's seats of government. And this justice identified something like 100 X users
who were engaged in misinformation and creating democratic instability and wanted X to take these users off the platform or to otherwise censor them.
And X refused to do so.
Elon Musk claims to be supportive of free speech.
And the next step was that, okay, this justice asked for X to appoint a legal representative in the country.
You're doing business in the country.
you need to have a legal representative with whom the government can communicate.
This is pretty common across the world.
If you're going to do business someplace, there needs to be someone with whom the government can communicate.
And my understanding is that X refused to appoint a legal representative in the country.
And that's what triggered the X ban.
And the concern here was that you appoint a legal representative, you identify your employees in that country.
They're going to be targeted for political persecution.
And Elon also had said that he is willing, and the company is willing, to abide by the laws of any individual country with regard to its content moderation policies.
But he thought that this order to take down these user accounts violated the country's laws.
So that's why he didn't comply with them.
In other countries, there have been takedown requests, most notably and famously in India, for example, where presumably he thought the takedown requests complied with the law,
even though in that case it was a documentary critical of Narendra Modi.
So that's what kind of precipitated this fight.
And then you're right, there was an effort to target the ISPs to prevent them from hosting X.
I think Starlink, one of Elon Musk's other companies, owned by SpaceX, even complied with that after some initial hemming and hawing over it.
And then there was the targeting, of course, famously, of the VPN.
And then like the revoking of the VPN order, but it was actually still in place.
So it's a little bit confusing.
But my understanding now is that X is complying with the order, has appointed a legal representative, and we'll see how it shakes out.
Elon Musk is picking a lot of fights with a lot of these governments across the world.
We got involved in one in Australia, for example.
So it's interesting, right?
So it's a question of, does Brazil, as a sovereign nation, have the ability to tell some foreign technology provider that they have to comply with its legislation, even if that legislation sort of,
you know, chills speech and, you know, labels political dissidents? There's a question between kind of the
sovereign powers of a nation to do this versus, like, what Elon Musk should do. Anyway, people have
taken different sides of this issue. What's your take on it? There is this argument about
national sovereignty and digital sovereignty. And digital sovereignty needs to be a part of national
sovereignty, and this idea that you can police your own information ecosystem just like you can
police other things within your borders.
This is a challenge, right?
Because digital sovereignty, what does that look like in Russia?
What does that look like in China?
What does that look like in authoritarian countries?
And it looks like what the other forms of sovereignty look like in those countries, repression,
censorship, tyranny in certain cases.
Now, it's just a question of whether you trust your government to do it consistent with human
rights. This has been a theme that we've kind of explored on Bankless in different episodes, which is
the balkanization of the internet, where the power, the might, of the nation-state is kind of working
its way up into the internet, just because, you know, the monopoly on violence, being able to put
people in jail is pretty powerful. And at the end of the day, technology and technology leaders,
like Elon Musk, exist inside of some border somewhere. And we've seen a fracturing of the internet,
where the internet started as, like, one global decentralized network of nodes, and it's starting to become, you know... there is the Chinese internet, there is the United States internet. The internet just looks different wherever you go in the world. And I think that's kind of, like, a secular trend that has been crescendoing over the years. And I think, like, free speech is starting to become a very important part of that story. Yeah, you have to ask yourself, what do you think of the internet? I mean, I think of it as this great tool to access the world's information.
But some governments don't think it's a great tool because some of the world's information they don't want within their own borders because it can be destabilizing.
And so I've seen this really start to pick up in the last two or three years.
And it doesn't end at the borders.
That's the thing.
So I had mentioned that we got involved in one of Elon's earlier fights in Australia, where they had this eSafety Commission that they established.
It ordered a takedown, on X, of this live stream that occurred where a preacher got stabbed.
Okay.
And yeah, violent, violent video, of course.
Sure.
Someone gets stabbed.
But the preacher himself didn't want the video taken down.
You know, it's an example of something that happened in the world and he wants to show what radicalism can look like.
Elon Musk refused to do so.
Or I think he might have done so within the borders of Australia.
But Australia was arguing that because Australians, subject to its laws, can use VPNs and access this information outside of Australia.
It wasn't enough for X to simply take the video down in Australia.
It needed to be a global takedown.
Wow.
So the Australian rules.
Claiming sovereignty of the entire internet.
Of the world, the internet.
Yeah.
So the internet has to be beholden to the rules of this one country.
Because of VPNs and there are even some arguments because Australians can travel outside of Australia's borders
and might be able to access this information there.
What was their case for this being a video that should be taken down in the first place?
Like, why is a stabbing video that big of a deal?
Because it might inspire copycat attacks or it might inspire radicalism.
So weak.
Yeah, yeah.
So we ended up fighting that, intervening in the lawsuit.
It's the first time FIRE has ever intervened in a foreign court, along with the Electronic Frontier Foundation.
We were actually able to win there.
So the global takedown order was reversed.
But you can see this argument gaining credence elsewhere, too.
in our recently globalized society with international travel and VPNs.
So, you know, we're getting closer. We've moved from, you know, France to Brazil to Australia,
and now maybe the UK. Another thing that's kind of caught my attention recently, and I haven't done a deep dive on it,
but is some UK speech laws that are coming to effect. I believe this is called,
the primary legislation is called the Online Safety Act. It was passed in October. It was not set to be enforced until later.
But there have been cases of people tweeting, posting on social media, and actually being
arrested in the UK. And you guys can go search for some of these cases. One
example is a guy by the name of Jordan Parlour, 28, who was sentenced to 20 months in prison after pleading
guilty to inciting racial hatred through Facebook posts that called for an attack on a hotel
housing refugees and asylum seekers. I mean, it's kind of an incitement to violence.
There's another case where somebody, Dimitrie Stoica, was arrested for making a TikTok video
that pretended right-wing rioters were chasing him, even though he claimed it
was a joke. That's a little bit different. Maybe what's the line between kind of like satire
and, you know, something that violates the UK's laws here? So there are these cases coming out
about the UK arresting people for their posts and spreading various forms of disinformation,
misinformation, or maybe malinformation. Have you followed this at all? I followed it a little
bit. The challenge when trying to parse all these stories is, using the United States'
First Amendment standards, where was it just political speech, satire, parody, and where
was it actually incitement to imminent lawless action? And I say the United States standards because I think
the United States has largely gotten it right on free expression, on how to have free expression
in the real world, creating carve-outs for things like defamation or incitement to imminent lawless
action, or true threats for that matter. And so I tend to use the American philosophy surrounding
freedom of speech to define freedom of speech. And so if you're
kind of exporting that definition and trying to analyze the situation in the UK, it's a little bit
difficult because there is this exception, the incitement of imminent lawless action.
But in the United States, it's a very high bar, right?
It needs to be likely to cause the imminent lawless action, and it needs to do so imminently.
It's not enough to simply call for lawless action at some future date unless it's part of this
broader category of criminal conspiracy.
And so you can see how in some of these situations, there were actual calls, incitements.
And this is what you saw in Rwanda, for example, where you had folks on the radio saying,
we need to kill them, you know, bring your machetes. They're at this location. We need to do it now.
But some of the examples you're seeing in the UK are nothing like that. It's satire. It's parody.
You had that example from years ago with the guy doing the Nazi salute or making his pug do the Nazi salute, for example.
And the UK has long had more permissiveness on speech restrictions than the United States.
You have this Public Order Act, for example. And you have this statute that you
cited about incitement to racial hatred. The United States doesn't have things like that,
and our First Amendment would prohibit them. But in any case, you need to look at the context of the
facts on the ground, look at the facts on the ground, and then determine whether this is something
that you think should be protected under free speech standards or something that can reasonably
be restricted. And in the United States and in the UK, it's going to be different.
Yeah, it really feels like some of these non-U.S. Western liberal democracies are kind of going
down a very dangerous path with respect to their misinformation laws and the legislation that's
getting pushed forward. There's an Australian misinformation bill. It's actually a very long bill,
but I had, you know, AI kind of summarize it for me. And this seems incredibly
dystopian because what it's doing is requiring the surveilling of search engine results
and social media results. You have to report all of this to the government. And if a
government agency deems something misinformation or disinformation, they can order a takedown
and also fine these social media companies, like, exorbitant fines as well. So that creates
kind of like a chilling effect too. And it's kind of like if it's not in your democracy,
it's coming to a democracy near you. Let's talk about maybe the last case, which is the United
States. Yeah. All right. So there are some speech bills that I believe are being drafted
in California. And of course, California is subject to the First Amendment, operates under
the Constitution as a protocol. So what's going on with these California speech bills? And, like,
is this the first sort of misinformation legislation that we've seen in the U.S.? But, like,
tell us about this case. It's definitely the most prominent misinformation legislation that we've
seen in the United States. And Governor Gavin Newsom signed recently into law three bills
governing disinformation. And they have some concerning language regarding free speech. So there's
Assembly Bill 2839, which bans sharing deceptive, digitally modified content about candidates
for office, so we're talking about elections here, for any purpose.
That means even sharing content that criticizes the post or points out that it's fake.
So you can't even criticize the content and share it that way, or point out that it's fake.
The law also requires, for example, satire and parody to be labeled.
This is like getting onto a stage as a stand-up comedian saying, I'm about to tell you a joke.
Oh, my God.
Yeah, yeah.
That's not very funny.
No, it's not very funny.
It's not very funny.
But we have to remember where this law came from.
It came after Elon Musk shared a parody video of Kamala Harris, where she appears to be saying things that nobody is actually going to believe Kamala Harris actually said, because right at the start, she's talking about how she's a DEI hire and Joe Biden is senile.
No one is actually confused about this thing.
It's clearly parody for anyone with a brain.
Yes, for anyone with a brain.
But Gavin Newsom and Elon Musk, who shared it, got into this Twitter spat and Gavin Newsom said he was going to sign a law that forbade this sort of thing.
So that's one law.
That's a restriction on the users, right?
You can be punished for sharing deceptive content about candidates, even if you're just pointing it out or criticizing it.
And then you also have to label your comedy.
And by the way, what's the punishment?
They're going to show up to your house for, like, a...
Malinformation ex post?
Revoke your Twitter account.
Well, I'd have to go look deeper into the statute to figure out what the punishment is.
I don't have that right in front of me.
But there was another law.
He signed a package of three laws that places restrictions on the platforms themselves.
So the previous one was about the users.
This one, AB 2655, requires platforms to block deceptive content about politicians, even if the content isn't actually defamatory.
It says that platforms have to respond to every single public complaint about deceptive
content within 36 hours.
Just think about that.
I mean, first of all, that just kills startups, right?
If you're trying to start a new social media company.
I mean, no, I would just use a ChatGPT response, just like, yeah, responded.
DDoS the agency.
Yeah.
But like, wow, that's a lot of administrative
overhead. Yeah. And then once you have
identified content and removed it,
they also have to filter and block any content
that's substantially similar to that previously removed.
What? That's hard.
So just talk about the kind of economic and
resource burden on companies to say nothing about the larger First Amendment concern and the
Section 230 concern. I don't know how much your listeners know about Section 230, but it insulates
social media companies and any other Internet service provider from liability based on content
posted by its users. This is landmark legislation from like the 1990s, like 1996 or something
that basically enabled the Internet to flourish in the United States. Yes, yes. You wouldn't have the
modern internet without it when you think about the millions and billions of posts that some of
these platforms receive, the idea that the platforms themselves can be liable for every single
piece of content that's posted.
You just would never get social media to develop because the legal liability and the
risk would just be too high.
And when you think about it from a first principle standpoint, the person who should be
liable for posting that defamatory content or that content that's a true threat should be
the person who actually posted it, right?
And this is where you kind of circle back to that idea surrounding subpoenas and the
government's ability to go through the courts to get access to information where things
violate law.
But you can see how this law, the second law, will be wielded by political opponents to go after
each other, right?
Just report something as being defamatory or deceptive to the platform.
And the platform, you know, if they're inundated with thousands or tens of thousands
or hundreds of thousands of these, they're going to take the risk-averse path, right?
Which is just to take it down.
It's just easier to do it that way.
And I am skeptical that this law will survive a legal challenge, either under Section 230 or the Constitution for that matter.
And the previous law that we talked about that places restrictions on users is already being challenged.
The Babylon Bee is the plaintiff in that case, which makes sense, right?
Because it engages in satire and parody.
And they might have even been involved in that initial Kamala Harris video that precipitated the law, but I would have to double check on that.
Okay.
So there's already misinformation legislation coming to various states.
And you sort of wonder if this kind of like concern or fear begins to spread where this becomes some sort of federal legislation.
I suppose we are still protected by the First Amendment though.
And all of this has to be ultimately settled in the court system.
I don't know what you do.
If you're not in a democracy that has some sort of, you know, ironclad, maybe it's not ironclad, but strong First Amendment level protection.
One other comment I want to make about this is that this is really targeted at, you know, social media, it seems like.
We were just talking earlier about using AIs as, like, models to kind of find out information and
help us as individuals trying to discern what's reality and what's not in the world,
sort through that. You can imagine misinformation legislation applying directly to AI data
models, for instance. So the results coming from a ChatGPT or from a Perplexity are, like, completely
pre-censored, right? And any, like, malinformation that these things produce is just kind of
blocked from a user ever being able to access it. So social media is kind of where these
things start, but it ends up embedded deeply in the AI models themselves, I think. Well, you just
walked into a First Amendment thicket there, Ryan. There's an active debate surrounding whether the
outputs from these artificial intelligence or these large language models are protected by the First
Amendment. And under current case law, I think it's pretty clear that they would be, to the extent
code is speech and they're the outcome of a human-designed product; that's looking at it from the
speaker's perspective. You also have considerations as the receiver of information: the freedom
to receive information is also protected under the First Amendment. But this is a
place that's kind of evolving in the First Amendment community as cases wind their way through
the courts. Nico, I want to open up the conversation of what a government can do. What are the
progressive, productive things that a government can take action on? We've established in this
conversation that what we don't want is some sort of top-down government control over a bottom-up
phenomenon, right? Speech is this bottom-up phenomenon. And I kind of want to make the analogy to
markets here: markets are a bottom-up phenomenon. We generally want our markets unadulterated.
But there are some things that the government does to protect markets, like antitrust laws.
I personally think antitrust laws are great. And the reason why they're great is because they are
pro-markets. That's a pro-market sort of stance that we use the government to
help kind of enforce. And I think we kind of want to extend that same level of control over the
marketplace of speech, like enforcing free speech. Freedom of speech is some sort of equivalent
of like antitrust in this speech sense. So maybe that's one perspective. There's also like this
idea of, like, private organizations that attempt to discern truth. Earlier in the podcast, you kind of
alluded to some sort of, like, private institution. The Global Disinformation Index. Exactly. Yeah.
And what they are doing is they are attempting to discern truth as a public service. And, you know,
private institutions, I think, are going to be better at discerning truth than a, you know,
government, because they try, right? And they can probably do it better than any
organization that's not trying. But the problem here is that if they become so good at it
based on merits, society might become reliant on them. And then they become a place of capture and
control, just because they are a scaffolding of society, right? And so in that event, we want a free market
competition of truth-telling organizations. Maybe these are, you know, Fox, MSNBC, CNN,
Al Jazeera. These are media organizations. And also lately, there's been this, like, Cambrian explosion
of independent media channels: Pirate Wires, Breaking Points with Saagar and Krystal, Bankless, an
independent media organization, but also Alex Jones, you know, flat earthers and their Twitter
accounts. Elon Musk wants to turn every single X account into some sort of independent media
organization. And so with this context, right, we don't want top-down government control.
We want to protect freedom of speech. We want to incentivize bottom-up truth-telling.
So with that kind of context, what can a government do within these sorts of parameters? What
would we want to see out of governments in this modern age of free speech protections? The government
has the bully pulpit, right? It is free to put its own perspective out there in whatever way
it deems fit. Now, it can't pressure private companies to censor, which was the subject of
some Supreme Court cases this past term. But it can go out there and exercise the bully pulpit
and put its perspective out there. The challenge is right now, people don't trust the government.
Why don't people trust the government?
I don't know, maybe because it's gotten things wrong in the past.
We were talking earlier about weapons of mass destruction, CIA black sites, Vietnam War.
I mean, the list goes on and on historically.
Some of the arguments put forth by government have just lost in the marketplace of ideas.
I don't know how the government, though, then regains trust through censorship.
So my answer to your question is probably unsatisfactory, because I think there is little it can do through coercion when you're talking about policing an information
system. The First Amendment, John Kerry is right, is a considerable barrier there, and I think
justifiably so. But it can use the same moral suasion that the rest of us use to try and bend
the public discussion in the direction it wants to. I don't know. What do you guys think?
I think it's a really hard problem. We had, you know, Yuval Noah Harari on the podcast recently.
And he was very much against this idea that, you know, when there's a lack of
truth out there, we just need to inject more information into the information sphere. He's
against that idea because he thinks that, like, in the modern day and age, the ability to
produce information is just, like, so cheap and easy. It's like, you know, our society is malnourished,
so let's inject them with even more fast food, is his metaphor. And he rejects that notion.
I also don't think his answer to that problem is, like, good. It's just kind of a cop-out
of, well, we need, like, government institutions, we need some sort of, like, centralized authority to really
protect speech, which seems like a good analysis with a very unfortunate, like, conclusion.
Yeah, I mean, I think our answer would be, like, just let free markets kind of, like,
sort this out because they will, given enough time, right? And this is the messy process of them
sorting it out. Given enough time and pain, though, I think is, like, what people are concerned about.
Some pain. But a lot of people don't want to go through the fracture period and the chaos period to come out
the other side and have, like, come up with
kind of free-market protocols. Because Yuval says this: what we need is, like, kind
of self-corrective mechanisms and truth-finding mechanisms. I completely believe that we will
produce those in the digital age, new sorts of institutions and protocols on the internet that preserve
this. But it seems like the current temperature is to take the shortcut and just
say, oh, the solution to all of this chaos is the government does it. And I am terrified. I'm like
personally, like, terrified of that outcome. I think there are things like, you know, on the
crypto side, we love exploring kind of, like, decentralized markets. So, like, one tool that's
pretty interesting. You mentioned community notes earlier. That's a fantastic protocol. It's kind of
like a truth-finding, like, protocol in the X platform. Protocols like that are smart. We also
really like prediction markets. Yes. For example, you know, there's a tool in crypto called
Polymarket, which is basically, it's different than just sharing information. You're kind of
betting on the outcome here in kind of a prediction market. So you have stake-weighted claims on the
information. And these can be tools that basically find truth better than established information
sources. It's very interesting that even old media like CNN and such are referring to
Polymarket for different election odds, because they recognize it's a better truth
source than even polling data at this point in time. So these bottom-up solutions are
very appealing to me. And I even think some of the fracturing of our information landscape is like
going to be helpful in the long run. The thing for me, Nico, is I love the ability to, if I don't
like what's going on in X, I could move to a different social media platform. I could move to threads
or I can move to your truth social. And it's my decision as a user. I get to go pick, like free market.
With the government, I don't get to pick. I am much more terrified of the people in power with the
guns than I am of, you know, a billionaire like Elon Musk. Because Elon Musk, I could just leave his
platform if I don't like it. He can't arrest me. He can't put me in jail. You know, he can't fine me for
anything for leaving his platform. But we don't have a silver bullet solution, which is, even at the
end of this, as I'm saying these things, I can't point to you, this is the way out. Yeah. Well, again,
you kind of need to accept the premise that it's a big problem in the first place. And I think that's
where I would kind of diverge from seemingly the rest of the world.
I think we've always had mis and disinformation.
You know, the Catholic Church during the period of the Reformation,
which was precipitated by another technological revolution, the printing press.
I assume the Catholic Church, which governed much of Europe at the time,
thought that those who were making the reformist arguments were engaged in mis- or disinformation.
Yeah, they called it blasphemy, right?
And they just have trials to excommunicate you.
Yeah.
And that led, unfortunately, to the Thirty Years' War,
which led to 8 million dead in Europe.
So there was definitely an anarchic period there,
but are we worse off for having the printing press in the long term?
I guess you can run the utilitarian tradeoff there.
I don't know, but it was surely a precipitating factor in the Reformation,
which led to the Thirty Years' War.
So we're in that anarchic period right now where people are trying,
hopefully we don't have a Thirty Years' War,
but we're in that period where we're trying to figure out how we create a healthy media diet for ourselves
in this new and evolving technological economy.
And I think prediction markets are great.
Either we have that saying, put your money where your mouth is, that's one way to do it.
But again, I just kind of questioned the premise.
There was a scholarly article in Nature in June of this year, which found that there's a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information.
And I'm quoting from the article here.
so that the exposure to misinformation that we've been talking about is concentrated among a narrow fringe with strong motivations to seek out such information.
And this kind of lends credence to that horseshoe theory of politics, right, where it's like the political poles, just like a horseshoe, dictate the direction that the country goes or the parties go.
One of the more influential classes that I took in college, I'm not a math guy.
You don't even want to know what my math score was on the ACT.
I didn't take the SAT because I'm from the Midwest.
But I took statistics, which is one of the most challenging courses for me.
And something a professor told me really stuck with me.
He showed me the normal curve, right, which is this bell curve, which has these thin tails and the fat middle.
And most people and most natural phenomena fall in that thick middle of the bell curve right there.
But anywhere in society, you're going to see the tails.
The tails are just going to exist, right?
So you're going to always have in society people who are super credulous, who are just going to believe
stupid things that are out there, who are going to be inclined to believe those sorts of things.
Now, the situation now is that they have access to a wider audience than they once did.
That's a challenge, right?
But they're always going to exist, and I don't know that you can censor your way out of that problem.
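The professor's bell-curve point can be made concrete with a quick sketch (a back-of-the-envelope illustration, not anything cited in the episode): however wide you draw the "fat middle," the tails never reach zero.

```python
from statistics import NormalDist

# Standard normal distribution: the "fat middle" holds most of the mass,
# but some fraction always remains out in the two tails.
dist = NormalDist(mu=0.0, sigma=1.0)

for k in (1, 2, 3):
    middle = dist.cdf(k) - dist.cdf(-k)  # mass within k standard deviations
    tails = 1.0 - middle                 # mass left in both tails combined
    print(f"within {k} sigma: {middle:.4f}  in the tails: {tails:.4f}")
```

Roughly 68%, 95%, and 99.7% of a normal population sits within one, two, and three standard deviations, which is the thick middle the professor described; the leftover 32%, 5%, and 0.3% are the tails that, as Nico says, will always exist.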
My dear friend Jonathan Rauch, who wrote a book, Kindly Inquisitors, which I recommend to everyone.
It's about free expression, says that censorship is like breaking the thermometer.
It doesn't change the temperature.
You just don't know what the temperature is anymore, right?
And so censorship doesn't change things.
It just provides you with less information about the world as it actually is.
And if you're someone who wants to navigate the world, you're trying to gather as much information for yourself as you can.
And censorship prevents you from doing that.
It kind of sounds like we are just advocating for this idea of like, well, if we're going through hell, we just got to keep going.
We just got to get through it.
I feel like maybe we'll discover some solutions along the way.
The crypto community, Nico, is very excited about this thing called Polymarket, which is this, like, alternative source of truth.
It's this market-based truth.
Like, what does the market say?
And the market is able to, like, produce some sort of, like, you know, oracle about
the likelihood of events.
So this is like a modern solution that is just kind of now gaining steam.
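The "stake-weighted claims" idea mentioned earlier can be sketched in a few lines. This is a hypothetical illustration: the function names and numbers are invented, and this is not Polymarket's actual mechanism or API.

```python
# Hypothetical sketch of two prediction-market ideas: reading a binary
# market's price as an implied probability, and aggregating bets where
# bigger stakes pull the consensus harder.

def implied_probability(yes_price_cents: float) -> float:
    """A YES share paying $1 if the event happens, trading at
    `yes_price_cents`, implies roughly that probability of the event."""
    return yes_price_cents / 100.0

def stake_weighted_belief(bets: list[tuple[float, float]]) -> float:
    """Aggregate (stake, believed_probability) pairs: putting more money
    where your mouth is moves the consensus more."""
    total_stake = sum(stake for stake, _ in bets)
    return sum(stake * p for stake, p in bets) / total_stake

bets = [(100.0, 0.70), (300.0, 0.50), (50.0, 0.90)]
print(f"implied probability at 62c: {implied_probability(62):.2f}")
print(f"stake-weighted consensus:   {stake_weighted_belief(bets):.3f}")
```

The design point is the incentive: unlike a free post, a bet costs the bettor something when they're wrong, which is why market odds can track outcomes better than talk.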
But otherwise, it kind of just seems like, man, we're just saying there's no great solution
here, but we just got to keep on powering through.
Well, yeah.
If you, again, accept the premise that there is this big problem here, which I'm not sure
I do, but taking for granted that there is, I think the argument that people most likely
posit on the other side is, you know, election
denialism, which can have effects for your democracy, right.
But my dear friend, Andy Mills and Matthew Bull, they have this podcast called The Reflector,
and election denialism has been with us throughout much of American history, particularly
in the last two to three decades, going back to that disputed election in the early 2000s.
So that's one of the things you find in advocating for free speech is a lot of the arguments
for censorship you've seen throughout human history, and people think
they're new and novel, but they're really not. And it never works. I mean, if your objective is to
preserve an open, free society that kind of competes on that basis, you start down the censorship
path, and it just like, it never leads to good outcomes. And it increases in severity. So I guess the
message is, yeah, as David said, we just have to keep on going, get through this anarchic period.
And in the meanwhile, it seems like there are too few people saying, hey, hold the line on the
First Amendment, even as we enter into this digital age. But
I guess we have to shout that even louder and make our arguments and compete in the marketplace
of ideas, you know, in order to kind of win that battle and just hope, just hope it holds.
I guess that's what you're doing at fire.
Yep.
Yep.
You've seen throughout human history that when people have a fear, they're willing to sacrifice their rights
at that altar.
Whether it's hate speech or it's indecency or it's national security, you can go through the
litany of them or blasphemy, you mentioned.
People are willing to sacrifice their rights at that altar.
And that's why our Bill of Rights is fundamentally an anti-democratic document.
It says no matter what the majority wants to do, the government cannot do these things.
Cannot violate your religious liberty.
It cannot violate your free speech rights.
It cannot violate your due process rights.
That's the line that holds a lot of these sort of populist tendencies back, these moral panics, back.
And I'm actually going to London here in a few weeks to make the case that
the UK should adopt something like the First Amendment, which it doesn't have.
Right.
Because otherwise you're just at the whims of whatever the latest panic is about.
And I see free speech and my access to information about how to live in this world as a fundamental human right that the government has no right to violate.
And if you accept that premise, then I think you arrive at the conclusion that I do, which is, yeah, sure, misinformation and disinformation might be a problem.
But the bigger problem is violating that fundamental human right.
You don't have a human right to be free from mis and disinformation, but you do have a human right to be free from government censorship.
Well, completely agree with you there, Nico.
And thank you for fighting the good fight.
And keep on going for our part.
We're doing this on the tech side, you know, so technologies like prediction markets and censorship resistant platforms.
And then, you know, like also encryption, which is how you keep that communication, the digital world actually private, are incredibly important for that battle ahead.
too. So thank you for what you're doing. We're going to keep it up on our side.
Thanks, Ryan. Thanks, David. This was a lot of fun.
Bankless Nation, got to end with this, of course. Free speech is risky, right? But like,
it's worth it. Where else are we going? If we don't have that, you could lose what you put in.
But we're headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the bankless journey. Thanks a lot.
