Tech Won't Save Us - A Better Internet Requires Ending the Monopolies w/ Cory Doctorow
Episode Date: September 10, 2020

Paris Marx is joined by Cory Doctorow to discuss how the problems we associate with Big Tech aren't the result of mind-control systems, but corporate consolidation. Cory argues we need to stop buying the overblown sales pitch, stop collecting so much data, and enforce antitrust legislation against the tech monopolies.

Cory Doctorow is a science fiction author, activist, and journalist. His most recent non-fiction book, "How to Destroy Surveillance Capitalism," is available for free at OneZero. You can also preorder his next fiction book, "Attack Surface," on Kickstarter or anywhere else books are sold. Cory has a daily blog at Pluralistic.net and you can follow him on Twitter as @doctorow.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter.

Support the show
Transcript
That word control is the key one there, right? We spend a lot of time focused on what technology
does. What we should really focus on is who it does it for and who it does it to.
Hello and welcome to Tech Won't Save Us, a podcast that knows we need to take on the power of the tech giants before we can build a better future of technology.
I'm your host, Paris Marx, and today I'm joined by Cory Doctorow.
Cory is a science fiction author, activist, and journalist.
His most recent book, How to Destroy Surveillance Capitalism, is available for free on OneZero.
You can find the link in the show notes.
His new fiction book, Attack Surface, comes out on October 13th. And Cory's actually running a
Kickstarter right now where you can pre-order a digital or audio version. And you can also go to
pluralistic.net where you can find Cory's daily blog. Today on the show, we talk about how we
shouldn't see the sales copy of the tech giants and what their PR firms put out as an accurate reflection of what they can actually do.
And that if we ever really want to address the problems that come with big tech, we need to break it up and take on its power.
This is a really insightful interview, and I think you're really going to enjoy it.
If you like our conversation, please leave a five-star review on Apple Podcasts and make sure to share it with any friends or colleagues who you think
would enjoy it. And if you want to support the work that I put into making the show possible,
you can go to patreon.com slash techwontsaveus and become a supporter. Enjoy the conversation.
Cory, welcome to Tech Won't Save Us. Thank you very much. It's a pleasure to be on.
Ah, it's great to speak with you. In the book,
you outline how you see technology through the lenses of idealism and materialism. And so I was
wondering if you could start by explaining those two concepts and how they help you to understand
surveillance capitalism, but also to kind of arrive at your broader perspective on what's
happening with technology in this moment. Well, I guess, you know, there's a prevailing story around the world today that the reason
that people seem so unreasonable, and of course, what constitutes unreasonable is highly idiosyncratic,
but the reason that people who disagree with you are so unreasonable is because something
has happened to them with tech.
Tech has put them in a filter bubble.
Tech has allowed them to be manipulated by unethical people who figured out how to worm their way past their critical defenses and get them to believe absurd things, to doubt the curvature of the earth, not all that infrequently. And so there is definitely
something going on that is causing people to fracture, not just in what they believe,
but also if you ever argued with someone you disagree with, you've probably noticed that
we've stopped agreeing on how we know whether something is true, right? You might say, well,
look at this scientific study, and they'll say, that scientific study is bogus, it's corrupt, I don't believe in
that kind of study or in studies altogether or what have you. And if we're going to understand
where this comes from, we kind of have two choices, right? We can either say that the reason that
these ideas are spreading now is because the people who are promulgating them
are really good at convincing people, maybe because they've got these technological assists
in the form of persuasive technology. Or there's a material explanation that something has changed
in the lives of the people that makes explanations about conspiracies seem more plausible.
And the idealistic explanation,
it breaks down into two possibilities. One is that we've got these gifted, once-in-a-generation
orators who have figured out how to convince people about the flat Earth just through the
sheer power of their rhetoric. And I think that's definitely not true. If you tune into flat Earth
podcasts or eugenics podcasts or anti-vax podcasts, they're clearly very sincere in their belief and very alarmed and what have you.
But they're not fantastic speakers, right?
This is not the Gettysburg Address.
Alex Jones is not an incredible speaker.
And so I think that the next possibility that we have to dispense with is whether it's just normal, mediocre people who believe stupid things, but they somehow got their hands on this super weapon that allows them to persuade other people.
And, you know, the short version of that is like Facebook developed a mind control ray
to help them sell fidget spinners.
And then Cambridge Analytica used it to make your uncle racist.
And I think that the source for that, for that belief, the best evidence we have for
how good Facebook's persuasive
tools are or Google's or any of the other ad tech persuasive tools are, is the companies
themselves.
And it's not, by and large, peer-reviewed research out of their R&D departments.
It's their marketing claims, right?
When they sit down with a potential advertiser, they say, you know, if you just trust us and
give us your money, we will sell the heck
out of whatever it is you need to sell, whether that advertiser is a politician or someone selling
fidget spinners or refrigerators or whatever. And sales copy is not a very reliable source
of information. Historically, there's a kind of self interest in padding the ledger there and
overstating the case.
And where there's like some peer-reviewed stuff, it's pretty thin. Like a lot of people will point
to Facebook did this study where they showed that by intervening in like 20 million people's
Facebook feeds, they managed to get a couple hundred thousand more of them than they expected
to go out and vote on election day a couple of years ago. And they say, well, look at this. This is evidence that Facebook can really make a difference. And it's true that a
couple hundred thousand votes in a tight election, it's an important number, right? That can sway the
election. But we're talking about less than half a percent effect size. This is a very small,
persuasive effect. It's not the kind of thing that tips over QAnon into a mass movement.
It's instead something that has to be wielded like a scalpel to make differences when everything
hangs in the balance, when there's a very slim margin of opportunity. And so then we come to
the material possibility that maybe something changed in the world that makes conspiracism
plausible. And this is where I come down, no surprise. And I think that what's happened in the world is that we have more conspiracies.
And the conspiracies that we have more of, usually we just call them corruption, right? Like we have
more instances in which industries are really concentrated, and they do stupid things that are
harmful. And the regulator says, eh, it seems okay to me,
and they let them get away with it. And, you know, there's a lot of times that that has happened in
pretty recent history, you know, the Boeing 737 MAX, and the spread of OxyContin and the opioid
epidemic. And, you know, we see regulators being less and less trustworthy, and more and more in
the tank for special interests, usually the
industries they're regulating, but sometimes political authorities. So, you know, one day,
a couple of weeks ago, the CDC got out of bed and said, I guess nobody should get tested for
coronavirus anymore, and put out official warnings saying, even if you've been exposed to someone
who has tested positive for coronavirus, you don't need to test unless your state tells you to.
And that's like clearly stupid advice. And all the epidemiologists who weren't trying to do things that were politically favored by the administration said, this is really bad practice and you're going
to murder people with it. And that's the thing is that bad corporate conduct and bad regulatory
outcomes, like they really traumatize people. People die as a
result of them. The opioid epidemic killed more Americans than died in the Vietnam War.
And so if you've been traumatized by a conspiracy, and your takeaway from that is that the way that
we know what's true is no longer trustworthy, you can't just read the studies or listen to
the experts who've parsed through the studies to tell you which ones are more reliable, then you're in a kind of void where you know that
bad policy can kill you or traumatize you or kill the people you love. But you also don't know how
to figure out what's true or not. And so what you end up doing is just listening to people who sound
like they know what they're talking about. And we go from kind of the rule of law, where we have the scientific method, and we have a process,
and we have a regulator, and they are neutral, and they have to recuse themselves if they have
a conflict of interest, and they have a process for rethinking their outcomes if there's new
evidence that comes to light. And we go to the rule of man, right? We go to the, I believe the
people who say the things that sound true to me. And I had an argument with a really beloved relative a couple of weeks ago who thinks that coronavirus is a hoax. And at the end of this argument, they ended it by saying, you have your truth and I have mine. And that's where we land if we believe in material circumstances as the cause of this. And of course, one of the industries that's
very concentrated and one of the industries that really harms people because its regulators are
asleep at the switch is tech. Tech is the source of enormous trouble for us. It's just not mind
control that makes tech dangerous to us. It's industrial concentration and a lack of regulatory
oversight that makes tech dangerous to us.
I think it's a really compelling point.
And, you know, what you describe there really shows the potential damage that it can have,
you know, especially when you talk about the COVID-19 pandemic, because it's not something where, you know, there are two different truths.
There is a very particular truth that people need to understand.
And, you know, going back to what you said about the sales copy and using that
as kind of the measure of what these technologies can actually do. I think one of the real values of
the book is that you challenge this notion that more data is always better. Because even though
there's this growing backlash to big tech and to data capture in particular, I feel like some people
still feel that sacrificing their data, you know, is necessary to have some good tech tools, right?
But your argument shows us that isn't necessarily true. The sales pitch about the power of collecting
all of this data isn't as true as they make it seem. So what does that mean for how we should
understand the role of data, and what possibilities does it hold for how we should think about,
you know, how data should be handled in the future?
I think that there's like two different ways that we need to talk about this, right? One is
we can ask ourselves what the companies can do with data. And the other is we can ask ourselves,
how can the data be used to harm us? And there are two
separate questions, although they're related, right? If it turns out that the companies can do
something really, really fantastic for all of us with the data, and the harms are pretty minor,
we might decide that some harm is worth it. You could imagine very large or very small values for
either of those, right? That maybe the data causes no harm and is really useful. Maybe the data is
really harmful and not very useful. And so let's pick apart those two questions. So what can data do for companies?
Well, one of the things that the data does for companies is it gives them a story to tell to
investors and analysts and shareholders and potential rivals that are newly entering the
market about why their dominance will be perpetual and why you shouldn't
even bother trying to unseat them, right? We have 20 years worth of data on people's behavior.
You will never catch up with us. Don't even bother trying. When we make an insulting acquisition
offer to your nascent competitor, you should just accept it because without our data trove,
you're not going to be able to do anything comparable to what we've done. And I think that that is a very dubious proposition.
There's definitely things where having a really big data set can help. Clearly, like training a
speech recognition model, having a lot of speech is better than having less speech. Same with like
a facial recognition model. But for things like prediction or targeting, knowing where I was 20
years ago or what I searched for 20 years ago is of very, very narrow value to firms, right?
You know, maybe if I bought a roof 20 years ago and you know that roofs wear out about every 20
years, that's like one sales lead you can generate out of a 20-year-old data set. But for the most part, that data gets really stale really quickly.
And remember that what machine learning is, what statistical inference is, is a way of
telling you what things used to be like in case they are still like that.
When you type some words into your phone's keyboard and it tells you some words that
it thinks you might want to type next, what it's really saying is, if you are still typing in the manner to which you are accustomed
to typing, if you are still wanting to follow this word with that word the way you have done before,
then I'm here for you. I'm ready. I've got a prediction. So it only works if the future is
like the past. And a lot of the future is like the past, but not all of the future is like the past. And certainly the further back you go, the less the present moment is comparable to it,
the less insight you have into the present moment.
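To make that concrete, here is a minimal sketch of the kind of "the future is like the past" prediction Cory is describing, a toy bigram model. The corpus and names are invented for illustration; real keyboard predictors are far more sophisticated, but the logic is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: record which word followed which in past text,
# then predict the most frequent successor. It can only echo the past.
corpus = "the cat sat on the mat and the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the historically most common follower of `word`, if any."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- seen twice, vs. 'mat'/'fish' once each
print(predict_next("dog"))  # None -- no history, no prediction
```

If your typing stops resembling the corpus, the model has nothing to offer, which is exactly the limit on old data that Cory goes on to describe.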
There's also some other ways in which data has diminishing marginal returns.
Like if you think about Netflix and its giant databases of what people watch, people who
like this show also like that show, the new value that they get from
one extra viewer is pretty small, right? The model doesn't get much better as you throw more data at
it, at least not at the individual aggregate level. And so if we can imagine that there's like a
good enough level, then a rival to Netflix might be able to spin up a recommendation engine pretty
quickly, right? And the advantage that Netflix has as a result of its data repository would be pretty small.
And the competitor catches up very quickly because doubling the size of a small data set
will make the model much smarter. But adding, you know, a thousand data points to a very large data
set will do very little except waste a lot of electricity and compute time while you retrain your model with it. So it's, again, of pretty marginal utility.
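The diminishing returns Cory describes follow from basic statistics: the sampling error of an estimate shrinks roughly as one over the square root of the sample size. A hypothetical back-of-the-envelope, with invented data-set sizes:

```python
import math

def stderr(n):
    # Standard error of a simple estimate (e.g., "what fraction of viewers
    # liked this show?") scales roughly as 1 / sqrt(n).
    return 1 / math.sqrt(n)

# A small rival doubling its data set cuts its error by about 29%...
print(stderr(20_000) / stderr(10_000))            # ~0.707

# ...while an incumbent adding 1,000 rows to 100 million barely moves it.
print(stderr(100_001_000) / stderr(100_000_000))  # ~0.999995
```

On those assumptions, the incumbent's marginal data buys almost nothing, while the challenger's "good enough" model arrives quickly.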
So then let's ask about the harms that can arise from it. Well, one of the things that old data can
do is be a form of compromise. If 15 years ago, you were searching for information about how to
buy MDMA on the dark web, and now you're running for office, that's not going to help
anybody sell you anything, but it might help your political opponents destroy your life.
That's the kind of thing where you can imagine all kinds of harms accruing and where the harm
is effectively unbounded. Because it might be 15 years from now that the fact that you were
standing next to someone who next week is arrested for being a terrorist is the thing that puts you under suspicion of being a terrorist too.
And so we really can't predict all the ways in which data can harm you. And the bigger the
repository of data that you don't control, the more ways there are for false positives to emerge
from it, or even true positives, things that are embarrassing or compromising in retrospect. You know, as Cardinal Richelieu said, if you give me six lines in the
hands of an honest man, I will find in them a reason to hang him, right? With a big enough
data set, you can always find some algorithmic basis for suspicion and guilt. And then there's,
you know, the ways that states can harm you or that firms can harm you by going through that data and using that to attack
dissident movements, you know, in retrospect, if we see that, for example, being a member of Black
Lives Matter becomes as Trump has said that he wants it to be, although he, you know, let's be
clear, doesn't have the authority to do this. If it becomes branded a terrorist group and membership
in it is a federal crime, then those facts retained in third-party databases that
you don't control really put you in harm's way legally as well. So I think that the answer to
how much value do we get out of data and how much risk does data create is data that's older than
pretty recent data doesn't generate a lot of value. And data that is older than pretty recent data does generate a lot of harms.
And so on balance, there shouldn't be a lot of data gathered and there shouldn't be a
lot of data retained.
You're talking about how data isn't as powerful as it's often made out to be, right?
And as a lot of people understand it to be, because a lot of people really do think that
Facebook has, I think it's fair to say, a lot more power and a lot more capability than they might necessarily
have, or that they have this power for a different reason than they actually do. And so I
wonder, do you think that part of the reason people think that these tech companies or this
use of data or these platforms can, say, control people's minds or
kind of direct their actions or change the way they think is because a lot of people don't
necessarily understand how these technological systems work. And so they can seem almost like
magic. And if they're told that these platforms or whatever are doing these really negative things
or have these really big capabilities, then it just seems natural to believe that. It's pretty clear from the history of the
advertising industry that the one thing that the advertising industry is really, really good at
is convincing people that they're really, really good at their job. They're not actually very good
at their job, but they're really good at convincing people they're good at their job.
And so I think that that was one of the explanations that wasn't in your list.
People believe that tech is very persuasive because tech is fundamentally run by a bunch
of people whose job it is to convince us that they're very persuasive, because then advertisers
will give them money, right?
If you look at the academic literature, for example, on behavioral versus context ads
in online advertising conversions, right?
That's like,
how often will someone click an ad if you generate the ad or choose the ad by spying on them for
years and years and building these giant dossiers and trying to figure out exactly what ad to show
them versus how often will people click an ad if you just look at the article and check to make
sure that the product that you're advertising is both relevant to the article and available for sale in the country that the user is in and show them that ad,
right?
And the difference is very small, very, very small.
And when you take out all the money that you have to spend spying on people, you actually
make more money from those context ads.
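A back-of-the-envelope version of that comparison, with every number invented for illustration (the literature Cory alludes to reports similarly small gaps in click-through rates):

```python
# Hypothetical comparison of net revenue per 1,000 impressions for
# contextual vs. behavioral ads. Every figure here is an assumption.
impressions = 1_000
revenue_per_click = 0.50      # assumed payout per click, in dollars

ctr_context = 0.0100          # click-through rate, ad matched to the article
ctr_behavioral = 0.0104       # slightly higher rate from behavioral targeting
tracking_cost_per_1k = 0.80   # assumed cost of gathering, storing, and
                              # processing surveillance data, per 1,000 views

net_context = impressions * ctr_context * revenue_per_click
net_behavioral = (impressions * ctr_behavioral * revenue_per_click
                  - tracking_cost_per_1k)

print(f"contextual: ${net_context:.2f}")     # $5.00
print(f"behavioral: ${net_behavioral:.2f}")  # $4.40 -- less, once overhead counts
```

On these assumed figures, the tiny click-through advantage of behavioral targeting is wiped out by the cost of the surveillance that powers it.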
And so there's this like giant weird scam right now where people pay for snake oil.
You know, advertisers are paying because they
believe behavioral ads will sell their products better. And the empirical outcome is that they
just don't, right? That they're like pretty much neck and neck. And when you take away the overheads
and also the social harms that arise from continuous mass surveillance, they're like,
they're really, really close. They're pretty much the same thing. And it's not unusual for there to be multi-billion dollar industries that rich, powerful people spend money on that are totally useless. Hedge funds underperform index funds. If you're a billionaire who's just given $100 million to a hedge fund to invest for you, you are almost certainly going to make less money than if you put it in the same Vanguard fund that my 401k is in. And, you know, there's trillions of dollars under management in hedge
funds. You know, the fact that an industry spends a lot of money on something
doesn't tell you that it works very well. At the same time, I think that we have a very foreclosed
imagination right now about what we can expect of firms, what we can demand of firms, and how
firms should be regulated. We're in the tail end of a 40-year-long experiment of not enforcing
antitrust law. 40 years ago, our American antitrust law was hijacked by a group of academic economists
from the University of Chicago who are like ultra free market, no intervention types. And they have gone to enormous lengths to
convince judges and lawmakers that we just shouldn't enforce anti-monopoly laws. And so
without the possibility of talking about market concentration, when we talk about harms, when we
say like, oh, is the reason that we all believe the same wrong thing because
we all use the same search engine that just accidentally or deliberately gives us the same
wrong answer and no one even thinks to try a second source for it because the market has become
so concentrated that searching is synonymous with that search engine? And if that's the case, well,
then maybe the answer is to just try a
different framework for solving the problem, right? Rather than making the companies surrender their
mind control rays, maybe we just need to enforce some diversity in the market. And because that's
not really on the menu, people are left trying to figure out how to solve the problem using tools that are not made for the job. It's like we're in this car and 40 years ago, Ronald Reagan ripped out the steering wheel and threw it out the window. And we are trying to get that car to stay on the road, but we don't have a steering wheel. What do we steer with, the windshield wipers or the turn signals? And you know, like you could imagine a little bit of drag being
introduced by the windshield wipers, but it's never going to be as effective as having the
steering wheel. And so given this impoverished discourse that we have about what firms owe the
public and what states can do to bring firms to heel, we are left with these ever more outlandish explanations
about firms being superhumanly good at what they do, rather than being completely banal,
mediocre monopolists, no different in spirit to Rockefeller or any of the other monopolists,
Mellon or whoever, that now live on in infamy for
their misdeeds in the Gilded Age. Yeah, that's a fantastic point. And obviously, there is this
growing movement for and discussion about breaking up the tech companies or, you know, enforcing
antitrust legislation against them in some way, shape or form. And I wanted to ask you because,
you know, as we're having this conversation, there are growing tensions between the United States and China. And part of that
is resulting in the United States trying to ban TikTok or WeChat, or at least have them,
in the case of TikTok, have it acquired by an American company or part of it by an American
company. And do you think what we're seeing in how the United States is responding here and potentially trying to kind of break its own tech companies away or its own tech industry away from what's happening in China, do you think that potentially, you know, sets up the U.S. government to see domestic tech monopolies as something that's in the national interest as they're kind of competing with Chinese tech firms and maybe tech firms from other countries in the future? Yeah, the term for what you're talking about
in the discourse is national champions. You hear this a lot in Europe, you know,
they talk about how Finland had Nokia and they had Ericsson in Sweden and, you know,
BT in the UK and so on. And the problem is that firms that don't face competitors don't build great technology.
They succeed in spite of the failings in their technology. You know, Facebook is a company,
for example, that doesn't have users, it has hostages. And so when your industry is like
lazy and sloppy and stupid and not well disciplined by competition, when users don't have any choice, that doesn't
make for a great national champion. And one of the lessons of the various pro-competitive actions
that were taken against AT&T is that they improved American competitiveness. It wasn't until the
Carterfone decision that AT&T was forced to allow third parties to make devices to connect to the phone network,
to the RJ11 plug in your wall. That's how we got modems, right? America was not made stronger
by AT&T suppressing the growth of digital communication in order to ensure that it
could continue its monopoly over telecoms. America was made weaker, right? The strength that America got out of taking AT&T's hands off the steering
wheel of industry was this, you know, incredible upswelling of innovation and new industry that
made it a global leader, right? It's true that tech companies do project soft power around the
world. You know, Facebook is under US jurisdiction. I mean, the reason that the Beltway consensus is that TikTok is scary and that the Chinese government probably raids its data repositories as part of both its domestic and international surveillance apparatus is that that's exactly what the Americans do with their own tech giants.
And, you know, you can see that the problems that they worry about are the product of the
distortion of deputizing these firms to be an arm of the state. So, you know, why do we worry about
TikTok surveillance? Well, because we have laws that make it a felony to reverse engineer TikTok
and make your own third-party
TikTok client that allows you to use TikTok without giving up all of your privacy, right?
The Computer Fraud and Abuse Act, Section 1201 of the Digital Millennium Copyright Act,
and so on.
And because we have no laws that prevent TikTok from abusing our data.
We don't have a national privacy law with a private right of action.
And the reason for that is that the U.S. government doesn't want to put its own tech industry in shackles.
And so it's, you know, having foreclosed on the possibility of allowing us as technology users to have technological self-determination, to decide which tools we use, how we'll use them and what they'll do for us. They then have to say, okay, well, if we're going to force people to have these undesirable anti-features in the technology
in their lives, to defend them against having those undesirable anti-features being wielded
against them by a foreign power we don't like, our only option is to expropriate that tech company
from its Chinese owners and transfer it to American owners. But I am as
alarmed and disgusted at the thought of my 12-year-old, who is a very ardent TikTok user,
being spied on by Microsoft and having that data hoovered up by the NSA or the FBI or,
you know, any local deputy dog who's part of a fusion center, as I am about the People's
Liberation Army getting their hands on that data. In fact, I'm more alarmed because there's very little that the PLA wants to do to me. I don't
represent much of a fly in their ointment, whereas I'm a political activist who's not a citizen who
lives in the United States. And, you know, there's a long history of that being a very fraught
proposition. And so I would very much like to have not a TikTok that is under
US control that spies on us, but a TikTok that doesn't spy on us.
Sticking with the kind of national security point for a minute, I wonder if part of the reason that
the United States doesn't want people to be able to kind of choose the tech tools that they're
using and just have to use the ones that are offered by these major monopolies is because even though the book critiques surveillance capitalism, the United
States still does benefit from having all these tech tools to surveil its population, right?
In the book, you talk about how the East German Stasi recruited one in 60 people to serve as
informants. But in the United States, you know, they don't
need to do that because they have these tech tools, right? And even after 9-11, there was
a proposal called Operation TIPS that would have had up to one in 24 Americans working as informants,
but you know, that really wasn't necessary with the NSA, right? So do you think that with the
ability of these companies to provide these tools for the government
and their increasing willingness to do so, that that also kind of will make it more difficult
to force the United States government to take these antitrust actions, especially when,
you know, the United States does have generally a quite unresponsive and corrupt political
system?
If you look at the actual mechanisms by which mass surveillance is accomplished, the efficiencies that allow mass surveillance are all in the form of these
public-private partnerships, where the data is accumulated by private firms who actually make
money doing so, usually by charging us. So every time you pay your mobile phone bill or your cable
bill, you are paying to have data gathered on you. And then it is raided by the
state. And when I go and I talk in the Silicon Valley about surveillance and privacy, it's not
uncommon for the people in the room, you know, engineers who work for tech firms to say, look,
Google just wants my data to customize ads and improve their products. And I don't mind relevant
ads and I want better products. So I'm okay with it. But the NSA, I don't trust those
guys at all. Those guys are like all the people who weren't smart enough to get jobs working at
Google. And then I go and I speak at West Point or in the Beltway. And those people say, look,
Uncle Sam knows everything about me. I've got security clearance. I had to go to the Office
of Personnel Management and tell them about, you know, my mom's heroin habit and the fact that my
brother is gay and in the closet and whatever, so that they knew that I couldn't be
blackmailed. They knew all the secrets. And so I don't have any secrets from Uncle Sam. But Google,
all those guys care about is money. And if they could make a dime selling out their own mother,
they would. And so, you know, let's keep the data in Uncle Sam's pockets, but not in Google's.
When you look at how Google and other tech companies that rely on surveillance are able
to get away with conduct that is plainly deceptive, right?
They lie to their regulators about what they're going to gather.
They lie to us about what they're going to do with it.
And they dissemble in public about what it's used for.
So it's deceptive, it's coercive, and it puts us all
at risk. We have these data breaches that result in really untold harm. And you ask yourself,
why is it that the state doesn't intervene to curb this conduct? The answer is in part that
these firms have very powerful stakeholders within the state. The reason that we don't have a facial recognition prohibition, either nationally or statewide,
is that cops are using facial recognition built into smart doorbells and other video
surveillance tools that private citizens are installing as a means of building out these
warrantless surveillance grids.
The reason we don't have effective location privacy, something that stops apps, the MP3 converter
on your phone or the beauty app that you have, from gathering your location data in real time,
second by second, and feeding it to a third party who gives them some money for it, is the same.
It helps them monetize their free app.
And then they gather all that data and sell it to law enforcement.
So ICE and Customs and Immigration and the DOJ, they're major customers for this location data.
And this is a pattern that we've seen in the past.
You know, AT&T for a long time was a horrible bully that engaged in all kinds of really
terrible conduct.
And at every juncture where the government had the opportunity
to either break up AT&T or try and make it behave itself, they went with the latter. They said,
well, we're going to put you under consent decree. We're going to fine you. We're going to ban you
from doing certain things. And all AT&T took away from that is the old saying that a fine is a price,
that the cost of doing business incorporates the cost
of these fines or these sanctions, and provided the profit is higher than the estimated fine,
they're okay with it. And by the 1950s, that attitude had been the source of so much mischief
that the DOJ stood ready to break up AT&T. But AT&T wasn't broken up until 1982. And the reason
they got a 30-year stay of execution is that the Pentagon intervened on their behalf and said, we have come to rely so heavily on AT&T to prosecute the war in Korea that we can't afford to have them broken up. Now, maybe occupying Korea was the thing the US government needed to do, maybe not, but when there are things that legitimately the state needs to do, when they find giant monopolists
and deputize them to do it, rather than developing the state capacity to do it themselves,
then the state can no longer afford to have the giant monopolist disciplined to the point that
they can no longer perform that state-like duty.
We really do have this choice, right? Are we going to try and improve the tech companies?
Are we going to try and improve the internet? Because if we try to solve the problems with the internet by making the tech companies better, then if we get rid of the tech companies,
then the internet will get worse. I think we need to make the internet better irrespective
of what's going on with the tech companies. In the same way that I think we shouldn't fund cancer research with
sin taxes on cigarettes, because cancer research shouldn't depend on people smoking. We shouldn't
fix the internet by making the big tech companies cops. If we need cops for the internet, let the
cops do it. Otherwise, all you do is ensure that the cops intervene to keep big tech from ever being
broken up or tamed.
Yeah, I think that's an essential point.
And it's one of the major things that I took away from reading the book, because obviously
there is this growing push now to get the big tech companies to kind of clean up their
act a little bit instead of necessarily breaking them up.
And you kind of argue that you can't really do both, right? It needs to be one or the other. If you enforce these
new regulations or these new kind of rules to try to clean up the platform, then you're giving these
platforms more power and more of a reason to exist in the first place. Whereas if you just break them
up, then we can start to solve those problems in that way, right? Yeah. And not just break them up, but like, prevent them from getting bigger, right? You know,
if we're gonna, if we're gonna tell them they can't merge with major competitors,
one of the things that often happens when you see anti-competitive mergers is companies say,
well, we are essential to the nation in this way, and we're not doing as good a job of it as we
could. Let us buy this major competitor, let us merge with this
major competitor, and we will deliver more value to the nation. And so what we don't want to do
is put a floor underneath how small one of these companies can be. Because even if you think that
we don't have to break them up, right, even if you think that we will someday just have competition,
naturally occurring competition that unthrones them, that does to
Google what Google did to Yahoo. We can't afford to have Google be a part of our state if we're
expecting that in the future they will be dethroned the way Yahoo was. Imagine if Yahoo was delivering
all of the classroom services to everyone doing remote education during the pandemic.
And then Yahoo was mismanaged as badly as it was and ended up on the brink of bankruptcy,
selling for pennies on the dollar to Verizon, right? That is not what we want our critical classroom infrastructure to be grounded in. Something as large as the distance education
platform for a nation, if not the planet, should not be in private hands.
And it should certainly not be in one set of private hands.
Yeah. And I want to note, like, some of my questions are kind of asking how you think
this antitrust would work or, you know, how to get past, you know, potentially the state trying
to get in the way of it or not wanting to do it. But I actually very much support the notion
and what people who are fighting for competition legislation and antitrust action are trying to
achieve, right? Because I think it is really important and it is part of the way that we
take that power back from these companies and make them serve public goals instead of
increasing their power and their profits and increasing shareholder value and things like that.
You know, you're putting your finger on something important here, which is that,
as Larry Lessig says, there's four ways that we change the world, right? There are four directions
that the world moves in. Code, things that are technologically possible. Law, things that are
legal or illegal. Markets, things that are profitable or unprofitable, and norms, things that are socially
acceptable or not socially acceptable. And they all interrelate to one another, right? Like,
we are not going to get legal reform to antitrust unless there is a sense in the public that we have
a problem with monopoly. And when we have that sense, then antitrust laws and toughening up
antitrust laws will become easier in the same way that it was
in the Gilded Age, right? The Sherman Act and the Clayton Act, the original antitrust laws in the
US, it was 20 years after they were passed that they were finally enforced. And what happened in
the interim was not a legal shift. It was a normative shift. They had the laws, but they
didn't have the outrage. And so the outrage is part of the story. When people say, how do you get from being angry about this to changing it? Step one is get angry about it,
because if it matters to you, then it will matter to the political classes.
Then entrepreneurs will see that there is a market there to serve. Then technologists will
see that there are tools that need to be made, that there is an audience for new technological breakthroughs. And all of that is in service to a world in which we do have more competition.
When I look at some of the arguments, I feel like it's kind of harking back to the golden
age of capitalism that ran from about the 1950s to the 1970s, right, where there was this
significant growth where labor got a bigger
share of the pie and was actually able to enjoy more of the benefits of capitalism and growth
than they have for, say, the past 40 years, right? And I worry that looking back at that period and
saying, you know, if we break up these big tech companies and these other monopolies and other sectors of the economy,
that we're going to return to this good period. Going back to what you said about materialism,
it kind of misses this point that there were very specific material conditions that made that
possible, which are not the same material conditions that we're dealing with today.
And so I wonder if part of the response to big tech's power and
their monopolies right now is not just to break them up, but really needing to look at the way
that they operate and whether even when they are broken up, just because of the profitability
motives and all these things, can we still ensure that they're serving the public good? Or do we ultimately need to look at ways
that we can try to develop ways of making technology or having technology work that
kind of go beyond capitalism, that have more worker control, community ownership, and things
like that? The 30 years you're talking about, those post-war years, they call that time,
in French, they call it les trente glorieuses, the 30 glorious
years. And it's true that there were some unique circumstances, right? We had just emerged from a
war. The debt to asset ratio in most economies was better than it's ever been. I mean, if you
think about what happened during the war, there was massive government spending to put people to
work making munitions. And there was, you know, nothing to buy
because there was rationing. And so after the war, people had like a lot of what they call dry
powder, right? They had a lot of, they had a lot of money in the bank to spend and a lot of capacity
in the economy, right? There are a lot of people coming back who could be put to work and, and
a lot of new technologies. And there was a lot of stuff to do, right? We had to rebuild Europe.
We had to rebuild Asia. We had to fix all the buildings that have been falling down. I remember
when I lived in East London, I had a friend who came by and we walked around and he said,
you know, the first time I walked through this neighborhood, I was with a friend who every time
we passed a giant parking garage, he said, of course, that was bombed by the Nazis. And he said,
did Hitler drop parking garages on East London?
But there was all of this work to do.
There were buildings to build.
There was stuff that needed to be done.
There were workers to do it.
And there was a willingness to spend to make it happen.
There was a sense of shared purpose and urgency.
And I think that we do have a lot of those circumstances again, right?
That we are entering a period in which we are going to be in a crisis that will
require massive spending. If you think about the scale of the challenge on our horizon,
we're probably going to have to relocate most coastal cities in the world like 20 kilometers inland or at
least significantly uphill, right? That's just one of the challenges.
We will probably have multiple pandemics because
climate change is going to drive organisms and their parasites into new places where there's
no resistance to the parasites and no predators for the organisms. And so we'll have lots more
pandemics. We are going to have not just social unrest, which is a thing that we can or can't
do something about. We might be able to avert that
through good responsive politics. But we are going to have massive numbers of refugees as their homes
are burned or flooded or rendered uninhabitable because of extended periods of heat. Here in
Southern California, we're going to hit 44 degrees centigrade each day this weekend. And we are going to need every single person's labor, right? We're going to have things that need doing that will consume and exceed all the available labor that we have in the world for hundreds of years. So I'm not worried about automation-driven unemployment, not just because self-driving cars are a fantasy cooked up by self-puffing billionaires, but also because even if we had self-driving cars and every truck driver
was put out of work, there's jobs for them, right? There's jobs for all of us. There's jobs for our
kids and their kids and their kids and their kids. So we do have this. And then the other thing that
we have is a need for evidence-based policy. Because we started this off talking about corruption and how industry concentration creates corruption. And when an industry is small enough that they
all fit around a single table, and when the industry is profitable enough that they have
lots of excess capital that they can put to work on regulatory projects, right? Instead of making
new products, they get laws or rules passed or removed in ways that are favorable to them. Then instead of doing the thing that we think is most likely to be best for our
general prosperity, we do the thing that we think will be best for the shareholders of those
companies. And, you know, people look at that photo of Donald Trump with all the tech leaders
sitting around that leatherette boardroom table at the top floor of Trump Tower after the election.
And, you know, they're rightly aghast at all of those people sitting down and meeting with Trump. But let's
have a moment's aghastness here for the fact that all the leaders of the tech industry fit around
one modest-sized leatherette table in Trump Tower. That should never be the case, right? There should
be so many of them that the collective action problem of conspiracy exceeds any budget that
they could throw at it, that there would always be people who would defect from whatever consensus they
tried to force on us and on the policy sphere. And we have never been in more dire need of
evidence-based policy than today. We need to do stuff about the coronavirus crisis and the climate
emergency, and we need it to be grounded
in what we think is right based on the best evidence we can gather and the best analysis
we can perform on it, irrespective of the priorities or shareholders of major firms.
And we are so clearly not doing that on either score. And the major reason for that is because
we're not living in the 30 glorious years, because our firms are absolutely able to both nominate their referees for industry and then control them once they're in industry.
I mean, you're calling me from Canada, source of the SNC-Lavalin crisis. I'm speaking to you from the United States, source of innumerable crises. But most recently, the crisis about the access to,
I can never pronounce it right, remdesivir, the Gilead drug that was produced
at public expense that might have a significant effect on recuperation from coronavirus that
costs $8 a dose to manufacture and that they're going to charge $3,000 a dose for. We absolutely
must have a transformation in our policymaking from the parochial to the
evidence-based. And we're running out of time for it. And breaking up monopolies is a critical piece
of that. Our priorities have to be public priorities, not parochial ones. And the only
way we get there is to restore the spirit of the 30 glorious years, to restore a moment of pluralism
where we go where the
evidence takes us. And we can do it, right? If you're sitting there thinking, oh, we'll never
do that, they'll never understand the intense technical questions, how could they ever answer
them? Consider the fact that you are not dead of listeria, right? Consider how easy it is to be
dead of listeria and the fact that there isn't a single microbiologist in your parliament or
Congress, and yet they were able to make evidence-based policy, and you are not dead of listeria. The roof
over your head hasn't fallen in. People build buildings whose roofs fall in all the time; the
reason that the roofs don't fall in anymore is because we figured out the best answer
to hard technical questions. And we did it not by favoring shareholders, but by favoring
evidence. We can do it. We have done it. We're not the remnants of a fallen civilization that
has forgotten how to make good policy. There are people alive today who made those policies,
and we can do it again. I wanted to end by asking you one more thing, Cory. Obviously, you write
and think a lot about technology, both in your nonfiction work and
in your fiction. And part of the goal of this podcast is also to think about the future and
what technology is going to look like in the future. And so you've talked about, you know,
how we need to take on these tech monopolies. But do you have a vision for what you think
technology and technological development should look like in the future where it serves our
collective goals and our public goals to really serve the good of everyone?
So here's what I think the major difference could be. It's interoperability. Because right now,
everything is a take it or leave it offer, right? If you want Facebook, you have to take the
surveillance. If you want Apple, you have to take the lock-in.
You don't get to pick and choose which parts of it you like. And when you contrast that with the areas where we still have real interoperability, where you can modify or ask someone else to
modify your tools so that they serve you rather than a firm, you can see that a different equilibrium
emerges, right? I'm old enough to remember the pop-up wars when web publishers were really over
a barrel. There were too many web publishers, not enough advertisers. The advertisers thought that
pop-up ads would get them more clicks. And so they started to demand really obnoxious pop-up ads
from the publishers. And the publisher's alternative was just not get any advertising
money and go broke. And so all over the web, you had ads that would pop up and play music,
and they would run away from your cursor, and they didn't have a close box.
And they were just, you know, you close them and they would spawn 12 more and they were just disgusting.
And what fixed that was interoperability.
Browser vendors, starting with Opera and then Mozilla, started to put pop-up blockers in the browser that were on by default.
And users just didn't see pop-up ads anymore. And when the publisher sat down with the advertisers,
the advertisers said, you have to put pop-up ads. And the publisher said, fine, but you need to understand no one will see them because they've modified the tool, right? Like we don't get to
control the browser they use and they have modified the tool. So they just don't see it anymore.
And that's when publishers gave up on pop-up ads or advertisers gave up on pop-up ads.
When my grandmother left the Soviet Union and was a Soviet refugee and came to Toronto
or to Halifax and then Toronto on a displaced persons boat from Germany, she didn't see
her family for 15 years, right?
She was totally out of contact with them.
It was a very high price to pay. You know, the offer that the Soviet Union had was, if you want your family,
you've got to stay in the Soviet Union, and they're not available to you as an à la carte package. When
I left the United Kingdom five years ago and moved to Los Angeles, I not only got to stay in touch with
my family, like we get on Zoom every week, but, you know,
I brought some appliances over and just bought some mains adapters and plugged them into the wall.
Like I had such a low switching cost.
I could take the parts of the UK that I liked and give up the parts that I didn't like.
So what if you could make an alternate client for Facebook, right, that allowed you to use
the parts of Facebook that mattered to you and ignore the parts that didn't or actively
block the parts that didn't? Maybe aggregate your Facebook inbox with your Twitter
inbox, cross-thread it with things from other rival services, put it through a privacy layer
like Tor, whatever that would just give you the power to control which parts of the service you
took and which parts you didn't. We would get new equilibria, right? We would get equilibria that
were oriented around users and their priorities and not shareholders and their priorities.
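As a thought experiment, here is what the plumbing of such an alternate client could look like. Every class, method, and service name below is hypothetical; no real Facebook or Twitter API is being invoked. The point is the shape: one adapter per service behind a common interface, with the filtering decided by the user rather than the platform.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Message:
    service: str  # e.g. "facebook" or "twitter" -- labels only, no real API
    sender: str
    body: str

class ServiceAdapter:
    """Hypothetical per-service adapter, written or chosen by the user."""
    def fetch(self) -> Iterable[Message]:
        raise NotImplementedError

def unified_inbox(adapters: List[ServiceAdapter],
                  keep: Callable[[Message], bool]) -> List[Message]:
    """Aggregate every service's messages, then apply the user's own filter:
    drop ads, strip recommendations, whatever `keep` decides."""
    merged = [m for adapter in adapters for m in adapter.fetch()]
    return [m for m in merged if keep(m)]

# Usage sketch: the user, not the service, decides what gets through.
# inbox = unified_inbox([FacebookAdapter(), TwitterAdapter()],
#                       keep=lambda m: "sponsored" not in m.body.lower())
```

The design choice that matters is that `keep` belongs to the user; that is the shift in equilibrium Cory is describing.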
It's such a small change. And I feel like for a lot of people, even just thinking about that,
it is so far beyond what we have right now. But you can see how that small piece of the shifting
of the control could make a really significant difference in how we use and how we
can use technology. Yeah, that word control is the key one there, right? We spend a lot of time
focused on what technology does. What we should really focus on is who it does it for and who it
does it to. Cory, I really appreciate you taking the time. Thanks so much. Thank you.
Cory Doctorow is a science fiction author,
activist, and journalist. His book, How to Destroy Surveillance Capitalism, is available for free on
OneZero. And his new book, Attack Surface, from Tor Books, is available on October 13th. And you
can get a digital or audio copy on Cory's Kickstarter. You can follow Cory Doctorow
on Twitter at @doctorow. You can also follow me, Paris Marx, at @parismarx.
And you can follow the show at @techwontsaveus.
If you liked our conversation, please leave a five-star review on Apple Podcasts.
Thanks so much for listening.