CoRecursive: Coding Stories - Tech Talk: Rethinking Technological Positivism with Cory Doctorow

Episode Date: June 15, 2019

Self-driving cars or armed autonomous military robots may make use of the same technologies. In a certain sense, we as software developers are helping to build and shape the future. What does the future look like, and are we helping build the right one? Is technology a force for liberty or oppression? Cory Doctorow is one of my favorite authors and also a public intellectual with a keen insight into the dangers we face as a society. In this interview, I ask him how to avoid ending up in a techno-totalitarian society. We also talk about Turing, DRM, data mining, and monopolies.

Links:
The Coming War on General Computation
Cory's Personal Site
Radicalized (and other books)
EFF
Website for Page

Transcript
Starting point is 00:00:00 Hello, this is Adam Gordon Bell. Join me as I learn about building software. This is CoRecursive. Then we're like Huxley-ing ourselves into the full Orwell, right? That is like the most stupid, awful, possible dystopia, right? Like to make sure people don't watch TV wrong, we make it illegal to report defects in browsers. Are you kidding me? What idiot thought that was a good idea? That is Cory Doctorow. He is the editor of Boing Boing. He's a science fiction author. I think I've
Starting point is 00:00:36 read every sci-fi book he has written. I guess even some that aren't sci-fi. He's very involved in the Electronic Frontier Foundation, the EFF. And most importantly for this episode, I would describe him as an advocate for digital civil liberties. I was thinking about how, in dystopian visions of the future, the tools that governments or corporations use to oppress people are often software and computer networks and cameras, things that software developers would have a role in building. I think that we software developers therefore have a big role in shaping the future that we end up in. This is the topic that I talked to Cory about today. Sometimes I listen to a podcast at 1.5 times or even 2x speed. Cory, I think, requires listening at 0.75 speed. He covers a
Starting point is 00:01:27 wide breadth of topics. He had a little aside about evidence-based policy that really blew my mind, and a ton of other interesting tidbits. I hope you enjoy the interview. So, Cory, thanks for joining me on the podcast. I have a copy of your new book here, which I'm quite enjoying, but I wanted to ask you some high-level questions about software and technology and freedom, maybe even. So let's start with an easy one. Is technology a force for liberation or for oppression? You know, I think that the right answer is not about what the technology does, but who it does it for and who it does it to. And I think that whenever
Starting point is 00:02:10 you see like a dark pattern in technology or an abusive technology, when you pick at it, you find that you're not looking at a technology problem, but a policy problem or a power problem. Or sometimes you might think of it as an evidence-based policy problem, which is, I think, the kind of policy problem you get when you have power problems. Like if you can't examine the evidence when the evidence gores some rich, important person's ox, then you can never ask whether or not the technology is working, because if you found out the answer, there's a pretty good chance that it would undermine the profits of someone who gets a say in whether or not you get to research whether the technology is working. You know, you can think about things
Starting point is 00:02:48 like predictive policing, right? You know, the idea of predictive policing is that you can use statistical modeling to replace the kind of police intuition which may be laden with either conscious or unconscious bias, and direct policing to the places where crime is likely to occur. But the way that these machine learning models are trained is by feeding them data from the very police whom you suspect of either conscious or unconscious bias. And you don't have to be a machine learning specialist or a mystic to understand garbage in, garbage out. It's been an iron law of computing since the first I/O systems. And if you feed biased policing data to an ML system and say, where will the crime be? It will give you a biased answer. And this has been really
Starting point is 00:03:32 well demonstrated by a guy named Patrick Ball, who's a forensic statistician who runs an NGO called the Human Rights Data Analysis Group. And mostly what they do are these very big projects, like they located a bunch of hidden mass graves in Mexico, they participated in genocide trials in Guatemala, and so on and so on. They use fragmentary statistical evidence to build out larger statistical pictures of the likely unreported human rights atrocities committed during revolutions, wars, oppressive regimes, and so on. But in this case, they decided to investigate predictive policing, and specifically a product called PredPol, which was created by a University of California professor, and it's the most widely sold predictive policing tool in the country.
Starting point is 00:04:15 And so what they did was they took policing data from Oakland, California from two years before, and Oakland is notorious for bias in its policing as well as violence. And they specifically said: given the policing data from two years before, where should we expect to find drug crime next year? Where will the drugs be in Oakland? And if you look at the map that PredPol spits out for the following year (which, given that this was two-year-old data, was a year that had already happened), it's just this giant red splotch over the blackest part of the blackest neighborhood in Oakland. And then they took the NIH data on drug use, which is the gold standard for empirical pictures of where people actually use drugs in America, and they asked: where is the drug use, which is to say the drug crime, in Oakland? And it's this kind of nice Gaussian distribution with a couple of little strange attractors, but the data is distributed very evenly across the whole map. And so you can see how very quickly bias in data can produce bias in conclusions.
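A minimal sketch of that garbage-in, garbage-out dynamic. The neighborhoods, rates, and the naive frequency "model" below are all invented for illustration; they are not from the HRDAG study:

```python
import random

random.seed(0)
neighborhoods = ["A", "B", "C", "D"]

# Ground truth (hypothetical): drug use is uniform across the city.
true_rate = {n: 0.25 for n in neighborhoods}

# But patrols concentrate on neighborhood A, so that's where arrests happen.
patrol_share = {"A": 0.7, "B": 0.1, "C": 0.1, "D": 0.1}

# Historical "crime data" = only the incidents police were present to observe.
arrests = {n: 0 for n in neighborhoods}
for _ in range(10_000):
    where = random.choices(neighborhoods, weights=list(patrol_share.values()))[0]
    if random.random() < true_rate[where]:
        arrests[where] += 1

# A naive "predictive" model: send tomorrow's patrols where yesterday's arrests were.
total = sum(arrests.values())
print({n: round(arrests[n] / total, 2) for n in neighborhoods})
# ~{'A': 0.7, 'B': 0.1, 'C': 0.1, 'D': 0.1} -- the patrol pattern, not the crime pattern
```

The model's map is a picture of where the police went, not of where the drugs are: exactly the splotch-over-one-neighborhood result described above.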
Starting point is 00:05:18 And so the problem with PredPol is that we ask PredPol where to police without bias, but all PredPol does is make us do more biased policing. But imagine instead that you took this data and you used it as a way to evaluate whether or not there was bias in policing, right? You said, okay, well, here's the empirical data about where crime takes place. Here's where the machine learning model predicts crime will take place. Now we know that our police data is producing biased outcomes. And then you might create an intervention, right? You might try and train the police or you might try and change the racial or other makeup
Starting point is 00:05:55 of the police department or some other intervention. And so now you're performing an experiment, and you need to evaluate whether it was successful. You can use exactly the same tools to find out whether your anti-bias initiatives are working or failing. And the only difference between PredPol being used to valorize racism, to give it a veneer of empirical face wash and embed it as a permanent feature of policing in Oakland, and predictive policing tools being used to root out and eliminate bias in policing in Oakland, is who wields the tool and why they wield it, not the tool itself. And so technology is a force for good when we wield it for good. And our ability
Starting point is 00:06:39 to wield it for good has been systematically undermined because the people who get to decide how we wield it are increasingly not subject to any kind of democratic controls. They act without any kind of consideration for other important equities in society. And no one gets to investigate whether or not they're doing a good job or a bad job. And so can this problem be overcome? So I get there's a perspective you're saying where you could use the machine learning to look at the bias embedded in the data. So why isn't that happening? Well, I think because tech has grown up with a radical new form of antitrust enforcement, which has been part of a radical shift of who has how much wealth in our society.
Starting point is 00:07:19 So remember that Jobs and Wozniak released the Apple II Plus while Ronald Reagan was on the campaign trail in 1979. Right. And the first thing pretty much that Reagan did in office was start to dismantle antitrust protection. As a science fiction writer, I can tell you that he relied on another science fiction writer, specifically an alternate history writer, to inform his theories of antitrust. There's a guy named Robert Bork, he's very famous for not being confirmed for the Supreme Court when Reagan nominated him, who conceived of an entirely fictional account of how antitrust law came to be in this country. He wrote a fanciful legislative history of it that said that the Sherman Act was written by people who were not concerned with monopolies, but were instead concerned only with so-called consumer harm, whether or not monopolies were used to raise prices. And everything else was fair game, cornering markets,
Starting point is 00:08:10 vertical integration, buying all of your big competitors, buying all of your little competitors, using lock-in to snuff out anyone who enters the market. All this stuff that had been illegal under antitrust since day one suddenly became legal again. And so tech has grown up over 40 years with a steady erosion of antitrust. And now we have things like Facebook, which is the largest social media company in the world. It grew primarily by buying its competitors. If you think about what other homegrown successes Facebook has had, there's pretty much none to speak of.
Starting point is 00:08:46 Same is true of Google, right? Google made search and Gmail, and everything else that they've succeeded with is something that they bought. And they would have been absolutely prohibited from buying those other companies, YouTube and Google Maps and all the rest, until Reagan came along. So we've had 40 years, a generation and a half, nearly two generations of antitrust malpractice in America and in the wider world, because Reagan wasn't an isolated phenomenon. Thatcher was elected around the same time. And in Canada, where I'm
Starting point is 00:09:19 from, we had Brian Mulroney. And they all subscribe to these radical theories that came out of Robert Bork and the University of Chicago. And 40 years later, the internet consists of five giant websites filled with screenshots from the other four. And it's not just the internet, right? Like this week, the screenwriters are firing their agents because there's only three big talent agencies left. They're all owned by hedge funds and they're screwing their clients. And that's also true in world wrestling. We're down to one world wrestling league and the median wrestler is dying at something like 46 and they're all treated as independent contractors and none of them have medical benefits. And so, you know, every industry over 40 years has been transformed
Starting point is 00:10:00 into this oligarchic structure. And as power is concentrated into a fewer and fewer hands, our ability to make policy that reflects pluralistic, broad goals, as opposed to the parochial needs of someone who wants legislatures to deliver them gifts in the form of laws that enhance their profits instead of enhancing the public good, their power grows and grows and grows. And so we conduct all of our policy in an evidentiary vacuum. And that means that all of these things that enhance shareholder returns, but at public expense, sail through with no regulatory scrutiny. That's true in tech as it is in every other regime. I mean, the biggest industry in West Virginia, it's not coal, it's chemical processing. And the biggest chemical processor in West
Starting point is 00:10:48 Virginia is Dow Chemical. And their lobby just filed comments in a docket on whether or not West Virginia should allow increased levels of toxic runoff from chemical processing in the drinking water. And Dow submitted an answer that really tells you that they're not even trying anymore: as far as they're concerned, so long as there's some reason, it doesn't have to be a good one, they'll get what they want. Because their answer was: yes, of course we can have higher levels of deadly poison in the drinking water, because the toxic levels that are approved nationally are based on the average body size of the average American.
Starting point is 00:11:26 And people in West Virginia are much fatter than the average American, and so the chemicals will be more dilute in their bodies. And besides, West Virginians hardly drink water at all, right? That is the answer of someone who's just stopped caring whether or not their ideas pass the giggle test, who knows that so long as they can write down any reason next to the box where they tick "I object," it will carry the force of law. And so, you know, that's the world we're living in now. Now, does technology have a role to play in fixing it? Absolutely. What technology is not going to do is allow us
Starting point is 00:12:00 to do what some people thought we could in the early 90s: use cryptography to create a separate parallel society in which we can live free from the interference of dumb-dumbs who don't understand technology, you know, while tyranny grows up around us, right? You know, the term rubber-hose cryptanalysis (I believe Julian Assange actually coined it) refers to the idea that if you don't have democratic fundamentals in the society you live in, you may have a key that is so long that it couldn't be guessed, not even if all the hydrogen atoms in the universe were converted to computers and all they did between now and the end of the stelliferous era was try to guess keys. But it doesn't matter if the person who
Starting point is 00:12:42 knows the key can be dragged into a little soundproof room where there's a guy with a howling void where his conscience should be and a pair of tin snips, right? Because that person will find out the passphrase anyway.
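The hyperbole survives arithmetic. A back-of-the-envelope bound (my numbers, not the episode's): roughly $10^{80}$ atoms in the observable universe, each somehow testing $10^{9}$ keys per second for $10^{17}$ seconds (about the age of the universe), gives

$$10^{80} \times 10^{9} \times 10^{17} = 10^{106} \approx 2^{352}$$

guesses. A uniformly random key longer than about 352 bits is already beyond that absurd machine, which is exactly why the practical attack targets the key-holder, not the key.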
Starting point is 00:13:24 And so what is it that technology allows us to do? It allows us to build a kind of temporary shelter that will hold up the roof after the support pillars have been removed by 40 years of increased oligarchic policy. And we can hold up the roof while we erect new pillars, while we create new democratic fundamentals. Because what the internet gives us is the power to organize like no one has ever organized before. Facebook is a machine for finding people who have hard-to-find traits in society, whether that's people who are thinking about buying a refrigerator, which the average person does two or three times in their life, or people who want to carry tiki torches through Charlottesville while dressed up as Civil War LARPers chanting "Jews will not replace us," or people who have the same rare disease as you, or people who went to the same high school as you, or people who care about this stuff. Networks allow us to find people who care about the same stuff as us and coordinate our work with them in ways that exceed the best systems that we've had through the whole history of the world. And cryptography allows us a degree of operational security that, while imperfect and while subject to
Starting point is 00:14:01 democratic fundamentals, nevertheless creates a temporary and usable edge that we can exploit to give us the space to operate while we embark on the long and vital project of reforming our society to reflect the needs of the many and not the few. So if I understand: power has coalesced over time since these antitrust laws went away. And you're saying it's not the kind of cyberpunk vision where people should abandon society and build an anarchist commune, but that they should use their power to network to change things. Is that the vision you're putting forth? We have a systemic problem and it will have a systemic answer, right? It will not have an individual answer. We've had a 40-year-long project to
Starting point is 00:14:45 convince us that all of our problems have individual causes and individual solutions. You know, the problem with climate change is like whether or not you are good at separating your waste before you recycle it. And that if only you were better, then we wouldn't be all like drinking our urine in 30 years, right? And the reality is that like you can recycle everything. You could go down to like zero waste, right? And we would still not solve climate change. The chances are that your biggest contribution to climate change is your commute. And the only way you are going to solve your commute is with public transit. And you cannot personally create public transit, right? Like you will never build your own subway system. I mean, this is like the fundamental
Starting point is 00:15:30 flaw with Elon Musk, right? He thinks he's Iron Man, right? Like he thinks you can just punch things until they're fixed, right? That you can just build a boring machine and dig a tunnel and that will solve transit. And the reality is that these are pluralistic problems with pluralistic solutions. They're problems that emerge out of collective action and they have collective solutions, but the internet is the greatest collective-solution tool we've ever had. I mean, you know, think about just source control, right? Version control or wikis allow people to pool their labor in ways that are infinitely more efficient than anything that existed before them. Imagine trying to write Wikipedia without a wiki.
Starting point is 00:16:15 Instead, you were using couriers shipping giant filing cabinets with the latest revision around to everyone who wanted to read or edit it, you know, several times a day, right? Or how source control used to work before we had source control, where either everything was written by one person, and that put a limit on how complex a system could be, or only one person at a time could work on the system. And you generally had to all be under one roof, or you had to plan it in advance so that you knew that the Austin office was working on one part and the New York office was working on another. We now have, through just, you know, Ctrl-Z and revert and check-ins and rollback, all of these things that allow us to pool our labor using informal networks that before would have needed rigid, well-defined hierarchical systems that were capital-intensive, and that put a limit on how much work you could do because there was a certain
Starting point is 00:17:12 amount of sitting around with your thumb up your ass, waiting for the courier to arrive with the filing cabinets full of Wikipedia revisions, or waiting for it to be your turn to work on the code. And now we can collaborate in ways that put the dreams of theorists of collective action to shame.
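The mechanics being gestured at (check-ins, revert, rollback) reduce to something like this deliberately tiny sketch, a stand-in for real systems like Git, with invented data:

```python
import hashlib, json

class TinyVCS:
    """A toy, in-memory version control store: each commit is a
    content-addressed snapshot, so rollback is a lookup by hash."""

    def __init__(self):
        self.commits = {}  # hash -> (parent_hash, files, message)
        self.head = None

    def commit(self, files: dict, message: str) -> str:
        payload = json.dumps([self.head, files, message], sort_keys=True)
        h = hashlib.sha1(payload.encode()).hexdigest()
        self.commits[h] = (self.head, dict(files), message)
        self.head = h
        return h

    def checkout(self, h: str) -> dict:
        # "Rollback": anyone, anywhere, any time, can recover this exact state.
        self.head = h
        return dict(self.commits[h][1])

repo = TinyVCS()
first = repo.commit({"article.txt": "draft 1"}, "start")
repo.commit({"article.txt": "draft 2, worse"}, "edit")
print(repo.checkout(first))  # {'article.txt': 'draft 1'}
```

Because every state is content-addressed, collaborators don't need to coordinate in time or space to share work.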
Starting point is 00:17:50 You know, when you think about Adam Smith writing about the pin factory, where you have an assembly line where people are making pins and one person stretches the wire and one person snips it and so on. He's rhapsodizing about this incredible efficiency relative to the individual artisan working in their workshop all on their own, how we've managed to decompose a complex process into a bunch of individual steps. And now you think about how we no longer even need to perform those steps in order. The people who perform them don't need to know each other. They don't even need to know that each other exists. They don't have to exist in the same place or in the same time. How many times has some hacker been like, I need to solve a problem? And they Google it and they find some old abandoned code sitting on a GitHub project somewhere that solves half their problem. And they sync it with their computer. And they are now collaborating with someone who might even be dead, right? Who's long gone. And that is an amazing thing, right? That is what technology buys us. And so as we sit here confronting this world that is in the grips of an uncaring oligarchy, depraved in its indifference, who make policy to benefit themselves without regard to everyone else, to the point where we now risk literally the only habitable planet in the known universe. The one thing we have going
Starting point is 00:18:51 for us that no one has ever had in a struggle like this before is the ability to coalesce groups and mobilize them in ways never seen before. Like, that's a powerful vision. I'm interested in: to what end? So there's this joke, I don't know the root of it, but it's a software engineering joke, right? It's like: an ethical developer would never write a method called bombBaghdad. They would write one called bombCity, and it would take the city as a parameter.
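The joke, rendered as a (hypothetical) code sketch:

```python
# The "unethical" version: the act is named, owned, unmistakable.
def bomb_baghdad():
    ...

# The "ethical" version: the same act, now a tidy abstraction --
# the choice of target becomes the caller's problem, not the author's.
def bomb_city(city: str):
    ...
```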
Starting point is 00:19:17 So like the joke is about the idea that- Yeah, I get that joke. I understand the joke. Okay, I shouldn't condescend to you. Yeah, no, no, no. The problem is that it's easy to overweight the immediate instrumental benefits of getting a paycheck and underweight the long-term consequences of lying on your deathbed, realizing that you participated in genocide. That's a common problem. I mean, that's why people smoke, right? Like they overweight the pleasure of having a cigarette now and they underweight the long-term consequences of dying of lung cancer. You know,
Starting point is 00:20:02 it's the behavioral economists, they have a name for it, right? Hyperbolic discounting. It's when you overweight immediate benefits and over-discount long-term consequences.
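The usual formalization, added here for reference rather than taken from the episode: an exponential discounter values a reward $A$ arriving after a delay $D$ at $V = A e^{-kD}$, while a hyperbolic discounter uses

$$V = \frac{A}{1 + kD}.$$

The hyperbola drops very steeply near $D = 0$ and very slowly far out, which is why the cigarette now so thoroughly outweighs the lung cancer later.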
Starting point is 00:20:50 And, you know, this is an old problem of our species, but it's a problem that we often rely on democracies to solve, right? One of the things that democracies can do is act as a kind of check against that convenient rationalization by creating policy firewalls that sit between you and the rest of the world. Left to yourself, the reasoning might be: oh, well, the short-term benefits are very high and the long-term costs are a long way away, so I don't need to worry about it. But when society adds, say, criminal penalties with jail sentences for people who knowingly participate in this kind of thing, well, then maybe it changes your calculus about whether or not you should be doing it, right? You know, there's other ways of doing it that don't involve coercive state power. There's a thing called the Ulysses Pact, right? So, you know, if you've read your Homer, you know that Ulysses was a hacker who went sailing around the sea and who didn't want to do things the way that normies did and wanted to do things the hacker way. And so like when you sail through the sea where the sirens are, you know that the sirens, their song will make you jump off the deck and drown yourself. And so, you know, the standard OSHA protocol for navigating those seas is to fill your ears with wax so that you never hear the sirens. But being a hacker, Ulysses is like, I want to hear the sirens and I want to not drown. And so he comes up with an alternate protocol, which is that he has his men tie him to the mast so that when the sirens start singing, although he'll want to drown, he can't. He can't throw himself into the sea. And this is the Ulysses Pact. It's when you take some action, in advance of the moment in which you know you are going to be tempted, to block yourself from temptation, right? It's why
Starting point is 00:21:54 you throw away all the Oreos the day you go on a diet. You can still go get Oreos, but you're raising the transaction costs. But it's also things like why when you start a company, you irrevocably license your code under something like the GPL, because someday your VCs are going to show up at your door and they're going to say, well, you know, those 20 people that you convinced to quit their jobs and like trust their mortgage and their kids' college funds to you, if you want to make payroll for them next week, you're going to have to close the source that you promised the world would be open. And you can say, you can threaten me all day long. I tell you what, I can't close the source. It's
Starting point is 00:22:27 an irrevocable license, right? And so these Ulysses Pacts are one of the ways that our strong self guards against our weak self, right? It's a way that we can reach into the future and change the calculus that we make about immediate benefits and long-term costs. That's a great idea. You had this character in your book, she's not a main character, but this Wyoming who was working on the software, right? For a bread machine. Yeah. Like, I'm wondering, what would you tell her to do? You know, if you were speaking to her, she's a software developer who's building something that I believe you think is bad,
Starting point is 00:23:05 right? But she's not a bad person. How would she even know what she's doing is bad? Well, I mean, part of the story is her discovery that what she's doing is bad. And we should mention that it's not just a bread machine. It's a toaster that is sold at a subsidy, but it's locked to a certain kind of consumable. It uses a vision system, so it will only heat authorized meals and toast authorized breads, a little toaster oven. And it's being used as an extractive mechanism against poor people in subsidized housing (these appliances are non-consensually installed in their homes as a condition of their residency), and it thereafter draws them into a spiral of debt.
Starting point is 00:23:40 And as she comes to realize this stuff, as she comes to realize more viscerally what's going on, she meets people who are involved in it and has to come to grips with her conscience. So the first thing that she does is she tries to help them subvert it. And then the second thing that she does is she tries to co-opt the individuals who she has come to like and make them part of the system. She basically gets them a job offer from her big, terrible company, thinking that so long as she can bring along the people who matter to her, all of the people that she hasn't met who also suffer under this system won't matter. And when the person whose story it is, the refugee woman, Salima, who she tries to get a job offer for, rejects this job offer and says,
Starting point is 00:24:27 no, it's all of us or none of us, you know, she then helps them more permanently subvert the whole system. She helps them work out how to make a VM that can respond to challenges, nonces from the server, so that it looks like they're running unmodified firmware. And meanwhile, she helps them come up with firmware modifications that let them reconfigure the toaster ovens so that they can toast any damn bread they want.
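A sketch of why that trick can work, assuming (my assumption, not the book's) that the server's challenge asks for a MAC over the official firmware image plus a nonce. Anyone who keeps a pristine copy of the image can answer correctly, no matter what code is actually running:

```python
import hmac, hashlib, os

def attest(firmware_image: bytes, nonce: bytes, device_key: bytes) -> bytes:
    # What the server expects: proof derived from the official image + its nonce.
    return hmac.new(device_key, firmware_image + nonce, hashlib.sha256).digest()

device_key = b"per-device-secret"          # hypothetical
official = b"...vendor firmware blob..."   # hypothetical
nonce = os.urandom(16)                     # the server's challenge

# Honest toaster, actually running the official image:
honest = attest(official, nonce, device_key)

# Modified toaster: runs its own firmware but kept a copy of the official
# image around, so it computes the identical answer.
modified = attest(official, nonce, device_key)

assert honest == modified  # the server can't tell the difference
```

Remote attestation that only hashes the stored image fails exactly this way once the attacker controls the device and the key it holds.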
Starting point is 00:25:06 So I really enjoyed the story. And I found that you had this thing called the bad technology adoption curve. What is that? Okay. So when you've got a terrible idea about technology, you know that it's going to generate a lot of complaints. And so you need to try it out on people who nobody is going to listen to when they gripe. And so you have to find people with no social agency as your beta testers. And these non-consensual beta testers, we draw them from the ranks of children, mental patients, prisoners, people receiving welfare benefits. And, you know, then we move up to blue-collar workers and eventually white-collar workers, once we've gotten the kinks out and once we've normalized whatever terrible idea we have about technology. And that means that we can actually peer a little into the future, which is a bit weird, because for the most part we can't know what the future holds. Science fiction writers in particular have no idea what the future holds, which is good
Starting point is 00:25:48 news because if the future were predictable, then what we did wouldn't matter. I believe that what people do matters. We change the future by our actions. And so we can use these kids, though, and other groups of people who are disenfranchised to get a little bit of insight into what our future holds, because all other things being equal, if we figured out a way to make kids' lives miserable with technology, in a few years, we'll be doing it to adults too, right? You know, like if you think about, say, just surveillance cameras, right? Surveillance cameras were once a thing
Starting point is 00:26:21 that were literally limited to maximum security prisons. And now they're things that we pay Google to install in our own houses. And so, you know, that's how these adoption curves work, right? So by making this story, Unauthorized Bread, a story about refugees, I was able to make more vivid the underlying tale that we are living through when we say, well, it's okay and legitimate for Apple to decide who you can buy your apps from. And it's okay and legitimate for HP to decide who you buy your ink from. And it's okay and legitimate for GM to decide which car parts you can install in your engine, because now GM is using cryptographic handshaking to validate new parts when they're installed. And even if the part is fully functional and an equivalent replacement
Starting point is 00:27:10 for the OEM part, the GM engine can still reject it, can still force you to buy the OEM part. And if the dead hand of the manufacturer rests on a product after you buy it and can force you to arrange your affairs to benefit the manufacturer's shareholders, even at your own expense, then you don't own property anymore, right? That makes you a tenant who's subject to a license agreement or a lease, not an owner. Owners, you know, Blackstone on property, the classic text that you read when you do a first-year property law course, Blackstone describes property as that which man enjoys sole and despotic dominion over, to the exclusion of every other man in the universe, right? That's not like the EULA, right? The EULA
Starting point is 00:27:58 runs counter to that. A EULA that says, by being dumb enough to use this product, you agree that I'm allowed to come over to your house and punch your grandmother and wear your underwear and make long-distance calls and eat all the food in your fridge, is not what Blackstone had in mind when he talked about property. So what about services, right? You know, like Netflix versus owning a DVD. Do you see that as a lessening of, a decrease in, our rights? Well, it depends, right? It's not Netflix in a vacuum, but Netflix plus DRM, for sure. So historically, copyright has been what they call a bundle of rights. So there's a bunch of things that you get as a copyright holder, and there's a bunch of things you don't get as a
Starting point is 00:28:36 copyright holder. I mean, obviously, being a broadcaster or someone who licenses work for broadcast means that someone can't record the broadcast with their VCR and then sell the VHS cassettes. But they sure as heck can record it with their VCR, lend the tapes to their friends, store them forever, and never have to go to the video store to buy your VHS. This was actually decided by the Supreme Court in 1984 in a case called Sony versus Universal, the Betamax case. And what happened after the Betamax case, after this era in which technologies that might be used to infringe copyright, but that were also widely used for things copyright permits, were accepted by copyright law, is that we passed this law called the Digital Millennium Copyright Act, or DMCA. It's a big, gnarly hairball of copyright, but the relevant clause here is Section 1201.
Starting point is 00:29:30 And that's the one that makes it illegal to tamper with DRM, even if you are doing something lawful. Recording Netflix: totally lawful, right? But breaking DRM is radioactively illegal. Trafficking in a tool that lets you record Netflix without Netflix's permission, if that tool bypasses the DRM, that's punishable by a five-year prison sentence and a $500,000 fine. So normally what you'd expect is: if a company like Netflix comes into the market with a bunch of offers, some of which are fair and some of which are unfair, and the unfair ones are things like, well, you can't record it and watch it later. And you can't decide what device you watch it on.
Starting point is 00:30:10 You can only use devices that Netflix has blessed and so on and so on that you would expect that other manufacturers would enter the market to correct that in the same way that like, you know, if your car came with the cigarette lighter fitted with a special bolt that needed a special screwdriver to remove it, and that bolt was there to stop you from using a third-party phone charger, and you had to buy their phone charger, which costs like $200 or would only charge certain brands of phone, you would expect that the next time you went to the gas station, next to the giant fishbowl of $1 phone chargers would be another giant fishbowl full of 50 cent screwdrivers to remove that bolt, right? That's how markets usually function. But because of the DMCA, companies have figured out how to bootstrap themselves into something that you might call felony contempt of business model, right? Where once you add like a one molecule thick layer of DRM to a product, then using that product in a way that advantages you instead of the manufacturer becomes a literal
Starting point is 00:31:12 felony. And so Netflix is fine as far as it goes, but Netflix plus DRM means that a bunch of features that would otherwise be legitimate are off-limits. Now, that's like the opening act. That's just the warm-up. Because the real problem is that the DMCA is so broadly worded, and has been so widely abused to produce bad jurisprudence, that it now stretches to prohibiting independent security auditing, because when you reveal defects in a product that has DRM in it, you make it easier for people to circumvent the DRM. And so the upshot is that we have this growing constellation of devices that represent unauditable attack surfaces that will contain defects because, you know,
Starting point is 00:31:57 things contain defects. The halting problem is real. And those defects can be researched, discovered, and weaponized only by criminals. Good guys who in good faith discover those defects can only report them to the extent that they do so with permission and under terms set by the manufacturers whose defects are revealed. So we have made companies the owners of products even after they're sold. And this goes way beyond whether or not you can record Netflix or whether there's a competitive market. This goes to: now that we have DRM in browsers, which was standardized by the W3C a couple of years ago, we now have 2 billion browsers in
Starting point is 00:32:35 the field that have unauditable attack surfaces that can be used to control everything you do with a browser. Some of these are control surfaces for actuating and sensing IoT devices, so they may be a vector or a gateway for inserting malware into industrial control systems or into car engines or into pacemakers, all of which have browser-based control systems. Or they may be used to compromise everything you do with your browser: your banking, your love life, your family life, you know, and so on and so on. And so that's the real hazard here, right? The entertainment industry side of things, you know, whatever, I make my living from it. That's fine. I'm a science fiction writer. I hope that we will have good rules for my industry. But like,
Starting point is 00:33:22 honestly, if we decide to allow policy designed to protect entertainment that helps you survive the long slog from the cradle to the grave without getting too bored, and if we take that and turn it into the basis for instituting a kind of totalitarian control in our networks, then we're like Huxley-ing ourselves into the full Orwell, right? That is like the most stupid, awful possible dystopia, right? Like to make sure people don't watch TV wrong, we make it illegal to report defects in browsers. Are you kidding me? What idiot thought that was a good idea? There's something I'm not clear on, right? So these are a lot of problems to do with legislation around technology, but they're all focused on technology.
Starting point is 00:34:07 And I'm wondering why. Why is software so important? Nobody envisions a dystopia where regulations around e-cigarettes or something lead to a horrible end result. Why is software something that has that much power? Well, I think that there's two things going on. One is that the tech industry is coterminous with the dismantling of antitrust. So we're just the first industry that has had this terrible policy. And if you want to see how things can go horribly wrong in domains that have nothing to do with software if they're not regulated, just think of food safety, right? Cholera will kill you dead, as will listeria,
Starting point is 00:34:43 right? And in ways that are like much more immediate and literally visceral than any software bug, right? You know, my grandmother was a war refugee. She was a child soldier in the siege of Leningrad and she married my grandfather who she met in Siberia. He was a Pole and my dad was born while they were living as displaced people in Azerbaijan. And then she lost track of my grandfather
Starting point is 00:35:03 and ended up in Poland for a while with his brother. And the only Polish she really ever learned was how to curse in Polish, so it's the only Polish I know. And the only Polish curse word I know is cholera, which just means cholera. That's how terrible cholera is. It's the all-purpose thing that you say when you're angry, right? Cholera: really bad news. And so, evidence-based policy is only possible to the extent that we have pluralistic governance and not oligarchic governance. And we haven't had that. We've been losing it a drip and a drab at a time for 40 years, and we're at a pretty low point right now. But then the other thing, of course, is that software does have a special nature, right? I'm not a tech exceptionalist,
Starting point is 00:35:49 but I do think that both software and networks, as they're currently construed, have a special nature that regulators struggle with. And I think that that nature is their universality, right? Prior to Turing completeness, prior to the von Neumann machine, you had these specialized mechanical or electronic calculators. If you wanted to tabulate a census, you built one machine. If you wanted to make an actuarial table, you made a different machine, and your ballistics table would be on a third machine. And short of dismantling the machine down to the component level, there was no way to repurpose one machine to do the other kind of computation. And what Turing gives us is this idea of completeness, of being able to build a single universal machine that can compute any instruction that we can
Starting point is 00:36:36 express in symbolic language. And, you know, we go from, if you can imagine, like our paleo computing forefathers and foremothers struggling to wire together a different machine for every purpose and then going, oh my goodness, we figured out how to make one machine that does it all. This is incredible. It's like, who knew that this was possible? Now we actually struggle to get rid of Turing completeness, right? There are so many people who would love to build Turing complete minus one, right? Or, you know, some limited Turing Completeness set, right? Like, make me a laser printer that also won't run malware, right? Like, we don't know how to do that. And, you know, periodically, people show that you can,
Starting point is 00:37:16 like, Ang Cui did this research on HP inkjets, where the way that you updated an HP inkjet's firmware, at least five years ago, was that in the PostScript file that you sent to the printer, you had a comment line that said: new firmware blob begins here. And it would ingest everything between that and the end-of-firmware-blob marker and flash its ROMs with it. No checks. And so literally, I could send you a file called resume.doc (he did it in 100 lines of code, hidden, not visible), and when you sent it to the printer to print out my resume, your printer would be flashed with my new firmware, which would then do things like open a reverse shell to my laptop out of your corporate firewall, crawl your network looking
Starting point is 00:38:03 for zero days, compromise every computer it could find, scan every document that was printed for credit card numbers, and make sure to flag those and send them to me. I mean, you know, just awful things.
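The fix for that particular hole is mundane: authenticate before you flash. A minimal sketch of the idea; the update format, key handling, and device call here are hypothetical, not HP's actual mechanism:

```python
import hmac, hashlib

VENDOR_KEY = b"vendor-signing-key"  # hypothetical; real devices should use public-key signatures

def write_to_rom(blob: bytes):
    print(f"flashing {len(blob)} bytes")  # stand-in for the real ROM write

def flash_firmware(update: bytes):
    # Hypothetical update format: a 32-byte MAC followed by the firmware blob.
    mac, blob = update[:32], update[32:]
    expected = hmac.new(VENDOR_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("unsigned or tampered firmware; refusing to flash")
    write_to_rom(blob)
```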
Starting point is 00:38:42 And so people would love it if you could make a computer that was almost Turing complete, or even just a programming environment that was almost Turing complete. I remember when it was kind of big news that someone figured out how to make PostScript Turing complete, but then a couple of years later someone presented research showing that Magic: The Gathering is Turing complete; you just need a big deck of cards. So Turing completeness goes from this thing that was like a miraculous discovery, rare and precious, to something that's almost like a nuisance, an unavoidable nuisance. It's like pubic lice or something, right? We just can't get rid of it. It's in all of our systems. I go to Burning Man, and they say that glitter is the pubic lice of the playa, because some people insist on putting glitter on their body, and once someone is wearing glitter, everybody ends up wearing glitter. Turing completeness just keeps creeping in. You know, I remember seeing a presentation, I think at a CCC, where someone stood up and said: I was investigating this toy programming language that came with a new social network. This is long enough ago that we still had new social networks, back before people were like, I don't know, Facebook will just kill you. Don't make a
Starting point is 00:39:29 social network. And there was a little toy programming language to animate sprites on your page, on your homepage, on this social network. It had an X and a Y and a speed and just a couple of other commands, looping, really primitive structures. And they said: I figured out how to combine all of those commands to create Turing completeness. And I wrote a virus, and I now control all the pages on the entire social network.
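It takes almost nothing to cross that line. A sketch of my own, not the CCC talk's: an interpreter with just increment and decrement-or-branch-if-zero is already a counter machine, and counter machines with a couple of registers are Turing complete in Minsky's sense:

```python
# Ops: ("inc", r, _) adds 1 to register r;
#      ("decjz", r, t) decrements r, or jumps to instruction t if r is zero.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, r, target = program[pc]
        if op == "inc":
            regs[r] += 1
            pc += 1
        elif regs[r] == 0:   # decjz with an empty register: jump
            pc = target
        else:                # decjz with a non-empty register: decrement
            regs[r] -= 1
            pc += 1
    return regs

# Addition, written in this two-instruction language: drain b into a.
add = [
    ("decjz", "b", 3),  # b empty? jump past the end (halt)
    ("inc", "a", 0),
    ("decjz", "z", 0),  # z stays 0: an unconditional jump to the top
]
print(run(add, {"a": 2, "b": 3, "z": 0}))  # {'a': 5, 'b': 0, 'z': 0}
```

If a "toy" language can express these two operations and a loop, the whole universality package comes with it, sprite animations or not.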
Starting point is 00:40:12 Right. So it would be so great if we could figure out how to not always be Turing complete, but we can't. We also can't figure out how to make an internet that routes all the messages except for the ones that we don't like, or all the protocols except for the ones that we don't like, or connects all the endpoints except for the ones that we don't like. Not because this wouldn't be totally amazeballs, but because that is not how the internet works. The reason the internet is so powerful and flexible is because you can hide protocols inside of other protocols. You can proxy and route around blocks. We can do a bunch of things that irretrievably run counter to this. And, you know, my friend Catherine Moronic, she says every complex ecosystem has parasites. You know, the only systems that we've ever had that didn't have these universal properties were systems that were monocultures, like CompuServe, right? Where CompuServe, by dint of being disconnected from everything else, by not having to federate, by being able to control all the services that it rolled out, CompuServe was able to attain a high degree of control over what its users did. But that happened at the expense of being able to be useful
Starting point is 00:41:06 to more users than it was ever able to command. The reason that the web grew is not because everybody bought computers. Everybody bought computers because the web grew. And the reason the web grew is that we let anyone connect anything to the web, and all of a sudden there was a reason for everyone to get online. And so this is the underlying problem for regulators, because regulators are accustomed to thinking of complicated things as being special-purpose, and special-purpose things as being regulatable, right? If I say to you, distracted driving is a problem, we have to ban car stereos that can interface with telephones.
Starting point is 00:41:45 You might say that's a terrible idea. You might say it's not supported by the evidence, but you wouldn't say then it won't be a car anymore, right? But if I say to you, you need to make a computer that can run all the programs, except for this one that really irritates me. It doesn't matter how sincere my irritation is. You can't make that computer.
Starting point is 00:42:05 Right. It won't be a computer anymore. That's interesting. In fact, like that's the problem, I guess, from the regulatory perspective, but it seems like that's also the solution to the oppressor element. Right. And I think it comes up in this toaster story as well. Right. Like if you can't restrict what these computers can do because they're Turing complete, it's like, you know, hey, thanks, Turing. Sorry, we were a dick to you, but now I can unlock whatever we got. Yeah. I mean, that is the amazing thing that we get from technology. Technology giveth and technology taketh away.
Starting point is 00:42:36 I think that it's important to note when we talk about regulators that there's one other dimension here, which is that regulators are perfectly capable of making good policy about complex technical realms that they don't personally understand, right? Like the fact that you are not dead of a foodborne illness and that yet there are no foodborne illness specialists in Congress tells you that we can make rules even though we don't have the individual expertise embodied in the lawmakers. That's why we have things like expert agencies. And those expert agencies, in theory, their oversight is by people who are not partisans for the industry that they're supposed to be regulating and instead are watchdogs. And so,
Starting point is 00:43:16 when you have that working well, it works really well. And oftentimes, even very thorny questions can have both empirical and political answers, and even the political answers can be informed by empiricism. So my favorite example of this is the guy who used to be the drug czar of the UK. It was a guy named David Nutt. He's a psychopharmacologist. He was fired by Theresa May, who's now the prime minister, back when she was home secretary. And Nutt undertook a redrafting of the way that the drug authority scheduled potentially harmful recreational drugs, you know, the Schedule A, Schedule B, Schedule C. And what he did was he convened an expert panel of people who had diverse expert experience in this, and he asked them to rate all of the drugs
Starting point is 00:43:59 that were in the pool, in the mix, using their best evidence, the studies they'd done themselves or the literature. He asked them to rate how harmful each drug was to the user of the drug, the user's family, and the wider society. And then they did statistical modeling to see which ratings were stable overall, regardless of how you weighted those three criteria. Some drugs just stayed in the same band no matter how you prioritized harm to self, harm to family, and harm to the wider society. But some of them moved around a lot, depending on how you weighted those different priorities. And so Nutt then went to the parliament and he said: all right, some of these drugs,
Starting point is 00:44:43 you don't get to make a choice about, right? We know where they go, because there's an empirical answer to how harmful they are and where they sit in the schedule. But then there are some drugs where your priorities will inform where we put them. We're not going to let you tell us where they go, but we're going to let you tell us what your priorities are. And once you tell us what your priorities are (which is a political question that has no empirical answer), we can tell you empirically where these drugs should be scheduled based on your political preferences.
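A toy version of that stability analysis, as I understand it from the description (all drugs, scores, and weights invented): score each drug on the three harm axes, sweep the weightings, and see which rankings never change:

```python
import itertools

# Hypothetical 0-100 harm scores: (harm to user, harm to family, harm to society)
drugs = {
    "drug_w": (90, 80, 85),
    "drug_x": (40, 60, 30),
    "drug_y": (45, 55, 35),
    "drug_z": (10, 15, 10),
}

def rank(weights):
    score = lambda d: sum(w * h for w, h in zip(weights, drugs[d]))
    return tuple(sorted(drugs, key=score, reverse=True))

# Sweep a grid of political priorities (three weights summing to 1).
rankings = set()
for a, b in itertools.product(range(11), repeat=2):
    if a + b <= 10:
        rankings.add(rank((a / 10, b / 10, (10 - a - b) / 10)))

print(rankings)
# drug_w is always first and drug_z always last (empirically settled),
# while drug_x and drug_y trade places depending on the weights --
# that residue is the political question.
```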
Starting point is 00:45:25 So lawmakers are perfectly capable of being informed by experts and still making political decisions that reflect domains or elements of regulation that have no one empirical answer. And the reason we don't have that today with our technology is not because lawmakers are ignorant, and not because we've fallen from grace, right? Not because we've lost our ability to reason, but because, as Upton Sinclair said, it's impossible to get a man to understand something when his paycheck depends on him not understanding it. Lawmakers rely on a money machine from industry to get and keep their seats. And then they rotate in and out of industry when they finish their stint in Congress. And that's also true of regulators. And it's true in part because of concentration. When you only have four big companies left in a sector, obviously the only people who are qualified to regulate them, to even understand how they work, are going to be someone who's an executive with one of them, and probably an executive with lots of them,
Starting point is 00:46:09 right? You know, it's not just that like Tom Wheeler, who was Obama's FCC chairman, was a former Comcast executive, nor that Ajit Pai, who's Trump's FCC chairman, is a former Verizon executive. But if you look at their CVs, you see that they worked for several of the four remaining giant telcos or five remaining giant telcos. And moreover, they're like godparents to executives at the other telcos because everyone else is an alumnus of one of those companies. They are married into them. They are intertwined with every single company left in their industry. And so those people, they may have the technical expertise to be regulators and
Starting point is 00:46:45 answer empirical questions, but they have so many conflicts of interest that their answers must always be suspect. And so this is not just about the character of software and its pluripotence. It's also about the character of the moment we live in, the moment in which software has become important. And it's this moment in which we have lost the ability to reason together and come to empirical answers, because we've chosen to turn our back on that part of our regulatory process. For me, myself: you're saying there's a pluralistic solution of us all coming together, but what steps do I take to bring in this internet culture, to make sure it stays around and doesn't become oppressive? What can a person do? What can somebody who is, you know, a software developer do? Can we help push the future in the direction of an open internet, of a non-oppressive regime, et cetera? Yeah. So I'm a great fan of a framework developed by a guy named Lawrence Lessig. He's one of the founders of Creative Commons,
Starting point is 00:47:48 cyber lawyer, really, really smart and wonderful guy. And he talks about how the world is regulated by four forces. There's code, what's technologically possible. There's law, what's legally permissible. There's norms, what's socially acceptable. And markets, what's profitable. And that all four of these forces interact with one another to determine what actually happens. You know, things that are technologically impossible don't happen, right? You could have a terrible law that says everyone is allowed to carve their name on the surface of the moon with a green laser. But until we have green lasers that can carve your name on the surface of the moon, the law can say whatever it wants and nothing is going to happen as a result.
Starting point is 00:48:28 So what's technologically possible obviously has a huge influence. What's legally permissible changes what's profitable, right? It's not that things that are illegal can't be profitable, but the transaction costs of being a drug dealer, for example, include a whole bunch of operational security measures that take a big chunk out of every dollar that you earn, because you can't just go on Craigslist and say, I have heroin for sale and here's my house, come on by and get as much as you'd like. And so that makes the whole process a lot more complicated and harder to manage and much more expensive and less profitable. Things that are profitable,
Starting point is 00:49:10 those things are easier to make legal, right? If you are making so much money from what you do that you can spare a few dollars to send lobbyists to Congress, or to get your customers to agitate on your behalf, or both, then it's easier for you to make what you do legal and to expand the legality of what you're doing. And norms change what's profitable as well as what's legal, right? It's very hard to pass a law that flies in the face of norms. You know, one of the reasons that we're seeing marijuana legalization all over the place is that, normatively, smoking marijuana has become pretty acceptable. Same thing with gay marriage, right? Bans on gay marriage changed in part because, normatively, being gay changed; the social acceptability of being gay changed. And so the law followed from that. And so all of these things swirl around. And one of the implications of this is that when you find yourself hitting a wall trying to effect change in the world, you can ask yourself: which of these axes was I pushing on? Because you may have run out of headroom or freedom of motion in one axis, right? The law,
Starting point is 00:50:12 you may have hit the boundaries of what the law can do, but maybe you can make a technology, right? Not a technology that allows you to exist indefinitely outside of the law, but a technology that allows you to agitate for a change in the law, right? Or a technology that allows you to change normatively how we think about the law, right? Say an ad blocking tool, right? So one of the things that I think is going to be hugely effective at pushing back on the expansion of control over browsers is that ad blocking is the most widely adopted technology in the history of the world in terms of time from launch to now. And it's the largest consumer revolt in history.
Starting point is 00:50:52 And if it gets to the point that enforceable EULAs on browsers plus DRM make it harder to block ads, then you will see more scope for legal change to legalize ad blockers that have to go beyond what they do now, that have to be able to violate EULAs even though that puts them in conflict with the Computer Fraud and Abuse Act, that have to be able to circumvent DRM even though that puts them in conflict with the Digital Millennium Copyright Act, and so on, because, normatively, people want to keep their ad blockers. And so, legally, we will open up the
Starting point is 00:51:26 space for legal reform. And technologically, we got there because someone made an ad blocker and put it in the world, right? So all of these things swirl around and around. And so when you've run out of headroom in one domain, try seeing if you've got a little freedom of motion in the other. You know, I live in Los Angeles now, and I'm one of nature's terrible parkers. And every now and again, I have to parallel park my car. I never really owned a car before now. So every now and again, I have to parallel park my car. And when I do, I'm one of those guys who has to twist the wheel all the way over to the
Starting point is 00:52:00 right and then tap the gas pedal and get one half inch of space from doing that. And then I turn my wheel all the way to the left and I tap the gas pedal and I put it in drive and I get another quarter inch. And inch by inch, I am able to work my way into parking spots that people with better geometric sense than me can get into in a few seconds. But in a stepwise way, I'm able to finally get my car up to the curb. You can think of it as being a kind of hill climbing exercise. When we're writing software and we have to traverse a computational terrain that is so complex that enumerating it would take so long that by the time we were done, it would have changed so much that we would have to start
Starting point is 00:52:41 enumerating it all over again. We have another way of solving the problem, right? We do it through hill climbing. We analogize a software agent to something like an ant, you know, an insect with only forward-facing eyes. And so it can't sense the gradients that it wants to ascend by looking at them, but it can poll its legs and ask which direction it knows about will take it towards higher ground. And it can proceed one step in that direction and then poll its legs again to see which leg is now standing on higher ground. And as it attains each step, it finds another step that it can take. And when it runs out of steps, it's reached its local maximum. It may not be the highest point it could have attained if it could see the whole landscape, but it can see how to get to the highest point available to it from where it's starting right then.
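To make the analogy concrete, here is a minimal sketch of that ant-style hill climbing in Python. None of this is from the episode: the terrain function, the starting point, and the four-neighbor step rule are all invented for illustration.

def height(x, y):
    # A stand-in landscape with a single hilltop at (3, 2); any scoring
    # function you can evaluate at a point would work here.
    return -((x - 3) ** 2) - ((y - 2) ** 2)

def hill_climb(x, y):
    # The "ant" can't see the whole terrain; it can only poll its four
    # neighboring positions and step to whichever one is highest.
    while True:
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        best = max(neighbors, key=lambda p: height(*p))
        if height(*best) <= height(x, y):
            return x, y  # no neighbor is higher: a local maximum
        x, y = best      # take one step toward higher ground

print(hill_climb(0, 0))  # steps uphill until it reaches (3, 2)

On a landscape with several hills, the same loop would stop on whichever summit is reachable from its starting point, which is exactly the local-maximum caveat Cory describes.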
Starting point is 00:53:27 And so we can use hill climbing as a way to think about our tactics as we move towards the strategic goal of a more pluralistic, fairer society where technology is a tool for human thriving instead of human oppression. You can ask yourself at any given moment which steps are available to you legally, technologically, in a market framework, or normatively. Is there someone you can have a conversation with? Is there a company you can start or support? Is there a bill pending that you can write a letter in support of or campaign on behalf of? Is there a politician
Starting point is 00:54:00 running who has good ideas? Each one of these advances the cause. And I think the reason that we've been cynical before when we think about this stuff is we think of each one of these as an end instead of a means, right? If only we can pass the law, then we will be done. Well, no, if we can pass the law, then we will have opened up some normative space. And if we open up some normative space, maybe we can open up some market space. And if we open up some market space, maybe we'll be able to invest in some new technology and open up some new code space. And so this is my framework for action. And there are a bunch of groups that operate on these principles and try to bind together the labor of lots of people to make a difference in the world. So obviously, Electronic Frontier
Starting point is 00:54:40 Foundation is one that I work with. They don't pay me. I get paid as a research affiliate of MIT Media Lab, and they pay for the work I do at EFF. There's also the Free Software Foundation. There's also Creative Commons. There's also the Software Freedom Law Center. There's also the Software Freedom Conservancy. I mean, there are so many organizations, the Internet Archive and so on and so on, that are doing excellent work. And depending on where your proclivities lie, they either need your code, or your money, or your name, or for you to show up at a rally, or to talk to your lawmaker, or to vote and put your support behind a bill, right? And all of these axes, code, law, norms, and markets, are being deployed by these organizations all the time. And it's a matter of working with them to help them on the actions that they're doing, and then working in your own life to advance the cause where you are now. That is a great call to action.
Starting point is 00:55:29 I really like that. I personally am a member of the CCLA, the Canadian Civil Liberties Association. Yeah, they just sued the government of Canada, the province of Ontario, and the city of Toronto over Google's so-called smart city project through Sidewalk Labs. Yeah, yeah. It's important to have these organizations that can take on corporate interests, I feel like. Yeah, yeah. I mean, we broke a story on Boing Boing that Sidewalk Labs had secretly obtained approval to build their so-called prototype city on a plot of land that basically covered the entire lakefront of Toronto.
Starting point is 00:56:02 It was supposed to be one little corner of the city. And in secret, they'd gotten the whole city. And then when we broke the story, their PR person wrote us a bunch of emails that basically lied about it. And they said, oh, no, no, you're misreading the relevant documents. Finally, they admitted it. And that's part of what spurred this lawsuit. Because once you've got the whole lakefront covered by Google's actuators and sensors, under continuous surveillance, the CCLA, the Canadian Civil Liberties Association, was able to argue that this violated Canadians' rights under the Charter of Rights and Freedoms. Yeah, I didn't know about your role in this, but that is amazing. So I want to be conscious of your time. So, yeah, I think everybody should obviously check out those resources. I'll put some links. And your new book, Radicalized, which is super interesting,
Starting point is 00:56:50 kind of, it paints a very dystopian picture of some of the concepts we've talked about here. I like to think of it as a warning rather than a prediction, right? The idea here is, and not all of them are dystopian. I mean, Unauthorized Bread's got a happy ending, and so does Model Minority. Radicalized has a little more ambiguous ending. And depending on who you're rooting for in The Masque of the Red Death, you could make the case that at least someone got a happy ending out of that. But, you know, I think of them as warnings, right? As ways of, like, kind of having the thought experiment. What if we don't do anything about this? Or what if we let things go wrong? What's at stake here? Is this a fight that we should really join? Not as predictions, right?
Starting point is 00:57:28 These are not inevitable. These are only things that we can choose to allow to come to pass or that we can intervene to prevent. Yeah, they're great. I really liked Unauthorized Bread. I think that's my favorite. Oh, thank you so much. I really appreciate that. We'll take a minute for fandom. I've read, like, so many of your books. So it's quite an honor to talk to you. Thank you. The one book that stayed with me a long time was Someone Comes to Town, Someone Leaves Town. Is that what it's called? Someone Comes to Town, Someone Leaves Town. So that's from an old saying about screenwriting, that there's only two stories in the movies: someone comes to town, someone leaves town. Oh, I didn't even know that. I feel like for some reason that book stayed with me and I feel like it has an underlying, I don't know, like theme about, like, being an outsider or, like, alienated
Starting point is 00:58:09 from the world. I don't know. Is that what it's about? God, I mean, you know, that one came straight out of my subconscious. You know, to kind of recap it very briefly, it's a story about a guy who grows up in rural Ontario, where his father is a mountain and his mother is a washing machine. And his brothers are variously, like, you know, nesting Russian dolls and an immortal vampire. And one of them is an island and so on. One is a clairvoyant. And it's about how he leaves his town in rural northern Ontario, moves to Toronto, and moves into this Bohemian neighborhood called Kensington Market, where he hooks up with a dumpster diver to build a mesh wireless network through the city. And I wrote that story. I wrote the first 10,000 words while staying at a
Starting point is 00:58:50 B&B when my parents were out visiting me when I lived in Northern California. And we went out to wine country and I couldn't sleep. And the first 10,000 words just sort of barfed out of my fingertips. And then it sat on my hard drive for two years. And then I ran out of steam on another book that I was working on. I got stuck. And so I started working on that one again. And then I spent like another year and a half on that one. And then as I was getting towards the end, I got really sad because I thought that the
Starting point is 00:59:15 ending I was writing that I'd been planning on for most of the time that I'd been working on the book really wasn't going to work. And I had a different ending. And I wrote that different ending. And I thought, OK, well, now I'm going to have to go back and rewrite the whole book. Because obviously, you know, I've just grafted a different head onto this body. And it's going to need a lot of tinkering. And I turned the book over, like printed it out and turned it over and started reading page one that I'd written in that B&B at like three in the morning, and you know, six years before
Starting point is 00:59:42 whatever. And I realized that I'd foreshadowed the ending that I'd just written, you know, all those years ago, completely unconsciously. I mean, I actually finished that book while I was at a conference that I was speaking at in Arizona. And I fell asleep while writing the last paragraph and kept typing. And I woke up and there was, like, three sentences of just garbage, which I kind of shook my head at and deleted, and then wrote the actual ending that's there. And so, like, that's how kind of totally, like, out of my deep subconscious that book came, and I couldn't tell you what the hell. Well, that is, it'll leave me still wondering. Somebody on Amazon said, like, whether
Starting point is 01:00:21 you like this book or not has something to do with how your childhood went. And I feel that that's true, but I'm not clear on why. It's a super interesting book, though. Yeah, I really enjoyed writing it. And I have in the back of my mind that someday I'd like to write another book that's a little like that one. But I don't know what that book would be. Well, thank you so much for your time, Cory. It's been great. Nice talking to you. Great. Take care. Bye. That was the interview. What did you think? If your hands are free at the moment,
Starting point is 01:00:51 ping me on Twitter. Let me know that you are listening, or perhaps what you thought of the episode. This was not an episode about, like, Postgres or some programming language, but about the technological world we live in. A little bit different from a lot of the episodes I've done. So let me know your thoughts.
Starting point is 01:01:06 What did you think of it? Until next time, thanks for listening.
