a16z Podcast - a16z Podcast: Making the Case for Permissionless Innovation

Episode Date: September 17, 2015

The internet as it has evolved in the United States is perhaps the best example of “permissionless innovation” -- the idea that you can innovate without first waiting for permission or clearance. ...And so academics, entrepreneurs, and people took up the internet, developed technologies over it, and in the process created fantastically valuable companies that are now household names around the world. But such innovation hasn't happened outside the U.S., argues Adam Thierer -- research fellow with the Technology Policy Program at the Mercatus Center at George Mason University -- because other regions have reversed the model of "innovate first, regulate later" (or rather, regulate only as necessary and if not already covered by existing laws). Thierer, who has also authored a book on Permissionless Innovation, joins this segment of the a16z podcast to discuss "technopanic" cycles; emerging areas of interest; and where "best practices" help ... or hurt when it comes to soft regulation.

Transcript
Starting point is 00:00:00 Hi, everyone. Welcome to the a16z podcast. I'm Sonal, and I'm here today with Adam Thierer, who is a senior researcher at the Mercatus Center at George Mason University. He's actually been in the public policy world for over 25 years, focusing on technology and innovation and policy. That's right. Thanks. Welcome, Adam. Thanks for having me. No problem. Well, since you're here, we thought it'd be great to talk to you about some of the things that we care about. And one of those themes is actually the name of a book that you put out last year, with a second edition coming out next year, called Permissionless Innovation. The subtitle of that book, by the way, is the continuing case for comprehensive technological freedom. I want to unpack that subtitle in a minute, but first I actually want to just start by asking you, how do you define permissionless innovation?
Starting point is 00:00:48 What does that mean? So, permissionless innovation generally refers to the idea of the freedom to experiment and learn through ongoing trial-and-error experimentation. It's an openness to change and disruption and risk-taking, and even the idea of failure as part of the process of innovating. It's an idea that basically has been part of the American ethos for a long, long time, but especially with the Internet, we've embraced this general freedom to experiment and even sometimes fail, which really makes America unique in the world in this regard. We've led the way with the Internet for this reason, in my opinion.
Starting point is 00:01:23 I actually agree with that. But now I want to get to the subtitle of your book, which is the case for technological freedom. Right. The question, I think, that comes up a lot in some of the policy debates, especially when people have fears around new technologies, is, does that freedom come at a price? Because sometimes we'll toot the horn of technological freedom without thinking about safety and why some of those regulations or policies were there in the first place. I spend most of my time trying to address those very legitimate fears about technological disruption. I try to divide those dangers or disruptions or concerns into five different groups: privacy, safety, security, economic disruption, and intellectual property. In each one of these areas, you have advocates trying to come up with a more precautionary or preemptive approach to policy that would essentially say you shouldn't be allowed to innovate unless you seek out permission or someone's blessing first, and then we'll potentially give you the freedom to innovate. Again, we don't generally take that approach in the United States, but sometimes if the concern is serious enough, it's elevated to the point where we have a sort of preemptive or precautionary approach and don't allow innovation.
Starting point is 00:02:30 But the whole point of my book, and the idea of permissionless innovation more generally, is that the benefit of the doubt should be with innovation; innovation allowed should be our policy default. And until such a time as the other crowd can make a compelling case that those concerns are serious enough, that the potential risk is immediate, grave, catastrophic, or whatever else, then they can come in and intervene. But until then, no, we should allow innovation to go forward. So, just to kind of distill that argument, you're basically saying, reverse this notion of legislate first and then innovate. Let the innovation happen first and then legislate as needed. That's right. You can think about it in terms of a conflict between those who would say, well, better to be safe than sorry, versus those who would basically say, well, innovate and see what happens. Or a better way to put it,
Starting point is 00:03:18 use of traditional phrases, nothing ventured, nothing gained. So we know that every innovation entails a risk or a potential for disruption. It's been the case for every single technology going way back through history. But we gain so much in the process that we learn how to adapt to these technological changes. I've done an extensive history, for example, of the rise of the camera and how socially and economically disruptive that was in society. The idea was that people could take your photo without permission and just walk away. And, you know, previously that was unthinkable. And then all of a sudden we acclimated ourselves to that idea. And instead of panicking over cameras, we all went and bought one. Right. It became an essential part of the human experience
Starting point is 00:03:57 to have a camera. So we do find ways of changing and adapting. Sometimes though there are more legitimate harms. There are more legitimate privacy, your safety, your security, your economic concerns that require law to address. But again, we don't start with the presumption that we should have an overarching heavy-handed sledgehammer approach to trying to solve all of these problems. Because the whole point of my book is to say, if we spend all of our time living in fear of hypothetical worst-case scenarios and basing public policy upon them, then best-case scenarios can never come about. Let's talk about this a little bit more concretely.
Starting point is 00:04:30 So what are some examples where, if there had been legislation, certain strictures in place before people had the permission to innovate, it would have stopped that thing from actually coming about? Like, you mentioned the Internet earlier. Concretely, what does that mean? The Internet is probably the best example of permissionless innovation in action, in terms of what public policy can mean or incentivize in the real world. And we have a really nice way to contrast what happened here in the United States with what happened in Europe.
Starting point is 00:04:57 Europe took a very heavy-handed, top-down approach with its data directives and privacy rules that made it almost impossible for innovators to really take hold there or really get any investment. And as a result, it's hard today to even name a major innovator over in the EU that operates in this space. Actually, can we think of one? It's really hard. A lot of people, when I ask this question in public, will say, well, Skype. And I say, well, Skype's owned by an American company now. Can you give me another? And sometimes they'll say, Rovio.
Starting point is 00:05:22 And I'll say, okay, yeah, I'll give you the maker of Angry Birds. But, you know, beyond that, it is really quite difficult. Meanwhile, here in the United States, all of our companies are household names across the globe. Everybody wants companies like ours in their home countries or continents now. Well, that's a really good example of what economists and political scientists refer to as a natural real-world experiment that has played out on either side of the Atlantic over the past 20 years. That did not happen by accident. It happened because in the United States, we had a clear, bipartisan vision for the Internet,
Starting point is 00:05:54 mostly driven by the Clinton administration, which first commercialized the Internet for open development, and then secondly came out with a framework for global electronic commerce that basically said markets should lead, we should have voluntary contractual relationships, and where problems develop we'll have a simple and minimalist approach to policy that targets and addresses those problems after the fact. That was beautiful. That was the secret sauce that powered the Internet revolution. So that's true in cases where it's an entirely new technology and people didn't know what to expect. What about cases where it's not an entirely new technology, but it's a technology that affects existing laws that are already in place? And so what comes to mind, for example, is an op-ed that you and Eli and Jerry wrote for me when I was at Wired about drones and airspace as the next great platform for innovation.
Starting point is 00:06:43 And one of the arguments you made that I thought really resonated with me in particular was that it's not that you're not opposed to there being certain regulation in place, but there are already existing regulations for privacy and other things that we can use. That's right. Why would you make that argument? Like, what's the logic? So what technology innovators need to understand is that their job is going to be a lot easier if they don't have to deal with an agency with an F at the beginning of it. So whether it's the Food and Drug Administration or the Federal Communications Commission or the FAA or the F or the FAA. the Federal Aviation Administration, these are folks that already have a very restrictive approach to public policy governing new innovation. You do have to go and find someone and get
Starting point is 00:07:24 their blessing before you can innovate in those spaces. Luckily, in most of the other spaces that Internet innovators operate in, they don't have to worry about that. But of course, as Mark Andreessen has informed us, software's eating the world. And a lot of innovators are spreading out into all these new areas that are heavily regulated. Right. It's in the physical world for the first time. That's right. That's FDA, FAA, FCC. really exactly and so as that process continues they're going to have to face up to the fact that there's already a body of law they may have to contend with well what they're going to have to do is they're going to have to find a way to listen to the concerns the policymakers have this is what's so
Starting point is 00:07:57 essential that i think a lot of innovators in silicon valley and elsewhere don't understand for the most part policymakers and regulators they just want to be heard they want to have their concerns heard they want to know that you're doing something to address these things in some fashion now luckily there are ways to address these problems sometimes they're already on the books in other forms outside of regulation. So if we could relax the rules, it wouldn't mean the end of all law. We would still have contract law and property law and torts and common law solutions to problems that are created by, say, the rise of drones or other new technologies. So just because you eliminate the heavy-handed preemptive regulatory controls doesn't
Starting point is 00:08:35 mean we live in a world of anarchy. We still have rules on the books that are already in the books that already faced many other technologies and industries and govern them quite nicely. How can those innovators hear those policymakers out? Like, what advice would you give to some of those, some of our startup CEOs that want to work in these spaces? Like, how can they concretely do something to do that, address that? It's essential that technology innovators be willing to go and listen to the concerns raised by policymakers and regulators. Even if they seem outlandish, even if they're sort of engaged in a little bit of a techno panic about whatever the new technology DeJure is. The reality is you need to listen to those concerns. You need to take them seriously.
Starting point is 00:09:14 You can't say off-the-cuff remarks like, oh, your privacy's dead. Get over it. That's the worst thing you can do. You need to go and make an affirmative effort to educate them about your new tool or technology, to talk to them about its beneficial uses, and to hear them out about the problems that they fear will develop. Because some of those privacy and security and safety or economic disruptions will be very legitimate concerns. And I think it helps when you can work as well with other types of policy advocates or innovators or trade associations and try to better educate members of Congress or policymakers, but not every innovator is a member of a trade association or a group to do that. So oftentimes it's going to require you to pick up the phone or use your computer to send an
Starting point is 00:09:54 email and say, listen, we hear you, let us come in and talk to you about this and fly across the country and go visit that other coast and listen to those concerns and offer them the chance to come back to your headquarters and come on campus and learn about these technologies and get a hands-on feel for it. I saw this play out a decade ago with the panic that developed about social networking when it first came on the scenes. And I was always amazed by how many people in the policy world I was engaged in who were panicking about MySpace and Facebook when they first came about, who would never use social networking at all. And so I'd sit down with them and I'd say, Congressman or Senator, let me show you how to get on this site. So just physically being
Starting point is 00:10:31 able to make something concrete. Yeah. Let me show you how we can connect and how we can communicate and the interesting groups that we can interact with. And all of a sudden, you saw their eyes open to the reality that, hey, this is not this dangerous place that we thought it was. This is actually a welcoming environment that will have a lot of great innovation happening. So hear them out on the problems and at the same time paint a picture of the bigger opportunity. That's right. Not only that, show them the opportunity, the potential benefits, explain to them how it translates into innovation for America, competitiveness for our nation, and also that other types of benefits to consumers and maybe even jobs, show how this is homegrown innovation
Starting point is 00:11:10 that the rest of the world envies that we're doing right here on American soil, because that will really go a long way to countering a lot of those concerns when they hear a little bit of – let's face it, it's a little bit of flag waving, but it's effective and I think legitimately so. Well, I think that's right. If it's flag waving emptily, it wouldn't make much sense, but because it has actual results, people believe that. That's right. Let's quickly talk about the case of 23 and me. This is an interesting case because it's another example of what Mark and Ted here call regulatory arbitrage around innovation, or permissionless innovation, the idea that essentially you're going to see this global innovation arbitrage take place where if innovators
Starting point is 00:11:47 don't find a hospitable environment in one area, they'll quickly look to go somewhere else to find a more hospitable place to develop their technologies. That's clearly been the case with 23 and me, which of course our Food and Drug Administration famously went after the innovators there for their genetic testing service, but over in the UK, they were welcomed with open arms by the government who said, come on over here. The UK government and the Canadian government and Australian government have also done the same thing on drones. And they've done this in some countries in the UK as well on driverless cars. So in a world where technology can move in the same way that Capital did before it to wherever you find the most open, hospitable environment, you're going to expect to see more
Starting point is 00:12:29 of that in coming years. I want to caution innovators to not play that card too casually. Obviously, we want you to stay here and you should want to stay here because there are great benefits to being in the States. But make no doubt about it, if we start to give up on permissionless innovation and move towards a more precautionary principle-based approach to policy, I think innovators will be right to go to policymakers and say, hey, wait a minute, you know, we won the first round of the web wars by being more open to the idea of innovation. Don't close up shop now on us. We don't want to leave America. We want to stay on these shores. So be careful about how you play that that regulatory innovation, our global innovation arbitrage card, but do understand that it's
Starting point is 00:13:08 increasingly an effective way to wake up policymakers to the idea that there are other opportunities in this globe for innovators. So let's actually go back to something you mentioned earlier. You mentioned the phrase techno panics. And I know you've written about that. And there was a paper that just came out last week from the ITIF about the cycle of privacy technopanics. Do you want to share some of your thoughts on that? Yeah. I mean, this is an important concept. The idea. of technopanics, it evolves out of the idea of moral panics. It basically refers to this sort of negative response that sometimes accompanies the introduction of new technologies or forms of culture and our society, especially when younger members of society glom onto that technology.
Starting point is 00:13:42 We certainly saw this with the panic over social networking a decade ago, but we've seen it with many other technologies. People forget that in 2004, when Gmail was introduced, there were proposals to ban it. Oh, I don't remember that. Oh, yeah. There were bills here in California that would have banned Gmail because it was unthinkable that we would target ads based on what was in your emails. Well, now over 450 million people use Gmail as an essential resource in their lives. So that panic subsided. Likewise, we've panicked about geolocation technologies, and we're seeing a bit of a panic now about drones and about wearable tech and the Internet of Things. So each of these technologies has this moment where people are like, oh my gosh, hit that panic button. And the danger
Starting point is 00:14:20 there is that whether it's for privacy, safety, safety, or security reasons, that we whip policymakers into the state of panic and that they end up legislating preemptively in a foolish way. Now, luckily, that doesn't happen a lot if for no other reason, then we don't get a lot done in our American system of government these days, for better or worse. But what does happen is that regulators take notice, and they play on that instinct, and they'll start intervening in other more subtle ways. The name of the game now in Washington is really best practices and industry guidance and
Starting point is 00:14:51 multi-stakeholder processes, and these are things that regulators can use indirect ways of influencing innovation without ever passing a law or regulation. So that's what's happening right now is instead of actually passing the law as you're seeing a lot of these best practices frameworks. Well, some of those are actually useful, though. They are useful, and I don't want to say that they, you know, they don't play a role. But for each and every new major technology I can think of that innovators out here are working on, whether it's Internet of Things, wearable technology, immersive technology,
Starting point is 00:15:18 biometrics, sharing economy, advanced medical devices. in each and every one of these cases, we now have a government document of some sort, usually produced by the U.S. Federal Trade Commission, which is a set of industry guidance or best practices that they expect innovators to follow and to adopt so-called privacy and security by design. Again, many of these steps are entirely sensible. But what's hard for innovators to know is if, well, if we try to do things a little bit differently, are we going to be held to blame for this, or are we going to be fined or penalized?
Starting point is 00:15:51 and the answer is we don't know. I also think it would be nice if the companies took more responsibility themselves for designing those things into their systems because they know their platforms and users best. And the example that comes to mind for me as a woman on the Internet and someone who has a lot of great female friends on the Internet who dealt with trolls and various other things, I think a lot of companies could have fixed those things much sooner.
Starting point is 00:16:14 Sure. But to your point, I'm what I'm hearing you, because I'm trying to figure out putting myself personally in that and how I would feel about legislation and technology by design trying to protect certain things. To your point, you're saying don't let the agencies come up with that by design up front. The key thing I'm trying to get across here is that we don't want a one-size-fits-all solution necessary. Right. Okay. That I can agree. And what I always say in my book on Permissionless Innovation, what we want is diverse solutions for a diverse citizenry.
Starting point is 00:16:41 We all have different values and attitudes towards a lot of types of things that happen online, whether it's speech or commerce. And I'm always concerned about the overarching top-down approach that tries to pigeonhole everybody into one type of system or solution. We need a diversity of approaches. But yes, innovators should offer those solutions when possible. And hopefully we'll get more of that in coming years to solve privacy and safety and security concerns because those are what drive policy for the most part and regulation. But again, I think what policymakers want to hear is that you've at least understood or acknowledged their concerns about these things and that you are taking some steps to address them through whether it's
Starting point is 00:17:22 privacy or security by design or building better systems from the start. But I don't want to be a cookie cutter approach where we say everybody's the same and everybody has to adopt the exact same playbook. That would be a mistake and hurt innovation. I'm actually curious to hear what you think are the next technologies that we need to pay more attention to in this space where technology and policy are kind of going to clash or what are the most important things that are coming down the pike. Well, going back to Mark's theme that software is the world. We know that each of the new technologies that are out there, each of the industry sectors that innovators are operating in, is going to experience the same sort of revolution
Starting point is 00:17:55 we saw in the media and communications world thanks to the Internet. So think about how that will impact, say, the medical profession or health care technology. You're starting to see this clash of visions about how to regulate these things play out in Washington at the Food and Drug Administration, which has already looked into the question of how to regulate or if it should regulate at all, mobile medical apps on our smartphones. Where do they land on that? Luckily, they came out with a guidance document saying, you know, be careful if you're engaging in something truly dangerous or risky.
Starting point is 00:18:24 But for the most part, it took a pretty good, light-handed approach to it and saying, for the most part, you don't need to worry too much about this at this time. That doesn't mean that the FDA is not going to continue to monitor that space. There may be other spaces where the FDA gets involved. 3D printed prosthetics is in the field I've done some work on. It feels like every day you hear an amazing story about some child, a baby, a dog being rescued by a part that was printed. Absolutely.
Starting point is 00:18:48 And these are real-world examples of life enhancements, in some cases, life-saving enhancements to humans that innovation is brought about. And so the FDA understands that there's some clear trade-offs there. Plus, they also understand it's just really, really hard to regulate all those innovations happening on smartphones or in the world of 3-D printing. These are general-purpose technologies that have many, many benefits. You just can't go out and say, we're going to ban all 3-D printers or license them, or we're going to ban all smartphones. It wouldn't work. So they have to come up with other approaches, and they come up again with those sort of best practices. I think the other area that I'm watching closely is immersive technology.
Starting point is 00:19:23 What does that mean? Immersive technology can include things like virtual reality or augmented reality. There was a lot of concern in Washington following the introduction of Google Glass. Of course, Google Glass is subsided as a technology now, but that technology is still with us in other ways. And we're going to witness, I think, a lot of concern about the issues raised by a world of immersive technologies, including sort of psychological concerns about sort of getting lost in that technology or getting distracted by it. So that's another major area.
Starting point is 00:19:53 And then I wouldn't want to forget about the world of cryptography and Bitcoin. I think those technologies are, of course, very right for policy consideration right now. You see a lot of efforts, especially at the state and local level, to try to regulate Bitcoin right now, and that's really concerning. I think in terms of crypto, we're seeing one of the most amazing debates unfold yet again in my lifetime about the cryptography world. wars that took place 20 years ago. Oh, you're talking about like things originated with Kalia. Right. Not just Kalia, but the question of the clipper chip and whether or not the government should regulate encryption and hold the supposed secret key or golden key to the backdoor access to all
Starting point is 00:20:28 of our digital systems, which in my opinion is kind of crazy. I agree with you. It would lead to massively insecure systems as opposed to more secure ones. And it would lead to a lot of legitimate privacy concerns. But that is a debate that's happening right now in a very active way in Washington and with a variety of folks in the law enforcement and national security community demanding that something be done by Silicon Valley and Internet innovators to make it easier for law enforcement and national security folks to snoop. Right. Well, one thing that's interesting just to wrap up that I think is different about this particular issue, privacy and security that you've commented on, is that unlike other issues when it came to, you know, people exercising free speech on the Internet where they had the First Amendment in place, this is an issue where there isn't that framework in place. And what are the implications of that? Yeah, this is a very important point. In all of the previous internet policy battles, there was usually some sort of a speech ramification to the policy, whether it was regulating social networking to say they have mandatory age verification or whether it was trying to censor the internet to get pornography off of it. Or even whether you're accountable for someone else doing something on your site when you're a developer in a third-party ecosystem. That's right. In each of these cases, it was a speech issue primarily. And when it's a speech issue in the United States, it implicates the First Amendment to the Constitution.
Starting point is 00:21:41 And essentially the First Amendment became, if you will, a deregulatory sledgehammer that you could take agencies to court for almost anything they did with the Internet, and you could win. You could beat regulatory proposals. It's kind of handy. But in the field of privacy and security, it's a little bit about speech, but it's also about sort of pure play commerce. And so in that world, it's harder to make a First Amendment claim. It's not impossible. If someone in the States try to do something like the Europeans are doing with right to be forgotten, we could beat that back with the First Amendment in the U.S. We could say no way it doesn't jive with our First Amendment. But other types of privacy or security rules would be much, much harder to challenge on First Amendment grounds. So how do we move forward without that in place?
Starting point is 00:22:19 Well, that is a challenge. I think, again, it requires an added level of sensitivity about policy concerns governing privacy and security relative to speech issues where we've sort of beat back those battles or concerns. I think what policymakers are going to want to do is they're going to want to hear from Silicon Valley and other innovators. They're going to want to have some hands-on experience with these technologies. and just get a better feel for those benefits so that they can understand why they don't want to rush to regulate in this exciting space.
Starting point is 00:22:46 Well, I think that's all we have time for, Adam. Thanks for joining the A6 and Z podcast. Thanks so much for having me.
