Your Undivided Attention - [Unedited] A Problem Well-Stated is Half-Solved — with Daniel Schmachtenberger

Episode Date: June 25, 2021

We’ve explored many different problems on Your Undivided Attention — addiction, disinformation, polarization, climate change, and more. But what if many of these problems are actually symptoms of the same meta-problem, or meta-crisis? And what if a key leverage point for intervening in this meta-crisis is improving our collective capacity to problem-solve? Our guest Daniel Schmachtenberger guides us through his vision for a new form of global coordination to help us address our global existential challenges. Daniel is a founding member of the Consilience Project, aimed at facilitating new forms of collective intelligence and governance to strengthen open societies. He's also a friend and mentor of Tristan Harris. This insight-packed episode introduces key frames we look forward to using in future episodes. For this reason, we highly encourage you to listen to this unedited version along with the edited version. We also invite you to join Daniel and Tristan at our Podcast Club! It will be on Friday, July 9th from 2-3:30pm PDT / 5-6:30pm EDT. Check here for details.

Transcript
Starting point is 00:00:00 Hi everyone. It's Tristan, and this is your undivided attention. Up next, we have our unedited conversation with Daniel Schmachtenberger. And because it's unedited, it's longer and not corrected for fact-checking purposes, but you can find our shorter edited version wherever you found this one. Listen to both versions and then come to our podcast club with Daniel and me, and hopefully you, on July 9th. Details are in the show notes. And with that, here we go. Welcome to your undivided attention. Today, I am so honored and happy to have my friend Daniel Schmachtenberger as our guest who works on the topics of existential risk and what are the underlying drivers of all of the
Starting point is 00:00:45 major problems or many of the major problems that are really facing us today as a civilization, be it climate change, breakdown of truth, social media, our information systems. Those of you who've been following your undivided attention will hear this as a very different kind of episode. We almost think of it as a meta episode about the underlying drivers of many of the topics that we have covered on your undivided attention thus far. So if you think about the topics that we've covered, whether you've seen the social dilemma or you followed our interviews previously on topics like attention span shortening or addiction or information overwhelm and distraction, the fall of trust in society, more polarization, breakdown of truth, our inability
Starting point is 00:01:28 to solve problems like climate change, well, this is really about an interconnected set of problems and the kind of core generator functions that are leading to all of these things to happen at once. And so I really encourage you to listen to this all the way through, and I think that we're going to get into some very deep and important knowledge that will hopefully be orienting for all of us. One of my favorite quotes is by Charles Kettering, who said that a problem not fully understood is unsolvable and a problem that is fully understood is half solved. And what I hope we talk about with Daniel is what about the framework that we are using to address or try to meet the various problems that we have has been inadequate. And what is the problem solving framework
that we're going to need to deal with the existential crises that face us? So Daniel, welcome to Your Undivided Attention. Thank you, Tristan. I've been looking forward to dialoguing about these things publicly for a while. Well, you and me both. And for those who don't know, Daniel and I have been friends for a very long time, and his work has been highly influential to me and many people in my circles. So Daniel, maybe we should just start with: what is the meta-crisis? And why are these problems seemingly not getting solved, whether it's the SDGs, climate change, or anything that we really care about right now? I think a lot of people have the general sense that there is an increasing number of possibly catastrophic issues and that as new categories
Starting point is 00:03:05 of tech, tech that allows major cyber attacks on infrastructure, tech that allows weaponized drone attacks on infrastructure, biotechnologies, artificial intelligence, and moving towards AGI, that there are new catastrophic risks with all of those categories of tech and that those tech are creating larger jumps in power faster than any types of jumps of tech, including the development of the nuclear bomb in the past by many orders of magnitude. So there's a general sense that whether we're talking about future pandemic-related issues or whether we're talking about climate change or climate change as a forcing function for human migration that then causes resource wars and political instability or the
fragility of the highly interconnected globalized world where a problem in one part of the world can create supply chain issues that create problems all around the world, there's a sense that there's an increasing number of catastrophic risks and that they're increasing faster than we are solving them. And when you mention, like with the UN, while progress has been made in certain defined areas of the Sustainable Development Goals, and progress was made back when they were called the Millennium Development Goals, we're very far from anything like a comprehensive solution to any of them; we're not even on track for something that is converging towards a comprehensive solution. And if we look at kind of the core initial mandate of the United Nations in terms of recognizing, after World War II, that nation-state government alone wouldn't prevent world war, and that world war was no longer viable because the amount of technology we had made it a war that no one could win, we still haven't succeeded at nuclear disarmament. We had some very limited nuclear disarmament success while running nuclear arms races at the same time. And we went from two countries with nukes to more countries with better nukes. And simultaneous to that, every new type of tech that has emerged has created an arms race. We haven't been able to prevent any of those. And the major tragedy of the commons issues like climate change and overfishing and dead zones in the oceans and microplastics
in the oceans and biodiversity loss. We haven't been able to solve those either. And so rather than just think about this as, like, an overwhelming number of totally separate issues, the question is: why are the patterns of human behavior, as we increase our total technological capacity, increasing catastrophic risk, and why are we not solving them well? Are there underlying patterns that we could think of as, as you mentioned, generator functions of the catastrophic risk, generator functions of our inability to solve them, such that if we were to identify those and work at that level, we could solve all of the expressions or symptoms? And if we don't work at that level, we might not be able to solve any of them.
And again, people who've been thinking about this for a long time kind of notice these issues. They notice this when they try to solve one issue. Like the first one I noticed when I was a kid was trying to solve an elephant poaching issue in one particular region of Africa. It didn't address the poverty of the people that had no mechanism other than the black market of poaching, didn't address people's mindset towards animals, didn't address a macroeconomy that created poverty at scale. So when the laws were put in place and the fences were put in place to protect those elephants in that area better, the poachers moved to poaching other animals, particularly in that situation, rhinos and gorillas, that were both more endangered than the elephants had been. So you moved a problem from one area to another, and actually a more sensitive area.
And we see this with, well, can we solve hunger by bringing commercial agriculture to parts of the world that don't have it, so that the people don't either not have food or we have to ship them food? But if it's commercial agriculture based on the kind of environmentally unsustainable agricultural processes that lead to huge amounts of nitrogen runoff going into river deltas that are causing dead zones in the ocean that can actually collapse the biosphere's capacity to support life faster, then we're solving for a short-term issue that's important and driving even worse long-term issues. We see that many of the people who oppose climate change solutions in the West oppose them not because they have really deeply engaged in the underlying science and concluded that climate change isn't real. That will oftentimes be what's said, but it's because the solution itself seems like it'll cause problems in other areas that they're paying attention to that seem even more critical to them. So if the solution involves some kind of carbon tax or something that would decrease GDP for the countries that agree to it, and some other countries don't agree to it. And let's say in this particular case, the model that many people have is: Western countries agree to it, their GDP growth decreases, China doesn't agree to it. And there's already a very, very close neck-and-neck fight for who controls power in the 21st century. Are we ceding the world to Chinese control that many
people would think has fewer civil liberties and is more authoritarian in its nature? Or some people's answer to climate change is, well, we just have to use less energy. But when you understand that energy correlates directly to GDP, and when GDP goes down, it affects people in extreme poverty first and worst. And wars increase because people who have desires to get more end up going zero-sum on each other, and only when things are very positive-sum does that not happen. You see all these intricate trade-offs. So we can't say that the problem is just climate change. The problem of climate change on its own already seems like a big thing. But you've got to look at climate change plus the macroeconomic issues that would affect the poorest people and that would increase the chance of war, and the geopolitical dynamics between the West and China, whatever, and the enforcement dynamics of international agreements. When you start to recognize that the problem is that suite of things together, in a way it seems, well, that's too hard. We can't even begin to focus on it. I would say that that's actually easier, because trying to solve climate change on its own is actually impossible. Because if you're trying to solve something in a way that is going to externalize harm to some other thing, maybe you solve that one thing, but you find out that you're in a worse position overall. So I would say it's impossible to actually improve the world that way. Or half the world that is paying attention to that other thing disagrees with you so vehemently that all the energy goes into infighting, and whatever some part of the world is trying to organize to do,
the other part of the world is doing everything they can to resist it from happening. Then all the creative energy just burns up as heat and we don't actually accomplish anything. So I would say that the way we're trying to solve the problems is actually mostly impossible. It either solves it in a very narrow way while externalizing harm and causing worse problems, or makes it impossible to solve at all because it drives polarization. And so going to the level at which the problems interconnect, where that which everybody cares about is being factored in and where you're not externalizing other problems, while it seems more complex, is actually possible. Possible is easier than impossible. And so it's not just that there's a lot of issues, right? There are a lot of issues. It's that the issues are both more consequential at greater scope and moving faster than previous issues because of the nature of exponential technology. That's part of it. It's not just that the problems are all interconnected. It's also that they do have underlying drivers that have to be addressed; otherwise a symptomatic-only approach doesn't work. The first underlying driver that, when people look at it, they generally see is things like structural perverse incentive built into macroeconomics: that the elephant dead is worth more than the elephant alive, and so is the rhino. And so how do you have a situation where that's the nature of incentive, where you're incentivizing an activity and then trying to bind it or keep it from happening? And the same would be true with overfishing, as long as live fish are worth nothing and dead fish are worth more. There's something fundamentally perverse about the nature of the economic incentive. And the same is true in that when we have war and there's more military manufacturing, GDP goes up.
And when there's more addiction and people are buying the supply of their addiction, GDP goes up. And when there are more sick people paying for health care costs, GDP goes up. So it's obviously a perverse kind of metric. So anytime someone can fiscally advantage themselves, or a corporation can, in a way that either directly causes harm or indirectly externalizes harm, we have to fundamentally solve that. If there's something like $70 trillion a year of activity happening that is a decentralized system of incentive, that is incenting people to do things that are directly or indirectly causing harm, there's really nothing we can do with some billions of dollars of non-profit or state or whatever money that is going to solve that thing. So we have to say, well, what changes at the level of macroeconomics need to happen where the incentive of individuals and the incentive of corporations and the incentive of nations
is more well-aligned with the well-being and the incentive of others, and so we're less fundamentally rivalrous in the nature of our incentive. So we can see that underneath heaps of the problems, structures of macroeconomic incentive are there. That's maybe the first one that most people see. We can go deeper to see that even that is an expression, because whether it's an economic incentive for a corporation, or whether it's a power incentive, a political power incentive for a political party or for a country, they're both instantiations of rivalrous-type dynamics that end up driving arms races: because if you win at a rivalrous dynamic, the other side reverse engineers your tech, figures out how to make better versions, comes back, which creates an exponentiation in warfare, and eventually exponential warfare becomes self-terminating on a finite planet. Exponential externalities also become self-terminating. So if we want to say, what are the underlying generator functions of catastrophic risk? First, maybe just to make clear,
Starting point is 00:13:21 the catastrophic risk landscape. Is this all right if we do a brief aside on that? Yeah, let's do it. And then I think what we do, let's do that, and then let's recap just what these structures are. So people are tracking each of these components because you've already mentioned a few different things. I mean, the first thing is just many listeners might hear what you're sharing as an overwhelming set of problems. And I think it's just a recap.
It's important people understand that it's overwhelming if you're not using a problem-solving framework that allows you to see the interconnected nature of those problems. Because if you solve them with the limited tools we have now, say by pulling one lever and changing one business model of one company, or banning TikTok, then you get 20 other TikToks that come and sit in its place with the same perverse incentive of addiction, the same rivalrous dynamic competing for human attention, and we're going to end up perpetuating those problems. And so, just to sort of maybe recap some of that for listeners, and then I think maybe let you continue with the other generator functions, let's just make sure that people really get those frameworks. I think it's really important. Yeah, I mean, in the case that you and the Center for Humane Technology have brought so much attention to, with regard to the attention harvesting and directing economy, it's fair to say that it probably was not Facebook or Google's goal to create the type of effects that they had. Those were unintended externalities. They were second-order effects. But they were trying to solve problems, right? Like, let's solve the problem, from Google, of organizing the world's information and making better search. That seems like a pretty good thing to do. And let's solve the problem of making it freely available to everybody. Well, that seems like a pretty good thing to do. And with the ad model, we can make it freely available to everyone. And let's recognize that only if we get a lot of data will our machine learning get better, and so we need to actually get everybody on this thing, so we definitely have to make it free. And then we get this kind of recursive process. Well, then the nature of the ad model doing time-on-site optimization, and stuff I'm not going to get into because you've addressed it well, ends up appealing to people's existing biases rather than correcting their bias, appealing to their tribal in-group identities rather than correcting them, and appealing to limbic hijacks rather than helping people transcend them. And as a result, you end up actually breaking the social solidarity and epistemic capacity necessary for democracy. So it's like, oh, let's solve the search problem. That seems like a nice thing. The side effect is we're going to destroy democracy and open societies in the process, and all those other things. Like, those are examples of solving a problem in a way that is externalizing harm, causing other problems
that are oftentimes worse. And so we get the let's-just-focus-on-the-opportunity mindset, and typically this will get accounted for as, oh, this is just an unintended consequence. But there's some other generator functions I think we should outline. I mean, if YouTube and Google didn't personalize search results and what video to show you next, and the other guys over at TikTok did start personalizing, they're all caught in a race to the bottom of whoever personalizes more for the best limbic hijack. And so just to sort of connect some of those themes together for listeners. So you mentioned race to the bottom. And obviously, CHT has discussed this before, and this is a key piece of the game-theoretic challenge in global coordination. And the two primary ways it expresses itself is arms races and tragedies of the commons. And the tragedy of the commons scenario is: even if we don't overfish that area of virgin ocean, we can't control that someone else doesn't, because how do we do enforcement if they're also a nuclear country? That's a tricky thing, right? How do you do enforcement on nuclear-equipped countries? So us not doing it doesn't mean that the fish don't all get taken. It just means that they grow their populations and their GDP faster, which they will use rivalrously. So we might as well do it. In fact, we might as well race to do it faster than they do. Those are the tragedy of the commons type issues. The arms race version is: if we can't ensure that they don't build AI weapons or they don't build surveillance tech, and they get increased near-term power from doing so,
we just have to race to get there before them. That's the arms race type thing. It just happens to be that while that makes sense for each agent on their own in the short term, it creates global dynamics for the whole in the long term that self-terminate, because you can't run exponential externalities on a finite planet. That's the tragedy of the commons one. And you can't run exponential arms races and exponential conflict on a finite planet.
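The structure Daniel is describing here is a classic two-player commons dilemma. As a purely illustrative aside, here is a minimal Python sketch of that payoff logic; the payoff numbers are invented to show the shape of the trap, not drawn from anything in the episode.

```python
# Toy two-player commons game. Each actor chooses to "restrain" or
# "exploit" a shared resource. All payoff numbers are hypothetical.

PAYOFFS = {
    # (my_choice, their_choice): my_payoff
    ("restrain", "restrain"): 3,   # commons preserved for both
    ("restrain", "exploit"):  0,   # I hold back, they take the fish anyway
    ("exploit",  "restrain"): 5,   # I defect first and profit
    ("exploit",  "exploit"):  1,   # race to the bottom
}

for theirs in ("restrain", "exploit"):
    best = max(("restrain", "exploit"), key=lambda mine: PAYOFFS[(mine, theirs)])
    print(f"If they {theirs}, my best move is to {best}.")
# "exploit" dominates in both cases, so both sides exploit and land on
# payoff 1 each, worse for everyone than mutual restraint at 3.
```

The same dominance logic applies whether the shared resource is fish, attention, or an arms-race head start, which is why the dynamic recurs across the examples in this conversation.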
So the thing that has always made sense, which is just keep winning at the arms races, has given us a world where we've had lots of wars increasing in their scale and lots of environmental damage. We started desertification thousands of years ago. It has been a long, slow exponential curve that really started to pick up with the Industrial Revolution and is now really verticalizing with the digital revolution, and the cumulative harm of that kind of thing becomes impossible now. So basically, with the environmental destruction, with the wars, and with the kind of class subjugation things that civilization has had in the past, pretty much anyone would say we have not been the best stewards of power. And technology is increasing our power. Exponential tech means tech that makes better versions of itself, so you get an exponent on the curve. And we're now in a process where that's very, very rapid. Computation gives the ability to design better systems of computation, and computation and AI applied to biological big data and protein folding give the ability to do that on biotech, and on and on, right? So we could say the central question of our time is: if we've been poor stewards of power for a long time, and that's always caused problems, but the problems now become existential, they become catastrophic, we can't keep doing that. How do we become adequately good stewards of exponential power in time? How do we develop the good decision-making processes, the wisdom necessary to be able to be stewards of that much power? I think that's a fair way to talk about the central thing.
Now, if it's okay, the thread we were about to get to, I think, is a good one, which was the history of catastrophic risk coming up to now. Before World War II, catastrophic risk was actually a real part of people's experience. It was just always local. An individual kingdom might face existential risk in a war where they would lose, and so people faced those kinds of realities. And in fact, one thing that we can see when you read books like The Collapse of Complex Societies by Joseph Tainter, and just any study of history, is that all the great civilizations don't still exist, which means that one of the first things we can say about civilizations is that they die. They have a finite lifespan on them. One of the interesting things we can find is that they usually die from self-induced causes. They either overconsume their resources and then stop being able to meet the needs of the people through unrenewable environmental dynamics, and that's one; or they have increasing border conflicts that lead to enmity that has more arms race activity coming back at them; or they have increasing institutional decay of their internal coordination processes that leads to an inability to operate quickly, those types of things. So we can say that, fundamentally, most all civilizations collapse in a way that is based on generally self-terminating dynamics. And we see that even when they were overtaken by armies,
oftentimes those were armies that were smaller than ones they had defended against successfully at earlier peaks in their adaptive capacity. Okay, so catastrophic risk has been a real thing, it's just been local. And it wasn't until World War II that we had enough technological power that catastrophic risk became a global possibility for the first time ever. And this is a really important thing to get, because the world before World War II and the world after were so fundamentally different in kind. And this is why, when you study history, so much of what you're studying is the history of warfare, of neighboring kingdoms and neighboring empires fighting. And because the wars were fundamentally winnable, at least for some, right? They weren't winnable for all the people who died, but at least for some. And with World War II, the development of the bomb became the beginning of wars that were no longer winnable, where if we employed our full tech and continued the arms race even beyond the existing tech, it's a war where win-lose becomes omni-lose-lose at that particular level of power. And so that created the need to do something that humanity had never done, which was that the major superpowers didn't war. The whole history of the world,
the history of the thing we call civilization, they always did. And so we made an entire world system, a globalized world system, with the aim of preventing World War III. So we could have non-kinetic wars, and we did, right? Increasingly, you can see from World War II to now a movement to unconventional warfare: narrative and information warfare, economic, diplomatic warfare, those types of things. Resource warfare. And if you were going to have a physical kinetic war, it had to be a proxy war. But to have a proxy war, that also required narrative warfare to be able to create a justification for it. But also to be able to prevent the war, the post-World War II, Bretton Woods, mutually assured destruction, United Nations world was a solution to be able to steward that level of tech without destroying ourselves. And it really was a reorganization of the world. It was a whole new advent of social technologies or social systems, just like the U.S. was new social technologies or social systems coming out of the Industrial Revolution. The Industrial Revolution ended up giving rise to kind of nation-state democracies. The nuclear revolution in this way kind of gave rise to this IGO, intergovernmental world. And it was predicated on a few things. Mutually assured destruction was critical. Globalization and economic trade were critical: if the computer that we're talking on and the phone that we talk on are made over six continents and no country can make them on its own, we don't want to blow them up and ruin that infrastructure, because we depend upon it.
So let's create radical economic interdependence so we have more economic incentive to cooperate. Makes sense. And let's grow the materials economy so fast through this globalization that the world gets to be very positive in GDP and gets to be very positive-sum, so that everybody can have more without having to take each other's stuff. That was kind of the basis of that whole world system. And we can see that we've had wars, but they've been proxy wars and cold wars; they haven't been major superpower wars, and they've been unconventional ones. But we haven't had a kinetic World War III. We have had increases of prosperity of certain kinds. 75 years, give or take. Now we're at a point where that radically positive-sum economy required an exponential growth of the economy, which means of the materials economy. And it's a linear materials economy that unrenewably takes resources from the Earth faster than they can reproduce themselves and turns them into waste faster than it can be processed, and that has led to the planetary boundaries issue, where it's not just climate change or overfishing or dead zones in the ocean or microplastics or species extinction or peak phosphorus; it's a hundred things, right? There's all these planetary boundaries. So we can't keep doing an exponential linear materials economy. That thing has come to an end, because now it drives its own set of catastrophic risks. We see that the radical interconnection of the world was good in terms of we-will-not-bomb-each-other, but it also created very high fragility, because what it meant is a failure anywhere could cascade to failures everywhere because of that much dependence. So we can see with COVID, we had what was a local issue in an area of China, but because of how interconnected the world is with travel, it became a global issue at the pandemic level. And it also became an issue where, to shut down the transmission of the virus, we shut down travel, which also meant shutting down supply chains, which meant so many things, right? Very fundamental things that weren't obvious to people at first, like that countries' agriculture depends upon shipments of pesticides that they don't have stored. And so we got these swarms of locusts because of not having the pesticides, which damaged the food supply, and shipments of fertilizer and shipments of seed were disrupted too. So we end up seeing a driving of the food insecurity of extreme poverty at a scale of death threat that is larger than the COVID death threat was, as a second-order effect of our solution. We were trying to solve the problem of don't-spread-COVID, and the solution had these massive second-, third-order effects
Starting point is 00:26:55 that are still playing out, right? And that was a relatively benign pandemic, a relatively benign catastrophe compared to a lot of scenarios we can model out. So we can say, okay, well, we like the benefit of interconnectivity so we're not invested in bombing each other, but we need more anti-fragility in the system. And then the mutually assured destruction thing doesn't work anymore because we don't have two countries with one catastrophe weapon that's really, really hard to make and easy to monitor, because there's not that many places that have uranium, it's hard to enrich it, and you can monitor it by satellites. We have lots of countries with nukes, but we also have lots of new catastrophe weapons that are not hard to make, that are not easy to monitor, that don't even take nation states to make
them. So if you have many, many actors of different kinds with many different types of catastrophe weapons, how do you do mutually assured destruction? You can't do it the same way. And so what we find is that the set of solutions post-World War II that kept us from blowing ourselves up with our new power lasted for a while, but that set of solutions has ended. And they have now created their own set of new problems. So there's kind of the catastrophic risk world before World War II, the catastrophic risk world from World War II till now, and then the new thing. So the new thing says we have to have solutions that deal with the planetary boundary issues, that deal with global fragility issues, and that deal with the exponential tech issues, both in terms of the way exponential tech can be intentionally used to cause harm, i.e., exponential-tech-empowered warfare, and unintentionally, i.e., exponential-tech-empowered externalities and even just totally unanticipated types of mistakes, the Facebook, Google type problem multiplied by AGI and things like that. And so when we talk about what the catastrophic risk landscape is, like, that's the landscape. The metacrisis is: how do we solve all of that? Recognizing that our problem-solving mechanisms haven't even been able to solve the problems we've had for the last many years, let alone prevent these things. And so the central orienting question: it's like the UN has 17 Sustainable Development Goals; there's really one that must supersede them all, which is to develop the capacity for global
Starting point is 00:29:12 coordination that can solve global problems. If you get that one, you get all the other ones. If you don't get that one, you don't get any of the other ones. And so we can talk about how do we do that. But that becomes the central imperative for the world at this time. So you're saying a whole bunch of things. And one thing that comes to mind here, if I'm just reading back some of the things you've shared, the development of the, let's call it one of the first exponential technologies,
which is the nuclear bomb, led to a new social system, which was sort of the post-Bretton Woods world, of trying to stabilize that one exponential technology in the world in a way that would not be catastrophic. And even there, we weren't able to sort of make it all work. And I think people should have maybe a list of some of the other exponential technologies, because I want to make sure that that phrase is defined for listeners. And there's a lot of different ways that we've now not just created more exponential technologies, but more decentralized exponential technologies. And I think people should see Facebook and Google as exponential attention-mapping or information-driving technologies that are shaping the global information flows, or the wiring diagram of the sort of global societal brain, at scale, that are exponential. It's sort of a nuclear-scale, you know, rewiring of the human civilization. We couldn't do that with newspapers. We couldn't do that with the printing press, not at the scale, speed, et cetera, that we have now. So do you want to give maybe some more examples of exponential technologies? Because I think that's going to lead to: we're going to need new kinds of social systems to manage this different landscape of not just one exponential nuclear bomb, but a landscape. Yeah. Indulge me as I tell a story first that leads into it, because it may be a relevant framework. Obviously, the bomb was central to World War II and the world system that came afterwards and what motivated our activity getting into it. But it was not the only tech. It was one new technology that was part of a suite of new technologies that could all be developed because of kind of the level science had gotten to. And basically, like, physics and chemistry had gotten to the point that we could work on a nuclear bomb. We could start to work on computation.
we could get things like the V2 rocket, and rockets, and a whole host of applied chemistry. And one way of thinking about what World War II was, and it's not the only way of thinking about it, but it's a useful frame and I think a fair frame, is that there were a few competing social ideologies at the time: primarily kind of German fascism, national socialism, whatever you want to call it, Soviet communism, and Western liberalism, something like that. And this new suite of technologies, whoever kind of developed it and was able to implement it at scale first would win. That social ideology would win, because it's just so much more powerful. If you have nukes and they have guns, you're going to win, right? And the Germans were actually ahead of both the U.S. and the Soviets because of some things that they did to invest in education and tech development. But that led both the Soviets and the U.S. to really work to catch up as fast as they could. And when the U.S. finally figured it out, which we were actually a little bit slow to, right? Einstein actually wrote a letter, the Einstein-Szilard letter, that went to the U.S. government saying, now the science really does say that this thing could happen, and the Germans could get it, and you should focus on it. And at first, they didn't take them up on it. It wasn't until the private sector, actually non-profit support, advanced it further that the Manhattan Project was engaged in. But then it was engaged in when they recognized the seriousness, that there was an actual imminent existential risk to the nation and the whole Western ideology and whatever.
Then it was an unlimited budget, right? It was: let's find all the smartest people in the world, and let's bring them here, and let's organize however we need to to make this thing happen. And let's do it for all of the new areas of tech. We're going to get the Enigma machine and crack the Enigma code. We're going to get a V2 rocket. We're going to figure out how to reverse engineer that and advance rocketry. We're going to do everything needed to make a nuclear bomb and then more advanced ones. It was the biggest jump in technology ever in the history of the world, in recorded history as we know it. And it wasn't actually done by the market, right? It was done by the state. That's a very important thing. This idea that markets innovate and states don't innovate is just historically not true. Here, this was state funds and a state-controlled operation, in the same way that the Apollo project coming out of it was. And a technological jump of that kind hasn't happened since. So it's an important thing to understand. But we can say, though this is not a totally fair thing to say, we can say that the U.S. came out dominant in that technological race. The U.S. and the USSR both had a lot of
capacity, so that was the Cold War, and then finally the U.S. came out ahead. And so the post-World War II system was a U.S.-led system, right? The U.N. was in the U.S. The Bretton Woods system was pegged to the U.S. dollar. What I would say is that, so, it wasn't one type of tech. It was the recognition that science had gotten to a place where there was going to be a whole suite of new tech, and the new tech meant more power. And whoever had the power would determine the next phase of the world. And if we didn't like the social ideologies that were going to be guiding it (of course, we can also think of it as just who wanted to win at the game of power), but from the philosophical argument, if we didn't like the social ideologies, then we'd have another social ideology. What I would say is that there is an emerging suite of technologies now that is much more powerful in the total level of technological jump than the World War II suite was. In fact, orders of magnitude more. And only those who are developing and employing exponential tech will have much of a say in the direction of the future, because just from a realpolitik point of view, that's where the power is. And if you don't have the power, you won't be able to oppose it. And so what do we mean by exponential tech? There's a couple different ways of thinking about it. Just exponentially more powerful
Starting point is 00:35:48 is a very simplistic way. And in that definition, nuclear is exponential tech. But what we typically mean with exponential tech is tech that makes it possible to make better versions of itself, so that there is like a compounding interest kind of curve. The tech makes it easier to make a better version, which makes it easier to make a better version. And so we see that starting with computation, really, in a fundamental way, because computation allows us to advance models of computation. How do we make better computational substrates? How do we get more transistors in a chip? How do we make better arrangements of chips so we get GPUs and those types of things? And so in this new suite of technology, the center of it is computation. The very, very center of that is AI,
is kind of self-learning computation on large fields of data. The other kind of software advances, like various meaningful advances in cryptography and big data and the ability to get data from sensors and, you know, sensor processing, image recognition, all of that, are a part of that central suite. And the application of that to the directing of attention, and the directing of behavior by directing attention, which you focused on very centrally. Then the next phase is the application of the tech, the application of computation, to increasing computational substrate. So this is now the software advancing the hardware that can advance the total level of software, and you see the recursion. So that's not just continuously better chips. It's also quantum computing, photonic computing, DNA computing, those other types of things. And the other types of hardware that need to be part of that thing, i.e., sensor tech in particular, so that you can keep getting more data going into that system that can do big data machine learning on it. Then it's the application of that computation and AI specifically to physical tech: so to nanotech, material sciences, biotech, and even things like modeling how to do better nuclear. And, you know, robotics and automation. And so when you start thinking about better computational substrates, running better software, with more total data going in, with better sensors, in better robots, you start getting the sense of what that whole suite of
Starting point is 00:38:16 things looks like. So that's the suite of things that I would say is what we would kind of call exponential tech. And the reason why the term exponential is important is we don't think exponentially well. Our intuitions are bad for it because we think about how much progress was made over the last five years. We imagine there will be a similar amount over the next five years. And that's not the way exponential curves work, right? And so it's very hard for us.
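As a purely illustrative sketch of that miscalibration, the toy Python below compares the intuitive forecast, where the next five years add about as much as the last five did, against a compounding curve. The 25 percent yearly rate and the capability units are hypothetical, chosen only to show the widening gap.

```python
# Toy comparison: linear intuition vs. compounding ("exponential") tech.
# The 25%/year improvement rate and "capability" units are made up.

def linear_forecast(current, gain_per_year, years):
    # Intuition: "the next N years will add about as much as the last N did."
    return current + gain_per_year * years

def compounding_forecast(current, rate, years):
    # Exponential tech: each improvement makes the next one easier.
    return current * (1 + rate) ** years

capability = 100.0
gain_last_five_years = capability * (1.25 ** 5 - 1)  # what 25%/yr already produced

for horizon in (5, 10, 20):
    lin = linear_forecast(capability, gain_last_five_years / 5, horizon)
    exp = compounding_forecast(capability, 0.25, horizon)
    print(f"{horizon:>2} years: linear guess {lin:8.0f}, compounding {exp:8.0f}")
# Both forecasts agree at 5 years (intuition is calibrated on the past),
# but by 20 years the compounding curve is roughly 10x the linear guess.
```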
So to link this, for one much more narrow aspect, for our listeners who are familiar with social media and The Social Dilemma: you're talking about sort of self-compounding systems that improve recursively like that. If I'm TikTok, or if I'm Facebook, I use data to figure out what's the thing to show you that's going to keep you here the longest, since it's going to bypass your prefrontal cortex and go straight to your limbic system, your lizard brain. Well, the better it gets at doing that and succeeding at that, the more data it has to make a better prediction the next time. But then a new user comes along who it's never seen before. But hey, they're clicking on exactly the same pattern of anorexia videos that we've seen these other 2 million users, who turn out to be teenage girls, click on. And it just happens to know that this other set of videos, that are more extreme anorexia videos, are also going to work really, really well. So there's sort of a self-compounding loop that's learning not just from one person, getting a better sort of version of hijacking your nervous system, but learning across individuals. And so now you get a new person coming in from some developing country who's never used TikTok before, and they're just barely walking in the front door the very first time. You know, it's sort of like when Coca-Cola goes to Southeast Asia for the first time and you get diabetes 10 years later, because you refined all the techniques of marketing so effectively, but now happening at scales that are automated with computation. So what you're talking about is the impact of computation
Starting point is 00:40:16 and learning on top of learning, data on top of data, and then cross-referencing lookalike models and all of this kind of thing, you could apply to the domain, at least, that social dilemma watchers and people who are familiar with our work might be able to tie into. Yeah, the more people you have in the system and the more data per person that you're able to harvest,
Starting point is 00:40:34 the more stuff you have for the machine learning to figure out patterns on, which also means that the machine learning can provide things that the users want more, even if it's manufactured want, right, even if it's manufactured demand, which means that then more users will come and put more data in, and it can specifically figure out how to manufacture the types of behavior that increase data collection. And so you do get this recursive process on how many people, how much data, how good are the machine learning algorithms, you know, that kind of thing. And this is one of the
reasons that we see these natural monopoly formations within these categories of tech. And this is another reason it's important to understand these types of self-reinforcing dynamics: things like network effects, like Metcalfe's law, didn't exist when the Scottish Enlightenment was coming up with its ideas of capitalism and markets and healthy competition in markets and why that creates checks and balances on power. They didn't exist. Adam Smith did not get to think about those things. And so when you have a situation where the value of the network is proportional to the square of the, you know, people coming into the network, then you're incented to keep it free up front, maximize addiction, drive behavior into the system. And then once you get to the kind of breakaway point on the return of that thing, it becomes nearly impossible for anyone else to come in and overtake that thing. So you get a power law distribution in each vertical. You get one online market that is bigger than all the other online markets, one video player that's bigger than all the other video players, one search, one social network. And that's
Starting point is 00:42:11 not because of a government monopoly. That's because of this kind of natural tech monopoly. This also means that when we created the laws around monopolies, they don't apply to this thing. And yet this thing still has the same spirit of power concentration and unchecked power that our ideas of monopoly had, but it's able to grow much faster than law is able to figure out how to deal with it, or faster than economic theory can change itself, right? And so one of the things that we see is that our social technologies, like law, like governance, like economics, are actually being obsoleted by the development of totally new types of behavior and mechanics that weren't part of the world they were trying to solve problems for, right?
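A toy model of that flywheel may help make the loop concrete. Every constant below is an assumption for illustration (the square-root data-to-quality curve, the retention coefficient, the starting user counts); only the feedback structure reflects what's being described.

```python
# Toy data flywheel: users -> data -> model quality -> engagement -> users.
# All coefficients are hypothetical; only the loop structure matters.

def simulate(users, years, data_per_user=1.0):
    for _ in range(years):
        data = users * data_per_user
        quality = data ** 0.5            # diminishing returns per datum
        users *= 1 + 0.0001 * quality    # better predictions retain/attract users
    # Metcalfe-style network value scales roughly with users squared.
    return users, users ** 2

for label, start in (("incumbent", 1_000_000), ("challenger", 500_000)):
    final_users, network_value = simulate(start, years=10)
    print(f"{label}: users {final_users:,.0f}, network value ~{network_value:.2e}")
# The incumbent's 2x head start in users compounds into a much larger gap
# in network value, one reason each vertical tends toward a single winner.
```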
This also means that when we created the laws around monopolies, they don't apply to this thing. And yet this thing still has the same spirit of power concentration and unchecked power that our ideas of monopoly had, but it's able to grow much faster than law is able to figure out how to deal with it, or faster than economic theory can change itself, right? And so one of the things that we see is that our social technologies, like law, like governance, like economics, are actually being obsoleted by the development of totally new types of behavior and mechanics that weren't part of the world they were trying to solve problems for, right? And so the Scottish Enlightenment was the development of new ideas of how to solve the problems of its time. The Constitution was trying to figure out how to solve the problems of its time. I would say they were good thinking, right? They were good work. The Bretton Woods world was too. But none of them are adequate to solve these problems, because these problems are different in kind. And even where they're just an extension of magnitude, when you get enough change in magnitude, sometimes it becomes a difference in kind. Like, as you're getting more and more information to process, once you get past what humans can process, info-singularity type
Starting point is 00:43:25 issues, okay, well now it's a difference in magnitude that becomes a difference in kind, which means you need a fundamentally different approach. So I would say this is where it's important to recognize that those social technologies that we loved so much because they seemed so much better than all the other options we had at the time, like markets and like democracy. These are not terminal goods in and of themselves.
Starting point is 00:43:48 The terminal goods were things like human liberty and justice and checks and balances on power and opportunity and distribution of opportunity and things like that. These were the best social technologies possible at the time. The new technologies both kill those things. They don't work anymore, right? You can't have the social technology of the fourth estate that was necessary for a democracy,
which is why the founding fathers said things like: if I could have perfect newspapers and a broken government, or perfect government and broken newspapers, I'd take the newspapers. Because if you have an educated populace that all understands what's going on, they can make a new form of government. If you have people that have no idea what's going on, how could they possibly make good choices
Starting point is 00:44:49 ended feudalism. And so the founding fathers were employing that new tech, both because it upended the previous tech and it made this new thing possible. Same with guns. They needed guns. and Second Amendment to make this new thing possible. But once we get to an internet world where you don't have centralized broadcast,
ended feudalism. And so the founding fathers were employing that new tech, both because it upended the previous tech and because it made this new thing possible. Same with guns: they needed guns and the Second Amendment to make this new thing possible. But once we get to an internet world where you don't have centralized broadcast, you have decentralized, and then there's so much stuff that you can never possibly find it all in search, so whoever coordinates the search, the content aggregators, which is the Facebook, YouTube, whatever, are doing it with the types of business models we have, the fourth estate is just dead forever,
that old version. There's no way to recreate that version. So that either means democracy is dead forever, along with anything like a well-informed citizenry that could participate in its governance in any form, or you have to say: what is a post-internet, post-social-media, post-info-singularity fourth estate that creates an adequately educated citizenry? That's thinking about the way that our social technologies, our social systems,
Starting point is 00:45:47 have to upgrade themselves in the presence of the tech that obsoleted the way they did work. But we can also see, and we can give examples of this, how the new tech also makes possible new things that weren't possible before, so we can do something better than industrial-era democracy or industrial-era markets, which is why I say they aren't a terminal good. They're a way to deliver certain human values that really matter. And the new technology that obsoletes those can actually also be facilitative in designing systems that also serve those values.
So now, just to make sure we're linking this back to the start of this conversation. We started this conversation by saying: the way that we are going about solving problems, let's say using the legacy systems of lawmaking in a Congress, or using the legacy systems of a town hall to vote on a proposition, or trying to pass laws as fast as social media is rewiring society, the lines don't match. And so what you're saying is that, and just for listeners, because I know that you use the phrase social technology, but I think you're really sort of talking about social systems, ways of organizing, you know, democracy,
or, you know, technology in the most fundamental sense of the word: something humans designed to facilitate certain kinds of activity or outcomes. So like language is a technology, or democracy is a technology, right? So social systems, yeah. And so, on the kind of old-world approach: you know, some people might be hearing this and say to themselves, now hold on a second. We have all these institutions, we have all these structures, we live in a democracy, and we live in a system that, you know, is working the way it does. It has its courts. It has its attorneys general.
It has its litigation procedures. It has its lawmaking bodies. If you're saying that we can't use those things because they're not adequate, or they won't help us solve those problems, and we need to have new social systems, maybe you could give us some hope about why that might be feasible instead of feeling impossible, because this is actually precedented in our history: when new technologies show up, new social systems emerge to make room for those technologies functioning well.
Could you kind of briefly touch on them? I think it's important to give listeners a few concrete examples. Yeah, there's a number of kind of good academics and disciplines of academics that look at the history of evolutions in physical technology and the corresponding evolutions in thought and culture and social systems. Marvin Harris, with cultural materialism, did a kind of major opus work here, where he specifically looked at how changes in social systems and cultures followed changes in technology. There are other bodies of work that will look at the social systems as primary, or the cultures as primary, and we can say they're inter-affecting. But for instance, you know, the vast majority of human history was tribal: however much, 200,000 years of humans in these small kind of Dunbar-number villages. There were social technologies, social systems, that mediated that, having to do with how the tribal circles worked
and the nature of how resources were shared, right? It was a very different kind of economic system, a very different kind of judicial system, a different educational system. It had all those things. It had a way of education, meaning intergenerational knowledge transfer of the entire knowledge set that was needed for the tribe to continue operating. The development of certain technologies, particularly the plow, but baskets and a few other things, obsoleted that thing, because all of a sudden it made possible big amounts of surplus that made reason for much larger populations to emerge. Those larger populations were going to win in conflict against the smaller populations. And so you can see then the emergence of new social technology to facilitate large groups of people: empire-type civilization technology emerged. You can see that there were a few other shifts in technology that evolved the types of empires that were there. And then, you know, the next one that people talk about a lot is the Industrial Revolution, from the printing press specifically and then the steam engine; the gunpowder revolution was part of it.
That kind of ended feudalism and began the nation-state world. And so you can see, like, what is the thing that the founding fathers in the U.S. were doing? Well, they weren't trying to keep winning at feudalism, right? There was a game that had been happening for a long time. And they were saying, like, no, we're all people who are of the type of people who could do well at that system. And rather than do that, we recognize that there are fundamentally things wrong with this system, and fundamentally new possibilities that hadn't been previously recognized. So we're going to actually try to design a fundamentally different system, one that we think is a more perfect union, that makes life, liberty, and the pursuit of happiness better for everybody and increases productive capacity and things like that. So that was fundamentally an advance in social technology or social systems that both utilized new physical technology and was enabled by it. In the current situation, there are groups that are advancing the exponential technologies, and that means whatever social systems they're employing are the social systems of the future if we don't change it. And that's what I want to get to in a moment. But who is working to implement any of the new emerging tech for better social systems that are aligned with the social systems we want? You've had Audrey Tang on the show. Do you want to just briefly
Starting point is 00:52:13 at least the one that we want, that holds the values that the society wants. So I think a lot of people can hear our conversation; we've had this riff before, actually, following our last episode after my Senate testimony, speaking about a frame that you have offered and know well, which is that we can notice that digital authoritarian societies right now, like China, are consciously using exponential technologies to make stronger, more effective digital closed and authoritarian societies. And in contrast, digital open societies, democracies like the United States, are not consciously using technology
Starting point is 00:52:49 to make stronger, healthier, open societies. Instead, they've sort of surrendered what they are to private technology companies, multinational corporations pursuing the self-interest of shareholders, which are profiting from the degradation and dysfunction of democracies. And so when we say all this, and we talk about how do we build
Starting point is 00:53:10 the kind of next social system, as with Audrey Tang and her work, I think people get tripped up in thinking that what we really mean is we have to make some kind of 21st-century digital democracy. In fact, I've probably said those words. But what we're really talking about here is some new concept that preserves the principles of what we meant by a democracy, but instantiated with the new technologies, you know, our version of the new printing press, which is networked information environments and all of the new capacities that we have in a 21st century with, you know,
Starting point is 00:53:41 mobility, where everyone's connected to everywhere and everything all at once. So what is that system, that new social system that leverages the current technology and makes a stronger, healthier, open society? And I think Audrey Tang's work, I mean, I would probably send listeners back to listen to that episode. I think it's one of our most listened to and most popular episodes for a reason, because in Taiwan, she's essentially built an entire civic technology ecosystem in which people are really participating in the governance of their society. Oh, we need masks. We need better air quality sensors. We need to fix these potholes. There are processes by which every time you're frustrated by something, you actually get invited into a civic design process,
Starting point is 00:54:19 where, whether it's the potholes or the masks, you can actually participate in having a better system. You're complaining about the tax system and filing your taxes, and maybe it's an inefficient form or something like that. You get brought into a design process of what would make it better. And so the system is participatory, but not in that kind of 18th-century way of, hey, there's a physical wooden town hall, and we're going to walk into it, and we're going to hang out there for three hours, and I'm going to yell and scream about issues that are more local, within, you know, 10, 15 miles of where we are, because we were existing in a world before automobiles. We're now talking about how do you do an open-society social system, but in a world
Starting point is 00:54:53 with all of the new technologies that are not just here today, but emerging? And so do you want to talk a little bit about the principles, about how we will even navigate that challenge? And why is some new social system like that necessary for dealing with these problems that you've sort of laid out at the beginning? Because I'm sure people would like to feel less anxiety about those things hanging around for longer. Yeah, I think what Taiwan has been doing, and what Audrey Tang in the digital ministry position in particular has been leading, is probably the best example, certainly one of the best examples in the world, of this kind of process and thinking. And does it apply, or could it apply
Starting point is 00:55:37 in the exact same way to the U.S.? No, of course not. Like, we know that because of the relatively small geography and high-speed train transportation, you can get across Taiwan in an hour and a half. And so when you're mentioning the small scale of local government at the beginning of the U.S., where you come to the town hall, in a way they have that, right? Like, it's 23 million people, but there is an older shared culture. There also happens to be an existential threat just right off their border that is big enough that they can't just chill and not focus on it. Everyone has to be civically engaged, with some civic identity and things like that. They didn't start making their culture in the industrial era and then have to
Starting point is 00:56:21 upgrade it, right? Like, they started later, where they're able to start at a higher level of the tech stack. So there's a number of reasons why it's different. So we're not going to naively say that what you do in a small country that is culturally and ethnically homogeneous and has a higher GDP and education per capita and whatever is the same thing you would do. But we can certainly take a lot of the examples and say, how would they apply differently in different contexts? So, the thing we said earlier is that this suite of exponential technologies is so much more powerful than all of the previous types of power that only those who are developing and deploying them will be really steering the direction of the future.
Starting point is 00:57:14 And that there are ways of employing them that do cause catastrophic risk. And the catastrophic risk is of two primary kinds, right? Conflict-theory mediated, where you just can't do warfare with this level of technology and this interconnected world and make it through well. Not all catastrophic risk means existential. It doesn't all mean nuclear war and nuclear winter and we've killed all the mammals on Earth. It might just mean we break global supply chains, kill lots of people, and regress humanity and the quality of the biosphere pretty significantly. So I'm not just focused on existential risk. I'm interested in catastrophic risk at scale in general. And we can see exponential tech applied both
Starting point is 00:58:08 in conflict theory and in mistake theory, right, as externalities and cumulative effects. Could you define conflict theory and mistake theory for people who are not familiar with those terms? Yeah, there's a very nice discussion on the LessWrong forum if people are interested to go deeper. And it's this question of how much of the problems in the world are the result of conflict theory versus mistake theory. Conflict theory meaning we either wanted to cause that problem, that harm to whomever, as in we knowingly wanted to win at a war; or at least we knew we were going to cause that problem and didn't care, because it was attached to something we wanted, right? That's conflict theory. Or mistake theory: we didn't want to cause it, we really didn't know, and it was just an unintended, unanticipatable consequence.
Starting point is 00:59:01 And it's fair to say that there's both, right? There's plenty of both. One thing that is worth knowing is that if I'm trying to do something that is actually motivated by conflict theory, it benefits me to pretend that it was mistake theory; it benefits me to pretend that I had no idea, and then afterwards say, oh, it was an unintended, unanticipatable consequence, it was too complex, people can't predict stuff like that. And so the reality of mistake theory ends up being a source of plausible deniability for conflict theory. But they're both real, and we have to overcome both, meaning we have to have choice-making processes in our new system of coordination. And like, this sounds like maybe hippie stuff until you take
Starting point is 00:59:49 seriously the change of context. Oh, we have to have processes of choice-making that consider the whole? That sounds like unrealizable hippie stuff, until you realize: but we're making choices that affect the whole, at a level that can even individually be catastrophic and is definitely catastrophic cumulatively. So if we aren't factoring that in, then the human experiment self-terminates. And maybe that's the answer to the great filter hypothesis, right? Well, yeah, and I think people don't have an intuitive grasp of what it means that each of us is walking around with the power of gods to influence huge, enormous consequences. I mean, I could give a few examples. Every time you interact with the global supply chain and hit buy on Amazon,
Starting point is 01:00:37 you invisibly enacted, you know, shipping and planes and petroleum and wars in the Middle East; there's a whole bunch of things that you're sort of tied into. When you are posting something on social media and have more than a million followers, you're influencing a global information ecology. And if you're angry and biased about one side or the other of whether the pandemic is real or not real, or something like that, you're externalizing more bias into the commons of how the rest of the world understands things. So we're walking around with increasing power, but I don't think the increasing power that we've been granted is intuitive for some folks. Could you explain some more examples of that? There's both cumulative, long-term effect and fairly
Starting point is 01:01:18 singular short-term effect. Cumulative long-term, I mean, you go back to early U.S. settlers coming into the U.S., moving west, and there being buffalo everywhere. And there had been buffalo everywhere for a very long time, and then there's no buffalo, and whole areas that were forested with old-growth forest became deforested. And it was like, no, it's impossible, we could never get rid of all the buffalo, we could never cut down all the trees. But the cumulative effect of lots of people thinking that way, where individually I have no incentive to leave the buffalo alive, and I do have an incentive for my family individually to kill it, but everybody's thinking that way, and increasing our desire for how much we consume per capita, our technology that allows us to
Starting point is 01:02:03 consume more per capita, developing more capital, more total people, well, then you start getting environmental destruction and species extinction at scale. And that's a long time ago, right? Like, that's much lower tech and far fewer people. And it's distributed action; it's a cumulative-effect issue. And obviously, we see that with nobody intending to fill the ocean with microplastics, but everybody buying shit that is filling the oceans with microplastics. And so everyone is participating with systems where the system as a whole is sociopathic. The system is self-terminating. The system doesn't exist without all the agents interacting with it. All the agents feel like their behavior is so small, and that justifies everybody
Starting point is 01:02:49 doing that thing, right? So that's what we mean by cumulative kind of catastrophic risk. But it's also true that whoever made that thermite bomb and hooked it to a drone and hit the Ukrainian munitions factory a couple years ago, which caused a billion dollars in damage, exploded the munitions factory with the effect of a bomb as big as the largest non-nuclear bomb the U.S. arsenal has, an incendiary bomb. That was a homemade little bomb and a drone, right? And CRISPR gene drives are cheap and easy, and it doesn't take that much advanced knowledge to start working with them. And so that starts to look like individuals and small groups with real catastrophic ability, not long-term and cumulative. The increase in our tech gives us both issues: via globalization and the overall system, you get these cumulative long-term effects, and with the exponential tech, you get decentralized catastrophic capabilities.
Starting point is 01:03:54 One of the core questions we have to answer is: how do we make a world that is anti-fragile in the presence of those kinds of catastrophic capabilities that are easy to produce and thus decentralizable? And so how do we do that? What are the social systems that we need to employ to bind some of these bad effects, in ways where the natural inclinations of self-interested actors will drive things in that direction?
Starting point is 01:04:21 Just to link this to the social media space for people: if I know that I can get a little bit more attention and a little bit more likes and clicks and follows and shares and so on if I exaggerate the truth by 5%, just use a little bit more of an extreme adjective, you know, I know that in the long run it would be bad if everybody did that. But for me right now, I can win a few hits and I can get more influence, and I'm an Instagram influencer and I'm making $10,000 a month. And if I don't do it, I'm noticing everyone else does it. And if I don't use the filter, everyone else is using the filter. So everyone ends up in another sort of race-to-the-bottom situation that has that kind of cumulative degradation or cumulative derangement, where there's increasing distance between what is true and what people believe, because we've all been subtly exaggerating to make our point and gain influence and so on.
Starting point is 01:05:08 And so just to give another example, maybe, for listeners in kind of the space that they're more familiar with. But going back, I mean, the whole premise of this is: as we gain more exponential technologies, that puts more capacity in more hands. So instead of having just the U.S. and Russia having this, you have, whether you mentioned CRISPR gene drives or some of the drone things that are out there, more and more people have access to these things. How can we bind those kinds of forces?
Starting point is 01:05:37 And what are the social systems that we need to make that happen? Yeah, I want to go back. As you were describing this, I was thinking about how many people who listen to your show maybe work in technology, who work in technology because they see the positive things technology can do, and have more of a kind of techno-optimist point of view, and this overall conversation might sound very techno-pessimist,
Starting point is 01:06:01 and like, did we not read Pinker and watch Hans Rosling and, you know, those types of things? And so I want to speak to that briefly. First, this is a meta point, but it's worth saying right now, particularly on this podcast and in the kind of post-truth, post-fact or fake-fact world, where so much of the emphasis has gone into: we need fact-checkers and we need real facts. Obviously, it's possible to have an epistemic error, or even intentional error, in the process of generating a fact. Is there corruption in the institutions, and that kind of thing?
Starting point is 01:06:44 But let's even say that wasn't an issue, and the things that go through the right epistemic process as facts are facts. Can you lie with facts? Totally. Can you mislead with facts? Yeah, because nobody's going to make their choice on one fact. They make their choice based on a situational assessment, based on a narrative, based on a gestalt of a whole thing that's lots of different facts. Well, which facts do I include and which facts do I not include? And am I decontextualizing the fact? So: the quality of life has gone up so much, because the average person lived on less than a dollar a day in the U.S. in 1815, and now they live on this many dollars a day, which, inflation-adjusted, means higher quality of life.
Starting point is 01:07:21 Yeah, but in 1815, most of their needs didn't come through dollars. They grew their own vegetables, they hunted. So I'm decontextualizing the facts to compare something that's really apples and oranges. So even if the fact is quote unquote true, the decontextualization and recontextualization makes it seem like it means something different than it means. And the same with the cherry picking of facts. And I can very easily say, oh, there's a lower percentage of people in extreme poverty, but I might also be changing the definitions of extreme poverty.
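To make the percentage-versus-absolute framing concrete, here is a toy calculation. Every number in it is an illustrative, rounded assumption rather than a sourced statistic; the point is only the mechanism, that a falling share and a rising absolute count can both be true of the same world, so both narratives pass a fact-check.

```python
# Toy illustration of percentage-vs-absolute framing.
# All numbers are illustrative, rounded assumptions, not sourced statistics.

pop_1800 = 1.0e9          # assumed world population around 1800
pop_now = 8.0e9           # assumed world population today
share_poor_1800 = 0.80    # assumed share below some poverty line then
share_poor_now = 0.45     # assumed share below the SAME line now

poor_1800 = pop_1800 * share_poor_1800   # 0.8 billion people
poor_now = pop_now * share_poor_now      # 3.6 billion people

print(f"Share in poverty fell:  {share_poor_1800:.0%} -> {share_poor_now:.0%}")
print(f"Absolute count rose:    {poor_1800 / 1e9:.1f}B -> {poor_now / 1e9:.1f}B")
print(f"More poor people now than all people in 1800: {poor_now > pop_1800}")
```

Under these assumptions, "poverty has plummeted" and "more people are in poverty than ever existed in 1800" are both technically true statements about the same data; which one gets told is the framing choice.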
Starting point is 01:07:48 So there's the ability to decontextualize and recontextualize facts. There's the ability to cherry-pick facts, and there's the ability to Lakoff-frame facts and put particular kinds of sentiment and moral valence on them.
Starting point is 01:08:10 And so, am I talking about them as illegal aliens or undocumented workers? I get a very different kind of sentiment. Or talking about it as a pre-owned car or a used car: everyone loves a pre-owned car, no one wants a used car. And so these very simple semantic frames, contextual frames, cherry-picking of the things, mean that I can make a narrative where all the facts went through the most rigorous fact-checker, and yet the narrative as a whole is misleading. And so fact-checking is necessary, but it is not sufficient for good epistemics and good sense-making. And not only is it not sufficient, it's even weaponizable. This is a very important thing to understand, because if you are not pursuing that, if you're not recognizing that, you might be believing nonsense, thinking that you're using epistemic rigor. Okay, so the techno-pessimists and the techno-optimists both cherry-pick, and they both Lakoff-frame, and this is true with the difference in almost every political ideology, the woke and the anti-woke, the pro-socialist, the pro-capitalist. You'll notice the way they do their arguments: the systemic racism is really, really terrible; no, the systemic racism is not that bad. They both have stats. But this is actually something you could almost think of as statistical warfare as a tool of narrative warfare. And so this is where a higher level of earnestness, rather than a particular vested interest or body,
Starting point is 01:09:44 a higher willingness to look at bias, a higher level of rigor, you know, ends up being critical to actually overcoming any of these things. So, can I cherry-pick stats that make it look like everything is getting better? Totally. Those things are true. And nobody wants to go back to a world before Novocaine when you have to do dentistry, and nobody wants to go back to a world before penicillin when basic bacterial infections go around. Like, there's totally good stuff that has emerged. And are there all kinds of ubiquitous mental illnesses and chronic complex disease that didn't exist before, and an increase in the total number of addictive-type behaviors within populations, and a radical increase in the catastrophic risk landscape, and negative effects
Starting point is 01:10:34 on environmental metrics? So things are getting better and things are getting worse at the same time, and it's important to understand that, depending upon what you're looking at. It's just that the things that are getting worse are heading towards tipping points that make the whole thing no longer viable. So we're not denying that there are things that are getting better. We're saying that for the game to continue at all, right, to have it be an infinite game that gets to keep continuing, there are certain things that have to not happen. And you can't have the things that are getting worse keep getting worse at the curve that they are, and have the things that are getting better be able to continue at all.
Starting point is 01:11:04 So I just want to say that. Naive techno-optimism can actually make you a part of the problem, because then you do things like develop a solution to a narrowly defined problem and externalize harm to other areas, because you weren't taking seriously enough not doing that. But techno-pessimism also makes you a part of the problem, or at least not a part of the solution, because the future is not going to be determined by Luddites. It's not going to be determined by people who aren't developing the tools of power. So if you aren't actually looking at how we develop a high-tech world that is also fundamentally desirable, a high-nature and high-touch world, then you really aren't thinking about it in a way that ends up mattering. And so we are techno-optimists, but not naive techno-optimists. We go through the totally cynical phase of, man, tech is a serious issue, and then
Starting point is 01:11:56 you go to a post-cynical phase of: if I want to be a techno-optimist and not be silly, what does it take to imagine a world where humans have that much power and we are good stewards of it? Meaning that we actually tend to each other well, and we don't create a dystopic world that has exponential wealth inequality and an underclass that nobody in the upper class would want to trade places with, and that doesn't cause catastrophic risk. Right now, the amount of power of exponential tech makes two attractors most likely: catastrophic risk of some kind, or social systems that do not preserve the values that we care most about, the systems of those who are currently working hardest to develop and deploy that technology.
Starting point is 01:12:41 And just to give a very brief recap of the frame that, Tristan, you gave on it earlier: as you mentioned, China's not leaving 100% of its technology development to the market to develop however it wants, even if it harms the nation state. They are happy to bind technology companies that are getting too large in ways that would damage the nation state, as we saw with Ant Group, and they are doing a lot of very centralized innovation as well, associated with long-term planning. Long-term planning is a key thing. In the U.S., term limits make long-term planning very hard, as does a highly rivalrous two-party system that is willing to
Starting point is 01:13:24 damage the nation as a whole to drive party wins. So in that system, almost all the energy just goes into trying to win, right? You spend at least a couple years, and even the years before that, fundraising and creating alliances to just try to win. Then you're not going to invest heavily in anything that has return times longer than four years, because it won't get you reelected. So no real long-term planning. And then whatever you do do in those four years will get undone systematically in the next four years, for the most part. That system of governance will just fail comprehensively in relationship to a system that doesn't have that much internal infighting and that has
Starting point is 01:14:11 the capacity to do long-term planning. And there's a million examples we can look at, but just: when did high-speed trains start? We saw them emerge in Europe, we saw them emerge in Japan, and in China. We've seen China now start to export them all around the world, and the U.S. still doesn't have any high-speed trains. And it's like, what happened? Why? And we can see that the U.S. innovated in fundamental tech from the Manhattan Project kind of through the Apollo Project, but then it started to privatize almost everything to the market. The market started to develop it in ways that really were not advancing the technology in a way that increased the coherence of the nation and the fundamental civic values and ideas
Starting point is 01:14:48 of the nation. Even with the World War II thing, we can see we increased our military capacities radically, but that didn't mean we actually advanced the ideas of democracy or those values. Did we make a better system to educate the people and inform them and help them participate in their governance? Did we make better governance? This is why the U.S. military is so powerful but the U.S. government is so kind of inept, which is why nobody wants to fight a war with the U.S., a kinetic war, but it's very easy right now to engage in supporting narrative warfare, where you turn the left and the right against each other increasingly, and where you do long-term planning where the U.S. can't do long-term planning of those kinds.
Starting point is 01:15:29 And so we can see that the government of the U.S., and not just the U.S., but open societies generally, are not innovating in how to be better, more effective open societies, for the most part; they're not using the new tech to make better open societies. That's happening in the market sector. The market is making exponentially more powerful companies.
Starting point is 01:15:51 A company is not a democracy. It's not a participatory governance structure in general. It's a kind of very top-down, autocratic-type system. And so we see that there are more authoritarian nation states that are intentionally doing long-term planning of the development and deployment of exponential tech to make better nation states of that kind. And we can't even blame them when they look at... I mean, China had the benefit of getting to see both where the USA failed
Starting point is 01:16:19 and where the USSR failed, and try to make something that didn't fail in either of those ways. And there's some things that are very smart about those approaches. So we see, though, exponentially empowered, more autocratic-type structures, and the emergence of one natural monopoly per tech sector, and then the interaction of those kind of becomes like oligarchic feudalism, tech feudalism. Neither of those have the types of jurisprudence or public accountability or whatever that we're really interested in. So the two attractors right now are the emergence of social systems that are deploying the exponential tech that will probably not preserve the social values we're interested in and not be maximally desirable civilizations, probably pretty dystopic ones, or that aren't even guiding it well enough to prevent catastrophic risk. Those are the two major types of attractors. We want a new attractor, which is: how do we utilize the new exponential technologies, the whole suite of them, to build new systems of collective intelligence,
Starting point is 01:17:27 new better systems of social technology. How do you make a fourth estate that can really adequately educate everyone in a post-Facebook world? Well, the same way that we're trying to optimize control patterns of human behavior for market purposes to get them to buy certain things and to direct their attention, could that be used educationally? Of course it could, if it was being developed for that purpose. And the AI tech that can take a bunch of faces and make a new face that is merged out of those, could it take semantic fields of people's propositions and values
Starting point is 01:18:02 and create a proposition that is kind of the semantic center of the space? And then, we can't all fit into a town hall, but can we engage in digital spaces where we can have better processes of proposing refinements to the propositions? Of course we can. Could we use blockchain and other types of uncorruptible ledgers to solve corruption, which is something that universally everybody thinks is a good idea? Should all government money be on a blockchain, the movement of it, so you have
Starting point is 01:18:30 provenance, so you can see where the money is actually going? And if someone wants to be a private contractor, they have to agree that the accounting system, if they want government money, goes on the blockchain, so we can see the entire provenance of the taxpayer money. You can't have representation if there isn't transparency of how it happens. So there's a whole bunch here: when you start to think about attention-directing technology and what its pedagogical applications could be, when you start to think about AI and how it could actually help proposition development and parsing huge amounts of information to make a better epistemic commons, when you start to think about blockchain and whether we could actually resolve corruption using uncorruptible ledgers
Starting point is 01:19:09 and making the provenance of physical supply chains and information and money all flow across those, totally new possibilities start to emerge that never emerged before, that were never possible before. But it has to become our central design imperative to develop those, because those are not the highest market opportunities right now. The highest market opportunity for blockchain is speculative tokens that have no real utility; for AI, it's things that actually drive ads and purchasing, and, you know, on and on; and for attention tech, it is the same thing. So you've sold me on the idea that we have two dystopian attractors that we don't want, and the third attractor that we're trying to develop here is some kind of open society
Starting point is 01:19:54 that is consciously using all the modern technologies towards the values that we care about. Can you give some concrete examples of what it would look like to use, you know, AI and attention-driving tech and click-driving tech and blockchains and all these things, but in a way that would make a stronger, healthier open society? Yeah, totally. So let's say we take the attention tech that you've looked at so much, which, when it is applied for a commercial application, is seeking to gather data to both maximize time on site and maximize engagement with certain kinds of ads and whatever. That's obviously the ability to direct human behavior and direct human feeling and thought, in a way that both emerged out of capitalism and has become almost a new macroeconomic structure more powerful than capitalism, because even more powerful than being able to incent people's
Starting point is 01:20:56 behavior with money is being able to direct what they think and feel, to where the thing that they think of as their own intrinsic motive has actually been influenced or captured. So if we wanted to apply that type of technology, and we figured out how to make the kind of transparency that made institutions trustworthy enough that we could trust them with it (and already we have institutions that have it that we have no basis to trust with it), could that same tech be used educationally, to be able to personalize education to the learning style of a kid, or to an adult, to their particular areas of interest,
Starting point is 01:21:38 and to be able to not use the ability to control them for game theoretic purposes, but use the ability to influence them to even help them learn what makes their own center,
Starting point is 01:21:57 their locus of action more internalized, right? We could teach people with that kind of tech how to notice their own bias, how to notice their own emotional behaviors, how to notice groupthink-type dynamics, how to understand propaganda and media literacy. So could we actually use those tools to increase people's immune system against bad actors' use of those tools? Totally. Could we use them pedagogically in general to be able to identify,
Starting point is 01:22:23 rather than manufacturing desires in people or appealing to the lowest angels of their nature because addiction is profitable? Can you appeal to the highest angels in people's nature, but that are aligned with intrinsic incentives and be able to create customized educational programs that are based on what each person is actually innately, intrinsically motivated by, but that are their higher innate motivators. Everybody can have a reward circuit that is based on, you know,
Starting point is 01:22:51 chocolate cake and sloth, but the immediate spike that comes from the chocolate cake ends up then having a crash and increased weight and inflammation and whatever where the baseline of their happiness goes down over time. even though every time they eat the chocolate cake, they get a spike. The exercise reward circuit is maybe not that fun, maybe even kind of painful and dreadful in the moment, but then creates a higher baseline of energy and capacity and endurance and self-esteem,
Starting point is 01:23:19 and you start to actually have the process become more fun. You get a new reward circuit and the baseline goes up. So, of course, I can appeal to the lower reward circuit and say, hey, I'm just giving people what they want. Yeah, but if you have a billion dollar or a trillion dollar organization, that is preying upon the, and you discuss this very well all the time, the vulnerabilities that make people's life worse, to then have the plausible deniability to say,
Starting point is 01:23:44 yeah, but they wanted it. Yeah, but it was a manufactured demand and a vulnerability. Where's the noblesse oblige? Where's the obligation of having that much power to actually be a good steward of power, a steward of that for other people, where, if there are reward circuits that decrease the quality of their life and reward circuits that increase it,
Starting point is 01:24:01 we're trying to appeal to one rather than the other? Could we do that? Yeah, totally we could. Could we have an education system as a result that was identifying innate aptitudes, innate interests of everyone and facilitating their development? So not only did they become good at something, but they became increasingly more intrinsically motivated, fascinated, and passionate about life, which also meant continuously better at the thing. Well, in a world of increasing technological automation coming up, both robotic and AI automation, where so many of the jobs are about to be obsoleted, our economy and our education system have to radically change to deal with that. Because one of the core things an economy has been trying to do forever was
Starting point is 01:24:48 deal with the need that a society had for a labor force. There were these jobs that society needed to get done that nobody would really want to do, so either the state has to force them to do it, or you have to make it where the people also need the job, so there's a symmetry, and kind of the market forces them to do it. Well, when you technologically automate those jobs, it happens to be that the things that are the most rote are the least fun for people and the easiest to program machines to do. And so if you keep the same economy, where if people don't produce, they don't have any basic needs met, then people want those crappy jobs, right? But if you make it to where they have other opportunities, then of course having those jobs be automated is
Starting point is 01:25:34 fine. But what does it mean to really be able to have other, better opportunities? So if one of the fundamental axioms of all of our economic theories is that we need to figure out how to incent a labor force to do things that nobody wants to do, and emerging technological automation starts to debase that, that means we have to rethink economics from scratch, because we don't have to do that thing anymore. So maybe, if now the jobs don't need the people, can we remake a new economic system where the people don't need the jobs? Can we start to create Commonwealth resources that everyone has access to, where people's access isn't based on possession that automatically limits everyone else's access? If you get around, transportation-wise, with a car based on owning that
Starting point is 01:26:18 car, where for the vast majority of the life of the car it's just sitting, not being used, for you to have access to the car, you have to have possession of it, which means that it's a mostly underutilized asset, and I don't have access to the thing that you possess. Now, what we see with Uber, of course, is a situation where your access is not mediated by your possession. So now turn that into electric self-driving cars, and now put the entire thing on a blockchain, so you disintermediate even the central business and make it a Commonwealth resource. And everyone has access to transportation as a Commonwealth resource. It'll take a twentieth of the number of cars to meet the same level of convenience during peak demand time.
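For what it's worth, here is a back-of-envelope sketch of that one-twentieth claim. All three parameters are illustrative assumptions, not measurements: if roughly 10% of owned cars would be wanted in the same peak hour, and a shared car can serve about two trips per hour, the arithmetic lands on roughly one car in twenty.

```python
# Back-of-envelope: how many shared cars cover the same peak demand?
# All parameters are illustrative assumptions, not measured data.

owned_cars = 1_000_000      # e.g., one car per household in some region
peak_share = 0.10           # assume 10% of those cars are wanted in the worst peak hour
trips_per_car_hour = 2      # assume a shared car can serve ~2 trips per hour

peak_trips = owned_cars * peak_share            # 100,000 simultaneous trip demands
fleet_needed = peak_trips / trips_per_car_hour  # 50,000 shared cars

print(f"Owned fleet:  {owned_cars:,} cars")
print(f"Shared fleet: {fleet_needed:,.0f} cars for the same peak service")
print(f"Ratio: 1/{owned_cars / fleet_needed:.0f} of the cars")
```

Change the assumed peak overlap or trips-per-hour and the ratio moves accordingly; the structural point is just that ownership forces fleet size to track the number of owners, while shared access lets it track peak demand.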
Starting point is 01:26:55 So much less environmental harm. It'll actually be more convenient, because I don't have to be engaged in driving the thing, and there's less traffic because of coordination, and better maintenance, and there isn't an incentive for designed obsolescence in that system. You can see a situation where, okay, can we make it to where the wealth-augmenting capacity of that technological automation goes back into a commonwealth, because we don't have to have the same axioms of needing to incent the people. Oh yeah, but if you don't incent the people, they'll all be lazy welfare people.
Starting point is 01:27:25 Nonsense. Einstein didn't do what he did based on economic incentive. And neither did Mozart and neither did Gandhi. And none of the people that we are most inspired by through history were doing that. And what kids will spend so much time doing where they ask questions about why this, why this, why this, and building forts and whatever is intrinsic motive. It's just we don't facilitate the things that they're interested in. We try to force them to be interested in things they aren't interested in. That's what ends up breaking their interest in life.
Starting point is 01:27:58 And then they just want hypernormal stimuli and play video games, whatever. What if you had a system that was facilitating their interests the entire time? Now you have a situation where you can start to decrease the total amount of extrinsic incentive in the system as a whole. Use the technological automation to decrease the need for extrinsic incentive, and make an educational system and culture that's about optimizing intrinsic incentive. Because if my needs are already met, and everybody's needs are met through access to Commonwealth resources, there's no real status conferred by getting stuff; there's only status conferred by what I create. So now any status is bound to a kind of
Starting point is 01:28:35 creative imperative. That's an example. We can look at blockchain tech, even more near-term, but just to come back to this technological automation thing: so obviously it makes possible changing economics and changing education, but also, what is the role of humans in a post-AI, robotic-automation world? Because that is coming very, very soon. And what is the future of education, where you don't have to prepare people to be things that you can just program computers to be? Well, the role of education has to be based on what the role of people in that world is. That is such a deep redesign of civilization, because the tech is changing the possibility set that deeply. So at the heart of this are kind of deep existential questions
Starting point is 01:29:22 of what is a meaningful human life, and then what is a good civilization that increases the possibility space of that for everybody, and how do we design that thing? We come back to blockchain and we say, well, blockchain is an uncorruptible ledger. Well, one thing that the left and right and everybody agrees on is that corruption happens, and it's bad for the society as a whole, and we don't like it. We just disagree on who does it. Is it possible that that tech could make possible decreasing corruption as a whole? It actually decreases the possibility set for corruption. Yeah. In order to do corruption, I have to be able to hide
Starting point is 01:29:58 that I did it, right? I either have to break enforcement or break accounting, and mostly it's break accounting. And so what if all government spending was on a blockchain? And it doesn't have to be a blockchain; it has to be an uncorruptible ledger of some kind. Holochain is a good example that is pioneering another way of doing it. But an uncorruptible ledger of some kind, where you actually see where all taxpayer money goes and you see how it was utilized. The entire thing can have independent auditing agencies, and the public can transparently be engaged in the auditing of it. And if the government is going to privately contract a corporation, the corporation
Starting point is 01:30:36 agrees that if they want that government money, the blockchain accounting has to extend into the corporation. So there can't be, you know, very, very bloated corruption. Everybody got to see that when Elon made SpaceX: all of a sudden, he was making rockets for like a hundredth to a thousandth of the price that Lockheed or Boeing were, who had just had these almost monopolistic government contracts for a long time. Well, if the taxpayer money is going to the government, and is going to an external private contractor who's making the things for a hundred to a thousand times more than it costs, we get this false dichotomy sold to us: that either we have to pay more taxes to have better national security, or, if we want to cut taxes, we're going to have less national security.
Starting point is 01:31:20 What about just having less gruesome bloat, because you have better accounting, and we make the rockets for a hundredth of the price, and we have better national security and better social services and less taxes? Well, everyone would vote for that, right? Who wouldn't vote for that thing? Well, that wasn't possible before uncorruptible ledgers. Now, that uncorruptible ledger also means you can have provenance on supply chains, to make the supply chains closed-loop, so that you can see that all the new stuff is being made from old stuff, and you can see where all the pollution is going, and you can see who did it, which means you can now internalize the externalities rigorously. And nobody can destroy those emails or burn those files, right?
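For readers wondering what "uncorruptible" means mechanically, here is a minimal sketch of a tamper-evident, append-only ledger: each record commits to the hash of the one before it, so retroactively editing or deleting any entry breaks the chain in a way anyone can detect. This is a toy, not any particular blockchain's API; the `SpendingLedger` class and its fields are hypothetical, for illustration only, and a real system would add signatures, distribution, and consensus.

```python
import hashlib
import json

class SpendingLedger:
    """Toy append-only ledger: each record commits to the previous record's hash."""

    def __init__(self):
        self.records = []

    def append(self, payer, payee, amount, purpose):
        prev_hash = self.records[-1]["hash"] if self.records else "GENESIS"
        body = {"payer": payer, "payee": payee, "amount": amount,
                "purpose": purpose, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = "GENESIS"
        for rec in self.records:
            body = {k: rec[k] for k in ("payer", "payee", "amount", "purpose", "prev_hash")}
            if rec["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

ledger = SpendingLedger()
ledger.append("treasury", "contractor_a", 2_000_000, "rocket engines")
ledger.append("contractor_a", "subcontractor_b", 500_000, "valves")
assert ledger.verify()
ledger.records[0]["amount"] = 1_000_000   # attempt to "burn the files"
assert not ledger.verify()                # the tampering is detectable
```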
Starting point is 01:31:56 What if the changes in law and the decision-making processes also followed a blockchain process, where there was provenance on the input of information? Well, that would also be a very meaningful thing to be able to follow. So this is an example of: can we actually structurally remove the capacity for corruption, with technology that makes corruption much, much, much harder, that forces types of transparency and auditability? What if you're also able to record history, to record the events that are occurring, in a blockchain that's uncorruptible, where you can't change history later? So you actually get the possibility of real justice and real history, and multiple different simultaneous timelines that are happening. That's humongous in terms
Starting point is 01:32:46 of what it does. What if you can have an open data platform and an open science platform where someone doesn't get to cherry-pick which data they include in their peer-reviewed paper later? We get to see all of the data that was gathered, we solve the oracle issues that are associated, and then, if we find out that a particular piece of science was wrong later, we can see downstream everything that used that output as an input and automatically flag what things need to change. That's so powerful. Like, the least interesting example of blockchain is currency creation. The capacity for the right types of accounting means the right type of choice-making, right?
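A minimal sketch of that downstream-flagging idea, assuming the platform records which outputs were used as inputs to which later work. The graph, the names in it, and the `flag_downstream` helper are all hypothetical; the mechanism is just a breadth-first walk over recorded provenance.

```python
from collections import defaultdict, deque

# Toy provenance records: each result lists the inputs it was derived from.
# In a real platform, these edges would come from recorded citations of data/outputs.
derived_from = {
    "paper_B": ["dataset_A"],
    "paper_C": ["paper_B"],
    "meta_analysis_D": ["paper_B", "paper_C"],
    "paper_E": ["dataset_F"],
}

# Invert to: input -> everything that directly used it.
used_by = defaultdict(list)
for result, inputs in derived_from.items():
    for inp in inputs:
        used_by[inp].append(result)

def flag_downstream(retracted):
    """Breadth-first walk: everything that transitively used the retracted item."""
    flagged, queue = set(), deque([retracted])
    while queue:
        node = queue.popleft()
        for dependent in used_by[node]:
            if dependent not in flagged:
                flagged.add(dependent)
                queue.append(dependent)
    return flagged

# Retracting dataset_A flags paper_B, paper_C, and meta_analysis_D, but not paper_E.
print(flag_downstream("dataset_A"))
```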
Starting point is 01:33:30 Let's take AI. With AI, we can make super terrible deepfakes and destroy the epistemic commons, you know, using that and other things like that. But we can see the way that the AI makes the deepfake: by being able to take enough different images of the person's face and movements that it can generate new ones. We can see where it can generate totally new faces by averaging faces together. Somebody sent me some new work that they were just doing on this the other day that I found very interesting. They said, we're going to take a very similar type of tech and apply it to semantic fields, where we can take everybody's sentiment on a topic and actually generate a proposition that is at the semantic center, or take everybody's sentiment and abstract from it the values that they care about, create values taxonomies, and say we should come up with a proposition that meets all these values. Then can you have digital processes where, you can't fit everybody into a town hall, but everybody who wants to can participate in a digital space? Rather than vote yes or no on a proposition that was made by a special interest group, where we didn't have a say in the proposition or even the values it was seeking to serve, so it was made in a very narrow way that, like we mentioned earlier, benefits one thing and harms something else, which is why almost every proposition gets about half of the vote and inherently polarizes the population. We say, well, people are so dumb and so rivalrous; no, the process of voting with bad propositions and a bad representation process is inherently polarizing and downgrading to people. So what if there's a process by which there's a decision that wants to be made?
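A minimal sketch of that "semantic center" idea, assuming you already have a sentence-embedding model. The `embed` function here is a random stub so the code runs standalone, which means its particular output is arbitrary; with a real encoder, the function returns the proposition closest to the centroid of everyone's statements. A production system might generate a new proposition rather than select one; nearest-to-centroid is the simplest stand-in.

```python
import numpy as np

# Stub embedding: random but stable per text, so the example runs without a
# real model. Swap in an actual sentence encoder for meaningful results.
rng = np.random.default_rng(0)
_cache = {}
def embed(text):
    if text not in _cache:
        _cache[text] = rng.normal(size=64)
    return _cache[text]

def semantic_center(propositions):
    """Return the proposition whose embedding is closest to the group centroid."""
    vecs = np.stack([embed(p) for p in propositions])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # cosine geometry
    centroid = vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = vecs @ centroid          # cosine similarity to the centroid
    return propositions[int(np.argmax(scores))]

views = [
    "Fund pothole repair first; taxes are fine as is",
    "Cut taxes and privatize road repair",
    "Repair roads and simplify tax filing",
]
print(semantic_center(views))
```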
Starting point is 01:35:04 You start by identifying what the values are that everybody cares about. And then we say the first proposition that meets all these values well becomes the thing that we vote on. And then, instead of just a direct vote, do we engage types of qualified and liquid democracy together, where you have to show that you understand the basics of that topic to be able to vote on it, but the education is free and you can keep retesting? And the basics don't show leaning one way or the other; it just shows you understand the stated pros and cons, so that mass populism doesn't happen. But if you don't want to come to understand it, you concede your vote to someone else who has passed that thing. These are that type of liquid democracy, that type of qualified, educated democracy, where
Starting point is 01:35:45 it doesn't have to be educated across everything; it can be per issue. And where you're not just voting on a thing, you're helping craft the propositions. These completely change the possibility space of social technology. And we could go on and on in terms of examples, but these are ways that the same type of newly emergent physical tech that can destroy the epistemic commons and create autocracies and create catastrophic risks could also be used to realize a much more protopian world.
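A minimal sketch of the qualified-plus-liquid mechanism just described, under simplifying assumptions: qualification is per issue, a voter who is unqualified or abstaining passes their ballot down a delegation chain to someone who has passed the test, and chains that cycle or dead-end simply don't count. All names and data structures here are hypothetical.

```python
def tally(voters, issue):
    """
    voters: dict name -> {"qualified": set of issues passed,
                          "vote": {issue: "yes"/"no"} or None,
                          "delegate": name or None}
    Each voter's ballot counts directly if they are qualified on the issue and
    voted; otherwise it follows the delegation chain until it reaches someone
    who is. A `seen` set guards against delegation cycles.
    """
    counts = {"yes": 0, "no": 0}
    for name in voters:
        seen, current = set(), name
        while current is not None and current not in seen:
            seen.add(current)
            v = voters[current]
            if issue in v["qualified"] and v["vote"] and issue in v["vote"]:
                counts[v["vote"][issue]] += 1
                break
            current = v["delegate"]   # unqualified or abstaining: pass it on
        # chains that dead-end or cycle simply don't count
    return counts

voters = {
    "ana": {"qualified": {"grid"}, "vote": {"grid": "yes"}, "delegate": None},
    "ben": {"qualified": set(),    "vote": None,            "delegate": "ana"},
    "cam": {"qualified": set(),    "vote": None,            "delegate": "ben"},
    "dee": {"qualified": {"grid"}, "vote": {"grid": "no"},  "delegate": None},
}
print(tally(voters, "grid"))   # {'yes': 3, 'no': 1}
```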
Starting point is 01:36:23 So I love so many of those examples, and especially the blockchain and corruption one, because I think, as you said, something that the left and the right can both agree on is that our systems are not really functional, and there's definitely corruption and defection going on. And just to add to your example, imagine if citizens could even earn money by spotting inefficiencies or corruption in that transparent ledger, so that we actually have a system that is profiting by getting more and more efficient
Starting point is 01:36:43 over time, actually better serving the needs of the people, and having less and less corruption, so there's actually more trust and faith. And that's the kind of digital society where, when you look at, let's say, China's closed digital authoritarian society, and you look at this open one that's actually operating more for the people, with more transparency, with more efficiencies, you get more SpaceX, Elon Musk-type cheap ways of sending rockets to the moon and becoming a multi-planetary civilization, as opposed to more bloat and more mega-monopoly defense contractors that are not taking us to where we need to go. That's just an inspiring vision. And I hope people listen to what you shared and kind of go back, because there's a lot of different aspects there. I think the question on many people's minds right now is going to be: how do we get from where we are to the world that you're talking about?
Starting point is 01:37:27 What are the steps that are in between? Obviously, I don't know. Nobody knows, like, which projects will emerge first and start really making successes. There's a lot of different possible paths. I can say some of the things that could happen and some of the things that I think need to happen. So we take all the catastrophic risks that exponential tech makes possible, and the dystopic attractors, and we say, okay, so we need to solve all those problems. But we're not doing really well at solving those problems right now. So our problem-solving processes need upgrading.
Starting point is 01:38:07 And that means new institutions. And when we say institution, we usually think of a pretty centralized thing, and with things like decentralized governance emerging, the institution might be a decentralized one. But individual people aren't going to solve all of that, right? So it's new institutions, centralized and decentralized, that have the right capacities to solve these types of problems, that need to come about. All right, well, who develops those institutions and who empowers them? And this is where the democratic idea of the power of government coming from the consent of the governed is one of the key ideas in what we would think of as the values of an
Starting point is 01:38:51 open society. Let's say that there's a small number of people who think we understand these problems. We understand the solutions that must happen. Everybody else doesn't get it. So we're going to make this thing happen. And because we have the power, we can just kind of implement it by force. And so that becomes its own dystopia, right? And implemented by force might be, well, the people think they need to be free. So we'll implement it by attention hijacking them so that they participate with it or don't even realize that it's happening and they just keep doing whatever's next. The cultural element, why we talk about the need for a new cultural enlightenment is, of course, when we look at like the founding of the U.S., we can see all that
Starting point is 01:39:30 was super wrong with it, right? I mean, just to mention, like, when Churchill said democracy is the worst form of government ever, save for all the other forms; when Socrates is talked about in the Republic, when Plato was discussing why democracy was a dreadful idea, the arguments are good arguments, right? Like, do you want people who understand seafaring to man the boat, or just a general population who knows nothing about it to man the boat? Well, that's not a very good idea. Do you want the general population that knows nothing about it to build the NASA rocket? Or do you want people that know what they're doing? Well, why would we think people who have no idea what they're doing are going to be good at figuring out how a civilization should be run?
Starting point is 01:40:13 What should our nuclear first-strike policy be? How should we deal with the stability of the energy grid against Carrington events? And so, what does it take to have a population educated enough? And yet then the other problem is, if we say the people are too uneducated, and maybe too irrational and rivalrous, to be able to hold that power, so it needs to be held by some, then how do we ensure non-corruption, and who is a trustworthy authority to be able to hold that power and not have vested interests mess it up? And so this is why, I think it was a Jefferson quote,
Starting point is 01:40:46 of the ultimate depository of the power must be the people. And if we think the people too uneducated and unenlightened to be able to hold that power, we must do everything we can to seek to educate and enlighten them, not think that there is any other safe depository. And so even with that, we take the U.S. formation and you've got some founders who, had read most of the books of the time, right? Read most of the books of philosophy,
Starting point is 01:41:16 had read most of the books of the time, right? Read most of the books of philosophy, knew the history of the Magna Carta and the Charter of the Forest and all these kinds of things, thought and talked deeply, spent many years, were willing to die fighting a revolutionary war. They were not going along with winning at the current system, but really trying to do a fundamentally different thing, to develop a new system. Not everybody who was participating in the U.S. was doing that thing. They weren't all doing systems architecture, right? But they were all basically saying: we agree to this kind of systems architecture, and we want to learn how to participate with it adequately. We'll read a newspaper. We will do jury duty. We'll come to the town hall, that kind of thing. So in Taiwan's example, I think their population is 23 million people, and their online citizen-engagement platform has something like 5 million people engaging.
Starting point is 01:42:03 That's pretty awesome, right? That's not everybody, and no one should be forced to be engaging. And one of the critical things, when we think deeper about, is it a democracy, is it a republic, is it an epistocracy: we want to think about the values, not the previous frames for them. And the values exist in dialectics, and we need to be able to hold those together. Of course we want individual liberty, but we don't want individual liberty that gets to harm other people and other things. So we also want, you know, law, justice, collective integrity. How do you relate those things? One of the core things is the relationship between rights and responsibilities. If I have rights and I don't have responsibilities, there ends up being a kind of tyranny and entitlement, and we can see that that's kind of rampant, the entitlement thing. If I have responsibilities and I don't have any attendant rights, that's servitude. Neither of those involves a healthy, just society. So if I want the right to drive a car, the responsibility to do the driver's education and actually learn how to drive a car safely is important. And we can see that some countries have fewer car accidents than others, associated with better driver's education.
Starting point is 01:43:13 And so increasing the responsibility is a good thing. We can see that some countries have way less gun violence than others, even factoring in a similar per capita amount of guns, based on more training associated with guns, and mental health, and things like that. So if I have a right to bear arms, do I also have a responsibility to be part of a well-organized militia, train with them, and be willing to actually sacrifice myself to protect the whole, or sign up for a thing to do that? Do I have to be a reservist of some kind? That's the rights-and-responsibilities pairing. If I want the right to vote, is there a responsibility to be educated about the issue?
Starting point is 01:43:47 Yes. Yes. Now, does that make it very unequal? No, because the capacity to get educated has to be something that the society invests in making possible for everyone. And of course, we would all be silly not to be dubious, factoring in the previous history of these things, but this is what we then have to insist upon. Because do we want people who really don't understand the issues, but think they do, voting? No, that's a dreadful system. But do we want people who know something to have no avenue to input that into the system, or people who care to have no opportunity to learn? No, that's also dreadful. So how do we make the on-ramps to learning available for everyone, not enforced, but actually incentivized? Can we use those same kinds of social media behavior tools to increase everyone's desire for more rights and attendant responsibilities, so that there's actually a gradient of civic virtue and civic engagement? Yeah, we could totally do that.
Starting point is 01:44:47 So this is where the cultural enlightenment layer is. Of course, not everyone is going to be working on how we develop AI and blockchain for these purposes, but they can certainly be saying: I am going to make sure that my representatives are talking about these issues. I want all the presidential candidates to be talking about these issues. I'm going to pay attention to and support candidates who really do, in earnest ways. I'm going to invest in companies that are doing those things. I'm going to divest from companies that are doing the other things. There is a cultural enlightenment that is needed, to be able to create the demand and the support, so that those projects that are earnestly working on this and have the capability start
Starting point is 01:45:26 to emerge. So you've painted a compelling vision of some of the ways that an open society could consciously employ some of these technologies to revisit and re-fulfill some of the original values for which they were intended. How does this work with the existing institutions that we have? How much is this going to rely on transforming the existing digital Leviathans into something new? How much is going to depend on blockchain projects? How much is this going to depend on existing institutions, be they the Brookings Institution or the New York Times? Can you speak to the role of new and future institutions in making this transition possible?
Starting point is 01:46:13 Yeah, it's interesting. When we look at institutions that emerged to try to solve some social or environmental problems, nonprofits in particular and some government branches associated with that, there's this kind of structural perverse incentive. If I am an organization, which means I'm people in an organization that have job security and some actual power and access and whatever because of this position, and my job is to solve a problem, then if I fully solve the problem, I would obsolete my job and obsolete myself. So then there's this kind of perverse incentive to continue managing the problem, to continue manufacturing the narrative that we're needed to manage the problem, to continue manufacturing the narrative that the problem is really hard and is hard to solve. And so we've got to keep, you know, doing this thing. One of the fundamental dispositions of systems is that they want to keep existing, and yet they might no longer be fit for purpose. They might even be antithetical to the purpose. We have to be very careful about this.
Starting point is 01:47:18 With regard to the new institutions we need, to what degree could existing institutions reform themselves, and to what degree does it need to be new ones? It's kind of up to them. It's kind of up to the depth of realization of the need, the sincerity, and then the coordination capacity of people in current institutions, how much of a role they could play. We can see the way that, going into World War II coming out of the Depression, the U.S. upregulated its coordination capacity so profoundly. So could we have a Manhattan Project-level organization?
Starting point is 01:47:52 By organization, I mean the capacity to organize, not a singular thing, one that was oriented to: how do we instantiate the next model of civilization? How do we instantiate the next model of social systems and social technologies? What is the future of education? What's the future of economics? What's the future of the fourth estate, of law, etc., that fulfill the values that are meaningful, are anti-fragile in the presence of the current technologies, and can actually compete with the other applications of those technologies
Starting point is 01:48:28 towards things that serve different values and/or aren't anti-fragile. I would love to see the U.S. make that a central imperative, at a Manhattan Project level. Not just how do we create a more powerful military, but how do we create a fundamentally healthier society, one that upregulates and engages collective intelligence and its own problem-solving and innovation better. I would like to see lots of countries do that. There are countries that have not yet transitioned to democracy, are interested in it, and could completely
Starting point is 01:49:05 bypass the industrial-era democracies and go directly to better systems. I think networks of small countries, and you see what Taiwan is doing, Estonia's trying to do some interesting things, could start sharing best practices and sharing resources so they don't all have to develop the stuff from scratch, which could start to lead to coalitions of countries, like the EU, saying, let's do some fundamentally better things. I think it will also happen not at the level of nation-states, where decentralized groups, blockchain-type groups, say, all right, let's really earnestly take on what these primary problems are and work on developing these solutions and these capacities.
Starting point is 01:49:44 For the tech companies to do so would be very hard, because while it could still be profitable long-term, it would not be profit-maximizing short-term relative to the current thing they're doing. As we said, winning at the current game and building a new game are different things, and winning at a current game that's self-terminating is a very short-sighted thing to want to keep doing. So if Facebook or Google or whatever were to cut its ad model, it would have a hard time being able to meet its fiduciary responsibility to shareholders a different way. But could it, in conjunction with a participatory government regulatory process that wanted to help change its fiduciary responsibility
Starting point is 01:50:32 so that it became more of a social utility, start to actually redirect its technology and redirect its decision-making process? Yeah, it could. That would be super interesting. And as we mentioned earlier, I'd like to see the UN recognize that the level of progress it has made on the Sustainable Development Goals, nuclear deproliferation, and other international aims, like global economic equality writ large, preventing arms races, and tragedies of the commons, is insufficient. It hasn't done nothing, but what it's doing is not converging. It's not adequate. It's not converging on eventually solving the problem set. It needs not just more of that approach; it needs a different approach. And so to say, okay, well,
Starting point is 01:51:19 clearly, we don't know how to facilitate coordination on global problems well enough. So let's have the superseding focus be innovation towards better methods of global coordination. That becomes our new number one goal, because we know we only get all the other goals if we get that. And you can see that during World War II, when we had to crack the Enigma machine and figure out computation and whatever, we got Turing, we got von Neumann, we got all of the smartest people from countries all around the world engaged in solving those problems. I would like to see the U.S., the U.N., other countries, and the private sector taking seriously the actual problem landscape we have, and innovating not just for short-term advantage or narrow in-group advantage, but for the long-term advantage of the whole. Since we have global effects, how do we build global coordination adequate to what is needed? To me, that has to become the central zeitgeist, and whatever groups figure out how to do it effectively will be the groups that can direct the
Starting point is 01:52:23 future. And I know that this is the work that you are working towards with the Consilience Project. Do you want to talk about how you are working towards that, and how we're collaborating? Yeah. I mean, we're at the very, very beginning. The Consilience Project has a site up that is not even a beta yet, just because, in just starting, we wanted to, you know, work on building stuff in association with our thinking. But this conversation you and I are having is very central to the aims of the Consilience Project, which is that we're wanting to inspire, inform, and help direct an innovation zeitgeist, where the many different problems of the world start to get seen
Starting point is 01:53:11 in terms of having interconnectivity and underlying drivers, and where the forcing function of the power of exponential tech is taken seriously, which says that becoming good stewards of it requires evolutions of both our social systems and our culture: the wisdom to be able to guide that power, a recoupling, right, of wisdom and power, adequate to what is needed. So how do we innovate in culture, the development of people, and how do we innovate in the social systems, the advancement of our coordination, both employing the exponential tech and being able to rightly guide it? And so we have a really great team of people doing research and writing about, basically, the types of things we're talking about here, in more depth: explaining what the role of the various social systems is, like what the role of education is to any society, helping understand fundamentally what that is, understanding why there is a particularly higher educational
Starting point is 01:54:11 threshold for open societies, where people need to participate not just in the market but in governance; understanding how that has been disrupted by the emerging tech and will be disrupted further by things like technological automation; and then envisioning: what is the future of education adequate to an open society in a world with the technology that's emerging? We don't necessarily know what the answer is, but we know examples and we know criteria. So then it's: innovate in this area, and make sure you factor in these criteria. And the same thing with the fourth estate, the same thing with law, the same thing with economics.
Starting point is 01:54:43 And so the goal is not: how do we take some small group of people and have them build the future? It's: how do we help get clear on what the criteria of a viable future must be? And if people disagree, awesome; publicly disagree and have the conversation now. If we get to put out those design constraints and someone says, no, we think it's other ones, at least now the center of culture starts to be thinking about the most pressing issues in fundamental ways, about how to think about them appropriately and how to approach them appropriately. So fundamentally, our goal is supporting an increased, clearer cultural understanding of the nature of the problems that we face, rather than just: there are lots of problems, and it's overwhelming, and it's a bummer, and so either some very narrow action on some very narrow part of it makes sense, which is most of activism, or just nihilism.
Starting point is 01:55:29 We want to be able to say: actually, because there are underlying drivers, there is a possibility to resolve these things. It does require the fullness of our capacity applied to it, so it's not a given, but with the fullness of our capacity applied to it, there is actually a path forward. And so we're writing these papers that would basically be kind of like a meta-curriculum for people who want to be engaged in designing the future. Some of them have to do with current public culture and how to change patterns of public culture in ways that lead to better conversation, better
Starting point is 01:56:07 sense making, better meaning making, and better choice making, so that there's an on-ramp into higher-quality conversations, meaning a higher-quality process of conversation. And then some of them are things like: what are the design criteria of the future social systems, and how could we build those things? Now, not everybody will read those. Some people who have the ability to help start building them will. But we hope that other people will take that and translate it on podcasts and into animations and whatever other forms of media, so that those topics start to become increasingly present in people's awareness. Then, of course, the next part is: what groups start emerging wanting to address those, and what can we do to help facilitate good solutions in those groups?
Starting point is 01:56:56 And this is where, you know, I've learned a lot from you about the social media issues in particular, and how central they are to the breakdown of sense making, because obviously without good shared sense making, there is no possibility for emergent order. You either just get chaos, or you have to have imposed order. If you want emergent order, that means emergent good choice making, and that means emergent good sense making. And so, you know, we've discussed these things for a long time, and obviously not just you and I; there's a whole network of people we're connected to who have been thinking deeply about these things, and with whom we continue to try to think about what adequate solutions could look like.
Starting point is 01:57:34 And I think what CHT did with The Social Dilemma took one really critical part of this meta-crisis into popular attention, maybe in a more powerful way than I have seen done otherwise. Because as big a deal as getting climate change into public attention is, it's not clear that climate change is what is driving the underlying basis of all the other problems. But with a breakdown in sense making, and the control of patterns of human behavior in ways that kind of downgrade people, it's like: oh, wow, that really does make all these other things worse. So I see that as a very powerful and personal on-ramp for those who are interested to come into this deeper conversation. And for some of them,
Starting point is 01:58:22 it'll simply help them be like: okay, now I know what I was intuitively feeling. Somebody's put it into words, and I at least feel more oriented. And that's the extent of it, because they don't necessarily have the ability to build new blockchain systems or whatever it is, and they should be doing the nursing or education or whatever other really important social work they're doing. Some people will be able to say: this actually really resonates; I can translate this to other audiences and get more people engaged. And some people will say: I can actually start innovating and working with this stuff. And all of those are good. Yeah, I agree. And I think what we've essentially been outlining here, and you sort of hit it at the end,
Starting point is 01:59:03 is going back to the Charles Kettering quote, which I learned from you, and I've learned so many things from you over the years, which is that a problem not fully understood is unsolvable, and a problem that is fully understood is half solved. And I just want to maybe leave our listeners with that. I think people can look at the long litany of problems and feel overwhelmed, or get to despair in a hurry, which I think is your phrase for it. And I think that when you understand the core generator functions driving so many of these problems to happen simultaneously, there's a different and more empowering relationship to that. And you've actually offered a vision for how these new technologies can be
Starting point is 01:59:48 consciously employed in ways that should feel inspiring and exciting. I mean, I want that transparent blockchain budget for every country in the world. And we can see examples like Estonia and Taiwan moving in this direction already. We can see Taiwan building some of the technologies you mentioned to identify propositions of shared values among citizens who want to vote collectively on something that previously would have driven up more polarization. So we're seeing this thing emerging. And I think we need to see this as not just an upgrade, but the kind of cultural enlightenment that you speak of, one that so many different actors are, in a sense, already working on.
Starting point is 02:00:28 We used to have this phrase that everyone is on the same team, they just don't know it yet. Once you understand the degree to which we are in trouble if we do not get our heads around this, and identify the kind of core generator functions we need to be addressing, once we all see that... I'll just speak to my own experience. When I first encountered your work, and I encountered the kind of core drivers behind so much of the danger we are headed towards, I was kind of already moving in this direction, but I immediately reoriented my whole life to say: how do we be in service of this not happening, and of creating a better world that actually meets and addresses these problems? And I know so many other people whose work and whose lives and whose daily missions and purpose have been redirected by hearing some of the core frames that you offer, many of whom are already working on active projects to deal with this, and those who are not are supporting in other ways. And I just hope that our audience takes this as an inspiration for how we can, in the face of stark and difficult realities, as part of this process, gain the kind of cultural strength to face these things head on and to orient our lives accordingly. Because while I have,
Starting point is 02:01:47 you know, during periods of time, probably hit low-grade despair myself, I actually feel more inspired than ever by the amount of things happening and the number of people who are waking up to these challenges. And I'll just say that I think when you face these challenges alone, and you feel like you're the only one seeing them, or you have a weird feeling in your stomach, it can feel debilitating. And when you realize the number of people who are also putting their heads up to say, how can we change this? That's what feels hopeful, and that's where I derive my optimism. So, Daniel, thank you so much for coming on. It's an honor to have you. Your work has touched the lives and work of so many people
Starting point is 02:02:26 who may not always say so publicly. But I know that you also had a huge hand in inspiring some of the themes that emerged in The Social Dilemma, which has impacted so many people as well. So thank you so much. Really wonderful that we could have this conversation. Thanks, Tristan. Absolutely.
