Dwarkesh Podcast - Will MacAskill - Longtermism, Altruism, History, & Technology

Episode Date: August 9, 2022

Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here.

Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps

(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what’s most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06

Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20

Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23

My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32

Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people. In particular, providing defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn’t get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn’t possible otherwise. There were some particularly lucky events, like Elie meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49

If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with the median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09

Absolutely. Taking history seriously and appreciating the contingency of values means appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought.
I think it should make us take seriously the fact that we're very far away from the moral truth. One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56

So that makes a lot of sense. But I'm asking a slightly separate question—not only are there possible values that could be better than ours, but should we expect our values to be good? We have the sense that we've made moral progress (that things are better than they were before, or better than most possible other worlds in 2100 or 2200). Should we not expect that to be the case? Should our priors be that these are ‘meh’ values?

Will MacAskill 3:19

Our priors should be that our values are as good as one would expect on average. Then you can make an assessment: are the values of the world today going particularly well? There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14

If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing it. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30

Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is: what's right, morally speaking? What do the best arguments support? I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05

In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32

My view goes in the opposite direction of the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented.
It means that we should be trying to figure things out much more from first principles.

Dwarkesh Patel 6:34

But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first-principles thinking, do you think more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47

Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects, if you want to do good in the world, are philosophy and economics. But we've got those in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or at least taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47

In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombing, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth, where even if bad things happen, you can rebound and it really doesn't matter?

Will MacAskill 8:17

In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine. Semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10

It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency. Perhaps the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22

In that particular case, there's a big debate about whether the quality of metalwork at the time made it possible to build a proper steam engine. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57

The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyway?”

Will MacAskill 10:11

Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point?
Would similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If some culture other than 18th-century England had been into making textiles in an automated way, then that economy would have taken over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06

When people think of somebody like Norman Borlaug and the Green Revolution, it's like, “If you could have done something like that, you'd be the greatest person of the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyway?

Will MacAskill 11:22

Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was a huge force for good in the world. But had Norman Borlaug not existed, I don’t think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural developments.

Who changes history?

Dwarkesh Patel 12:02

What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12

Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards anyway. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers can make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent long-run differences. Moral activists as well.

Dwarkesh Patel 13:04

If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20

As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better able a ruling ideology is to cement itself and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.

Let’s say a world government is based around those ideas. Then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20

The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37

If you've got a dictatorship, then you've got a single person ruling the society.
That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48

Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07

Yes, the number of people times the fraction of the population devoted to R&D.

Dwarkesh Patel 15:11

Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion dollar company—do these examples suggest that the organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36

The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today, and most people aren't even trying to innovate. One argument for why Baghdad lost its scientific golden age in the 10th/11th century AD is that the political landscape changed such that what was incentivized was theological rather than scientific investigation.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58

If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. Yet you have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14

I wouldn't say that at all. The model we’re working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs was punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.
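A minimal way to write down the model being discussed here, using standard semi-endogenous growth notation rather than anything from the episode: with population \( N \), a fraction \( s \) of it doing R&D, and a stock of ideas \( A \),

\[
\dot{A} \;=\; \delta \,(s N)\, A^{\phi}, \qquad \phi < 1,
\]

so the growth rate of ideas is \( \dot{A}/A = \delta s N A^{\phi - 1} \), which falls as \( A \) grows unless \( N \) (or \( s \)) keeps rising. That is one way to see why a stagnant population is worrying on this model, and why a Bell Labs-style productivity multiplier, however impressive, is a one-off factor rather than something that compounds the way population size does.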
Longtermist institutional reform

Dwarkesh Patel 18:00

I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated to longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23

The caveat I'll give with longtermist institutions is that I’m pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population. How would you ensure that incentives are aligned?

In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies’ effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, for the people in 30 years' time, both their policies and their assessment of the previous assembly get assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I’m skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously just by telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30

If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48

There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years, aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in, because you wouldn't want something only incentivizing economic growth at the expense of tail risks.
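As a sketch of how such a war index might be aggregated (the mechanism and the panel's numbers are illustrative assumptions, not anything proposed in the episode), one standard approach is to pool the forecasters' probabilities by averaging log-odds:

```python
import math

def pool_forecasts(probabilities):
    """Pool individual probabilities by averaging their log-odds.

    Averaging log-odds (equivalent to a geometric mean of odds) is a
    common way to aggregate calibrated forecasts into one number.
    """
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Hypothetical panel: each forecaster's P(great-power war in the next 10 years)
panel = [0.04, 0.07, 0.02, 0.10, 0.05]
print(f"war index: {pool_forecasts(panel):.3f}")  # ~0.049
```

This is only the aggregation step, but it shows the point made above: an index like this prices in tail risks that growth metrics alone ignore.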
Dwarkesh Patel 21:42

Would that be your objection to a scheme like Robin Hanson’s about maximizing expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50

Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson’s idea of voting on values but betting on beliefs, people vote on what collection of goods they want (GDP and unemployment might be good metrics), and beyond that, it's pure prediction markets. It's something I'd love to see tried. It’s speculative political philosophy, an idea about how a society could be extraordinarily different in structure, and it's incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed, or are simply not liquid enough. There hasn’t been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things: you could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising, and I'd love to see it tried out at a city level or something.

Dwarkesh Patel 23:13

Let’s take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There are environmental review boards that will try to assess the environmental impact of new projects and reject proposals based on certain metrics.

The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, yet policymakers would be tasked with evaluating long-term impacts up front. Are you worried that it'd be a system that'd be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09

It's potentially a devastating worry. You create something to represent future people, but because they're not around to lobby for themselves, it can just be co-opted. My understanding of environmental impact statements has been similar. It's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals, like having a representative body that assesses these things and gets judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term, rather than narrowly focused.

Regulation requiring liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35

If we don't know how we would do it correctly, do you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?

Will MacAskill 25:46

Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter, or if you could’ve had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56

Theoretically, the incentive of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can’t be around if there’s an existential risk…

Will MacAskill 26:18

I don't think so. Different institutions have different rates of decay associated with them. A corporation in the top 200 biggest companies has a half-life of only ten years. It’s surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make, versus a university, versus a religious institution.

Dwarkesh Patel 27:16

Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24

Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO, the board, and the shareholders) for that company to last a long time?
No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that’s the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms can be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like the legal right to slavery as a constitutional amendment. If that had been locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57

You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04

It's a specific type of ‘moment of plasticity’ for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments; people coming on your podcast can debate what's morally correct. It's plausible to me that any one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end — the moral change that we're used to could end in the coming decades. If there were a single global culture or world government that preferred ideological conformity, combined with the right technology, it becomes unclear why that would ever end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that, and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46

Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01

The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies left to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it’s very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53

Hm, right.
Going back to the long-term prospects of the longtermist movement: there are many instructive examples of foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation. But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18

I don't have strong views about those particular examples, but I have two natural thoughts. First, organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years and then 200 years in different fractions of the amount invested. But he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you’re in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. That would plausibly have had more impact.

The second is a ‘regression to the mean’ argument. You have some new foundation, and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because the people involved change. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40

Going back to that dead hand problem: if you specify your mission too narrowly, it doesn't make sense in the future—but is there a trade-off? If you're too broad, you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of. With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?

Will MacAskill 34:11

It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. It’s certainly the trend over time. In which case, if we’re sharing similar broad goals and they're implementing them in a different way, then have at it.

How good can the future be?

Dwarkesh Patel 34:52

Let's talk about how good we should expect the future to be. Have you come across Robin Hanson’s argument that we’ll end up being subsistence-level ems, because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11

Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being any poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.
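To make that decoupling explicit (a toy formalization, not notation from the book): let birth and death rates depend on income \( y \), so the subsistence level \( y^{*} \) is simply the income at which population growth is zero,

\[
n(y^{*}) \;=\; b(y^{*}) - d(y^{*}) \;=\; 0 .
\]

Nothing in that condition mentions well-being. How good an em's experience is at income \( y^{*} \) is a separate function of what that income buys, which is why "subsistence-level" and "blissful" are compatible.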
Dwarkesh Patel 36:02

Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment. But 31% of Americans said that they would not want to relive their life at every moment. So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29

I think the numbers are lower than that, from memory at least. From memory, it’s something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn’t. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, and happier with their lives, than the people in the US were. Honestly, I don't want to generalize too far from that, because we were sampling comparatively poor Americans and comparatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. So on one hand, I don't want to draw any strong conclusions from it. But it is pretty striking as a piece of information, given that, on average, you find people in richer countries reporting considerably higher well-being than people in poorer countries.

Dwarkesh Patel 37:41

I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50

Exactly. So I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think their lives contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would take you to the end of whatever activity you're engaged in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life, on the day they were surveyed, as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what’s most important

Dwarkesh Patel 39:18

Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.

Do you think this could be a general problem with longtermism? If you're trying to find the most important things long-term, you might be missing things that wouldn't be obvious when thinking this way?

Will MacAskill 39:48

Yeah, I think that's a risk.
Here's one of the ways people could argue against my general set of views. I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a Third World War, ensuring that good values are promoted, and avoiding value lock-in. But some people argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don’t know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in Effective Altruism were realizing early that the Coronavirus pandemic was going to be a big deal. They were worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for over half a century. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn’t feel too controversial. It's hard to predict loads of things, and there are going to be tons of surprises. But there are some things, especially fairly long-standing technological trends, about which we can make reasonable predictions — at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19

It sounds like you're saying that the things we know are important now. But if something a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31

Between me and Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier. If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI, or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff, not making firm predictions about what's going to happen, but things pop out of that that are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38

Doesn't what you were saying earlier about the contingency of technology imply that, even given their worldview of doing what's had the most impact in the past, if what's had the most impact in the past is changing values, then changing values might be the most important thing, rather than trying to change the rate of economic growth?

Will MacAskill 43:57

I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which things that they did made sense, and which didn't.
In the latter half of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave, where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and, therefore, future generations being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time was using much smaller amounts of coal (so small that the climate change effect is negligible at that level).

But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good. That seems to have stood the test of time. That's one historical data point, but potentially we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36

Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; but on the negative side, it might prevent things like human challenge trials, or cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54

On the question of global integration, you're absolutely right, it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most of the possible value that way.

The solution is having the good bits without the bad bits. For example, with a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, at the global level, you can have very strong regulation and treaties that only deal with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34

Yeah, but it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, over time they tend to gain more power. You can look at the U.S.; you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52

Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promoting of moral diversity, moral change, and moral progress.
But that needn't be the case.

Dwarkesh Patel 48:06

Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32

There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress, an extremely rapid rate of technological progress, within the next 10-20 years. If so, you’re right: value changes are something that pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it doesn't take that long to get very large. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put a very significant probability mass on the idea that we will not hit some end of history this century. In those worlds, promoting better values could pay off very well.
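The compounding arithmetic behind that claim is easy to check (my numbers, taking the 30% annual growth rate as stated): the doubling time is

\[
t_{2} \;=\; \frac{\ln 2}{\ln 1.3} \;\approx\; 2.6 \text{ years},
\]

and \( 1.3^{10} \approx 13.8 \), so a movement growing at that rate is roughly fourteen times larger after a decade. That is change on the order of years, not centuries.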
Dwarkesh Patel 49:59

Have you heard of Slime Mold Time Mold's potato diet?

Will MacAskill 50:03

I have indeed heard of Slime Mold Time Mold's potato diet, and I was tempted to try it as a gimmick. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25

Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41

It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the boards of the organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF’s new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long term. I’ve had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34

You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions—How contingent is history? How happy are people generally?—Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54

I just think there are many issues that are enormously important but are just not incentivized anywhere in the world. Companies don't incentivize work on them because they’re too big-picture. Some of these questions are: Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation? There’s almost no work done on any of these topics. They're too grand in scale for companies to be interested.

Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly, it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It wasn't always that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology, and probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20

Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25

I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they are currently done. It's extremely hard to break in, or to create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10

Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the most important questions? Big-picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.

Will MacAskill 54:34

Great. Well, thank you so much!

Dwarkesh Patel 54:38

Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39

Yeah, sure. What We Owe The Future is out on August 16 in the US and the first of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact. They provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33

Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39

Thanks so much, I loved it.

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript
Discussion (0)
Starting point is 00:00:06 Okay, today I have the pleasure of interviewing William McCaskill. Will is one of the founders of the Effective Altruist Movement, and most recently, the author of the upcoming book, What We O Ode the Future. Will, thanks for coming on the podcast. Thanks so much for having me on. So my first question is, what is the high-level explanation for the success of the Effective Altruist Movement? Is it itself an example of the contingencies you talk about in the book? Yeah, I think it probably is kind of contingent. Maybe not on the order of like this would never have happened, but at least on the order of decades. Evidence for the reason why effective altruism is somewhat contingent is just that similar ideas have been promoted at many times during history and not taken on. So we can go all the way back to ancient China. The Moists defended a kind of impartial view of morality and took very strategic actions to try and help all people, in particular providing defensive assistance to cities under siege.
Starting point is 00:01:07 Then, of course, there were early utilitarians. Effective altruism was broader than utilitarianism, but has some similarities. And then even Peter Singer in the 70s, he had been promoting the idea that we should be giving most of our income to help the very poor and hadn't had a lot of traction until, even like early 2010, after Givewell launched, after giving what we can launch. What explains the rise of it? I mean, I think it was a good idea waiting to happen at some point. I think the internet helped to gather together a lot of like-minded people that weren't possible otherwise. And I think there were some particularly lucky events like Ellie meeting Holden, me meeting Toby, that helped catalyze it at the particular time it did. Now, if it's true, as you see in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values probably aren't that good? They're probably mediocre or worse, because ex ante, you would expect. to end up with, you know, with the median of all the values we could have had at this point. And obviously, we'd be biased in favor of whatever values we were brought up in.
Starting point is 00:02:09 Absolutely. I think taking history seriously and appreciating the contingency of values, appreciating that if the Nazis had won the World War, we would all be thinking, wow, I'm so glad that model progress happened the way it did, and we don't have Jewish people around anymore. What huge model progress we had then. Like, that's a terrifying thought. And I think it should make us take seriously the fact that we're very far away from the moral truth right now. So I think one of the lessons I draw in the book
Starting point is 00:02:36 is, you know, we should not think we're at the end of moral progress and we should not think, oh, we should lock in the kind of Western values we have now. Instead, we should think we want to ensure that we spend like a lot of time trying to figure out what's actually morally right so that the future is guided by the right values rather than merely whichever happened to win out. So that makes a lot of sense, but I guess I'm asking
Starting point is 00:02:58 a slightly separate question. which is not only are there possible values that could be better than ours, but should we expect our values, I mean, we have this sense that we've made more of progress, so things are better than they were before, or better than most possible other worlds in 2100 or 2022, I mean. Should we not even expect that to be the case? Like, should our prior just be that, yeah, these are kind of me values? I think our prior should be that our values are, you know, as good as what one would have expected,
Starting point is 00:03:26 kind of on average. And then you can make an assessment, like, is the world, how the values of the world today, is that going particularly well? You know, and there are some arguments you could make for saying no, perhaps if the industrial revolution had happened in India rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming, which I think is like a moral atrocity. Having said that, my all-things considered view, is actually to think that, like, we're doing better than average.
Starting point is 00:03:52 Like, if civilization were just a reedro, then things would look worse in terms of our moral beliefs and attitudes where I think the abolition of slavery, the feminist movement, liberalism itself, democracy, these are all things that we relatively easily could have lost and are huge gains. But then if that's true, does that make the prospect of a long reflection kind of dangerous? Because if moral progress is sort of a random walk and we've ended up with a lucky lottery, then you're kind of just maybe reversing, maybe you're risking regression to the mean if you just have a thousand years of progress? I think that moral progress isn't a random walk in general.
Starting point is 00:04:34 Like there are many forces that act on culture and on what people believe. And one of them is just what's right, morally speaking? Like what's their best argument support? That is a force. I think it's like a somewhat weak force, unfortunately. And the idea of long reflection is getting society into a state that before we take any drastic actions that might lock in a particular set of values. We allow this force of reason and empathy and debate and good-hearted kind of model inquiry to guide which values we end up with. Okay, so in the book you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of
Starting point is 00:05:18 adults too early and too often. And so do you think it makes sense to extend the analogy this way and suggests that maybe we should be, you know, Birkian small, long-termist, and reject these inside view esoteric threats? Like, I think the Berkian arguments for, you know, taking history seriously and, like, what kind of model views have, like, stood the test of time. You know, I think that's important arguments to engage with.
Starting point is 00:05:43 My view kind of goes the opposite, actually, which is that, you know, we are cultural creatures, and we, in our nature, are very inclined to agree with what other people think, agree with tradition, even if we don't understand the underlying mechanisms. I think that works well in a low-change environment. So the environment we evolved towards, like things didn't change very much.
Starting point is 00:06:07 We were hunter-gatherers and small bands for hundreds of thousands of years, millions of years if you include other homo species. Whereas now, we're at this period of, like, enormous change where the economy is doubling every 20 years, new technologies alive every single year. It's like unprecedented. And I think actually means we should much more than would make sense historically
Starting point is 00:06:29 just be trying to figure things out more from first principles. Interesting. But at current margins, do you think that's still the case? Like if a lot of EA and long-termist thought is first principles thought, do you think more history would be better than the marginal first principles thinker? I think two things. So if it's about an understanding of history, then, yeah, I actually would love EA to have a better historical understanding.
Starting point is 00:06:53 mainly just like as a on the margin thing. You know, I think the most important subject, if you want to do good in the world and the EA way, our philosophy and economics. But we've got that like in abundance, whereas there's very little in the EA community in terms of historical knowledge. And I certainly felt like I learned a huge amount
Starting point is 00:07:10 over the last few years, understanding that better. Should there be even more first principles thinking? Yeah, probably. I mean, I think the kind of first principles thinking, really, or what you might call first principles thinking, I think paid off pretty well in the course of the coronavirus pandemic, where from January, even 2020, my Facebook wall was completely saturated with people freaking out,
Starting point is 00:07:34 or at least taking it very, very seriously, in a way that the existing institutions weren't. And they weren't because they were just in this mode of, oh, business as usual, don't panic. They weren't properly updating to a new environment and new evidence. Now, in the book you point out several examples of societies that went through hardship. I mean, hardship is putting it mildly, but, you know, Hiroshima after the bomb, Vietnam after the bombings, and then, yeah, Europe after the Black Death. And they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not all that large? And it implies a sort of Solow model of growth where, yeah, even if bad things happen, you can kind of just rebound.
Starting point is 00:08:15 And it really didn't matter? Yeah, in economic terms, I mean, I think that's the big difference between economic or technological progress and moral progress, where in the long run, at least, I think economic or technological progress is very non-contingent. I mean, it's actually fascinating, some of the historical contingencies you can see in technology. The Egyptians had an early version of the steam engine. Semaphore was only developed very late, yet could have been invented thousands of years in the past, and similarly with Kay's flying shuttle. But in the long run, the instrumental benefits of tech progress and the
Starting point is 00:08:52 incentives towards tech progress and economic growth are just so strong that I think we get there in the end in a very wide array of circumstances. In particular, just imagine there's a thousand different societies and none are growing, but one is; then in the long run, that one becomes the whole economy. Yeah, it seems that particular example you gave, of the Egyptians having some ancient form of a steam engine, maybe that points towards there being more contingency, because maybe the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one. In that particular case, there's a big debate
Starting point is 00:09:24 about whether the quality of metalwork was actually good enough to build a proper steam engine at that time. I was mentioning those examples to say: historically, you actually do get some amazing examples of contingency prior to the Industrial Revolution. I think it's still contingency only on the order of centuries
Starting point is 00:09:42 to thousands of years. And then in the post-Industrial Revolution world, I think there's much less contingency. It's much harder to see that in our future. There's some, but I think it's much harder to see technologies that wouldn't have happened, you know, within decades if they hadn't been developed when they were. Okay, so I guess maybe your general model here is that there are these general-purpose changes in the state of
Starting point is 00:10:04 technology, and those are very contingent, and it would be very important to try to engineer one of those, but other than that, it's going to get done by some guys creating a startup anyway? No, I mean, I think more generally, so even in the case of the steam engine or semaphore that I was pointing to, which historically seem maybe kind of contingent, I think in the long run they get developed, where, you know, if the Industrial Revolution hadn't happened in Britain in the 18th century,
Starting point is 00:10:28 would it have happened at some point, or would similar technologies have been developed that were vital to the Industrial Revolution? And I'm like, yes, because there are very strong incentives for doing so. If you've just got a whole bunch of cultures and they're all in a random walk and one hits upon, hey, we're going to do industry,
Starting point is 00:10:44 where a culture that's into making textiles and doing that in an automated way, as was true of England in the 18th century, then that economy just takes over the world. And so that's why there's this structural reason, I think, why economic growth is much, much less contingent than moral progress.
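To make that structural reason concrete, here is a minimal simulation of the thousand-societies point; the 2% growth rate and the time horizon are invented for illustration, not figures from the conversation:

```python
# Toy version of the selection argument: if only one culture out of many
# sustains growth, compounding means it eventually dominates world output.
n_cultures = 1000
outputs = [1.0] * n_cultures       # every culture starts with equal output
growth_rate = 0.02                 # one culture stumbles onto sustained growth

for year in range(1000):
    outputs[0] *= 1 + growth_rate  # the other 999 stay flat

share = outputs[0] / sum(outputs)
print(f"Growing culture's share of world output after 1000 years: {share:.4%}")
# ~99.99% -- in the long run, the growing culture effectively is the economy.
```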
Starting point is 00:11:18 Okay, so usually, when people think of somebody like Norman Borlaug and the Green Revolution, it's like: oh, if you could have done something like that, you'd be the greatest person in the 20th century. Obviously, he's still a very good man and everything, but would that not be your view? Like, you think the Green Revolution would have happened anyway? Yeah. So Norman Borlaug is sometimes credited with saving a billion lives. I think he was an enormously important and good force for the world. But I think it's not the case that, had Norman Borlaug not existed, a billion people would have died. Rather, similar developments would have happened shortly afterwards. Perhaps he saved tens of millions of lives, and that's a lot of lives for one person to save, but it's not as many as you get by simply saying: this tech was developed, this tech was used, a billion people who would otherwise have been at risk of starvation used this technology. And in fact, even at the time, there were, you know, not long afterwards,
Starting point is 00:12:00 similar kinds of agricultural developments. Yeah, okay, so then counterfactually, what group of people, what kind of profession or career choice, tends to lead to the highest counterfactual impact? Is it moral philosophers, or? Not quite moral philosophers, although perhaps sometimes. I think, you know, there are some examples. So just sticking on science and technology: if you look at Einstein, the theory of special relativity would have been developed, you know,
Starting point is 00:12:27 very shortly afterwards. However, the theory of general relativity, I think, was plausibly decades in advance. So you do sometimes get these surprising leaps. But I think we're still only talking about decades rather than millennia. And so who really does make a very long-term difference? Yeah, I think moral philosophers could be one. Like, I think Marx and Engels made an enormous very long-run difference. So did religious leaders:
Starting point is 00:12:50 I think that Muhammad, Jesus, Confucius made an enormous and contingent long-run difference. And moral activists as well. So, um, abolitionist campaigners, the Quakers, and, you know, yeah, other groups too. So if you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential over the long future? Because, I mean, communism lasted less than a century, right?
Starting point is 00:13:18 Maybe its longer-run consequences are huge, but... It's all in expectation. So, as things in fact turned out, probably Marx will not be very influential over the long-term future. But that could have gone another way. It's not such a wildly different history where, rather than liberalism emerging dominant
Starting point is 00:13:34 in the 20th century, it was communism. And if it had, it's totally on the table for me that that persists for an extremely long time, where the better technology gets, the better able a ruling regime is to cement its ideology and persist for a very long time. And so you can get a set of knock-on effects where, okay, communism wins the war of ideas in the 20th century, let's say in the limit forms a world government, a world state based around those ideas, and then via anti-aging technology or
Starting point is 00:14:10 genetic enhancement technology or cloning or artificial intelligence, it's then able to build a society that literally persists forever in accordance with that ideology. Yeah, the death of dictators is especially interesting when you're thinking about contingency, because, well, yeah, when Mao dies or when Stalin dies, there are huge changes in the regime, which makes you think, yeah, the actual individual there was very important, and who they happened to be was contingent, or at least important in some interesting ways. For sure. So if you've got a dictatorship, then you've got a single person ruling the whole of a society. And that means it's just heavily contingent
Starting point is 00:14:44 what the views and values and beliefs and personality of that person are. Yeah, so going back to stagnation: in the book you're very concerned about fertility, because it seems your model of how scientific and technological progress happens is number of people times average researcher productivity. And then, yeah, if researcher productivity is declining and also the number of people isn't growing that fast,
Starting point is 00:15:06 then that's concerning. Yeah, number of people times the fraction of the population devoted to R&D. Yeah, thanks for the clarification.
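That toy model is easy to write down. Here is a minimal sketch; the functional form and every number are my own illustration of the "people times fraction in R&D" idea, not figures from the book:

```python
# Toy model: idea output per year is proportional to the number of
# researchers, i.e. population times the fraction of people doing R&D.
def ideas_per_year(population: float, rd_fraction: float,
                   productivity: float = 1.0) -> float:
    return population * rd_fraction * productivity

# If researcher productivity falls while population growth stalls,
# total idea output stagnates even with the R&D fraction held steady:
print(ideas_per_year(8e9, 0.001, productivity=1.0))   # baseline
print(ideas_per_year(8e9, 0.001, productivity=0.5))   # productivity halves
```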
Starting point is 00:15:24 It seems that there have been a lot of intense concentrations of talent and progress in history, like Venice, Athens, Bell Labs, or even something like FTX, right? There are 20 developers making this multi-billion-dollar company. Do these examples suggest that maybe the organization and congregation of researchers matters more than the actual total amount? So I actually think the model handles these cases pretty well. So throughout history, you're starting from this very low baseline, a very low technological level compared to today. And most people aren't even trying to innovate.
Starting point is 00:15:51 Or if they're trying to innovate, it might be in things that we wouldn't now call science or technology. So it might be theology. So one argument for why Baghdad lost its scientific golden age is that the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation, in the kind of 10th, 11th century AD.
Starting point is 00:16:15 Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music, and that doesn't compound in the way that making textiles cheaper does. And so if you look at Sparta versus Athens, for example, what was the difference between Sparta and Athens?
Starting point is 00:16:36 I think it's just that they had different cultures, such that in Athens, intellectual inquiry was rewarded. And because they're starting from a much lower base, even just, you know, hundreds of people or thousands of people trying to do this thing that looks vaguely like science, or vaguely like what we now think of as intellectual inquiry, has this enormous kind of impact. I see. But then if you take an example like Bell Labs, right?
Starting point is 00:17:01 So late 20th century, the low-hanging fruit is mostly gone, but then you have this one small organization that produces six Nobel Prizes, I think. So, yeah, is this just a kind of coincidence and lucky break? Yeah, I wouldn't say that at all. And I should acknowledge that the model where what you're working with is just size of the population times what fraction of the population you're putting towards R&D, that's a toy model. It's maybe the simplest model you could have of it. And so Bell Labs, I think, is punching above its weight. I think you obviously can create amazing things from a certain environment where not only are you getting the very most productive
Starting point is 00:17:43 people, but you're putting them in an environment where they're 10 times more productive than they would otherwise be. However, what I would say is that when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society, or the sheer size of a population. I want to talk about your paper on long-termist institutional reform. So, yeah, one of the things you advocate in this paper is that we should have one of the houses of the legislature be dedicated to long-termist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body? Sure, yeah.
Starting point is 00:18:24 I mean, the thing I'll caveat with long-termist institutions is that I'm actually pretty pessimistic about them, in the sense that, you know, I have this paper exploring it, but there's just this fundamental issue that if you're trying to represent, or even give consideration to, future people, you just have to face the fact that they're not around and they can't lobby for themselves, and so you're going to have co-option by, you know, people in the present. However, you could have this assembly of people who have some sort of real regulatory power. How would you constitute that? My best guess is you just have a random selection from the population. How would you ensure the incentives are aligned?
Starting point is 00:19:04 Well, there are things that you can try, like: okay, in 30 years' time, their performance will be assessed by a panel of people who look back and say, okay, were the policies that were being recommended here good or not? And perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. And then, secondly, the people in that 30 years' time, both their policies and their assessment of the futures assembly 30 years before them, get assessed by another assembly 30 years after that. And so on.
Starting point is 00:19:43 And there is some math and economic analysis showing that under certain conditions this checks out. You have this backwards chaining, where people in a thousand years' time are evaluating the people in 970 years' time, who are evaluating the people in 940 years' time, who are evaluating the people in 910 years' time. And can you get that to work? I mean, maybe in theory; I'm, again, a little bit more skeptical in practice. But, you know, I would love some country to try it and see what happens.
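A minimal sketch of that backward-chained incentive structure, with invented dates and scores, just to make the recursion concrete (nothing here is from the paper itself):

```python
# Each assembly's pension is scaled by the retrospective score its successor
# assigns 30 years later; the successor faces the same incentive in turn.
retrospective_scores = {
    2030: 0.80,   # assembly seated in 2030, judged by the assembly of 2060
    2060: 0.95,   # judged by the assembly of 2090
    2090: 0.50,   # judged by the assembly of 2120, and so on
}

for seated, score in retrospective_scores.items():
    print(f"Assembly of {seated}: pension multiplier {score:.2f}, "
          f"set by the assembly of {seated + 30}")
```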
Starting point is 00:20:15 The other thing I should say, actually, is that there is some evidence as well that you can just get people to take the interests of future generations more seriously by just telling them: this is your role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they really did make different policy recommendations than when they were just acting on the basis of their own beliefs or self-interest. Yeah, but if you are on that board that is judging these people from 30 years before you, is there something you'd say is the metric you care about most? Expected, I don't know, future GDP growth, or something you think would be the most informative about how good those decisions were?
Starting point is 00:20:48 Yeah, I mean, there are things you could do. Like, you know, it could be GDP of the country. You could agree on a set of metrics, like, you know, the homelessness rate, perhaps some expert measure of technological progress. I think you would absolutely want there to be expert assessment of risk of catastrophe as well. We don't have this at the moment, but you could imagine you have a panel of superforecasters who are predicting: what is the chance of a war between great powers occurring in the next 10 years? And that gets aggregated into a war index. I think that would be a lot more important an index than the stock market index,
Starting point is 00:21:31 and we don't have it. But you could imagine that being kind of fed in as well, because you wouldn't want something which is just incentivizing economic growth at the expense of tail risks.
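As a minimal sketch of what such a war index could look like, here is one way to pool superforecaster estimates into a headline number; the probabilities and the choice of a median pool are my own illustration:

```python
# Pool individual forecasters' probabilities of a great-power war in the
# next 10 years into a single index value (median is robust to outliers).
from statistics import median

forecasts = [0.04, 0.07, 0.05, 0.12, 0.06]   # hypothetical estimates
war_index = median(forecasts)
print(f"Great-power war index: {war_index:.0%} chance within 10 years")
```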
Starting point is 00:21:57 Would that be your objection to a scheme like Robin Hanson's, about just maximizing expected future GDP using prediction markets and making decisions that way? Yeah, I mean, I think maximizing future GDP is more an idea I associate with Tyler Cowen. It could be any metric, but yeah. Okay, yeah, then Robin Hanson's idea of futarchy, where you've got vote on values, bet on beliefs, and people can just, you know, vote on what collection of goods they want to have, where GDP might be one of them, but also the unemployment rate, or also whatever. Beyond that, it's just pure prediction markets.
Starting point is 00:22:15 Again, it's something I'd love to see tried. And I think it's an idea in the vein of speculative political philosophy, reasoning about how a society could be structured in an extraordinarily different way, and that kind of thinking is incredibly neglected. Do I think it will work in practice?
Starting point is 00:22:31 Probably not. Most of these ideas wouldn't work in practice. You do have issues when it comes to prediction markets, where they can be gamed, or they're just simply not liquid enough. It's pretty notable that since he developed those ideas and really worked on prediction markets, there hasn't been a lot of success with prediction markets,
Starting point is 00:22:47 whereas there has at least been a fair amount more success with forecasting. Now, perhaps you can solve those things: you have laws about what things can and can't be voted on, or predicted, in the kind of grand prediction market. You may have government subsidies to ensure there's enough liquidity. But overall, I think it's pretty promising, and I'd love to see it. You know, you could try it out at a city level or something, and see how it goes. Let's take a scenario where the government starts taking the impact
Starting point is 00:23:15 on the long term seriously, and institutes some reforms to integrate that perspective. You can take a look at the environmental movement for an example of this, where, you know, there are environmental review boards that will try to assess the environmental impact of new projects, and they can reject proposals on this basis. And so the impact here, at least in some states and in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these sorts of mechanisms in order to, in some cases, prevent projects that would actually help the environment. Especially when you're talking about
Starting point is 00:23:50 something like long-termism, where it would take a long time to assess what the actual impact of something is, but policymakers are tasked with evaluating the long-term impacts anyway: are you worried that it would be a system that would be really easy to game by malicious actors? And, like, what do you think went wrong with the way that environmentalism was codified in the law? Yeah, I mean, it's absolutely a worry, as in potentially just a devastating worry, where, yeah, you create something that's trying to represent future people; they're not actually around to lobby for themselves,
Starting point is 00:24:23 so it can just be co-opted. And, yeah, my understanding of environmental impact statements has been similar. And it's kind of for similar reasons, where it's not like the environment can represent itself. It can't say what its interests are. And so what is the right answer there? Again, it's super tough.
Starting point is 00:24:41 Maybe there are these speculative proposals about, you know, having a representative body that assesses these things and is judged by people in 30 years' time. That's the best we've got at the moment. But I think at the moment it's just that we need a lot more thought to see if any of these proposals would actually be robustly good for the long term, rather than just things that are more narrowly focused. So regulation to have liability insurance for dangerous bio labs is not in any way about
Starting point is 00:25:11 representing the interests of future generations, but it's very good for the long term. And so at the moment, I kind of primarily think that if long-termists are trying to change the government, let's focus on a fairly narrow set of institutional changes that are very good for the long term, even though they're just not in the game of representing the future. That's not to say I'm opposed to all such things, but there are major implementation problems with any of them. I see. I guess if we don't know how we would do it correctly, do you at least have an idea of what went wrong? Like, how could environmentalism have been
Starting point is 00:25:42 codified better? Why did that, in some cases, turn out badly? Yeah, honestly, I just don't have a good understanding of that. I don't know if it's intrinsic to the matter, or if you could have had some system that wouldn't have been co-opted in the long term. Okay, so are corporations the most long-termist institutions we have today? Like, their incentive, theoretically, is to maximize future cash flow, so at least explicitly and theoretically they should have an incentive to do the most good they can for their own company, which implies that, yeah, if there's an existential risk, then the company can't be around.
Starting point is 00:26:18 Yeah, I don't think so. I mean, I think different sorts of institutions have different rates of decay associated with them. So a corporation, even a corporation that is in the top 200 biggest companies, I think has a half-life of only about 10 years. They're actually surprisingly short-lived. Whereas if you look at, say, universities, well, you know, Oxford and Cambridge are kind of 800 years old. I think the University of Bologna is even
Starting point is 00:26:40 older. These are very long-lived institutions. And you do get things like Corpus Christi College at Oxford making a decision about whether it should have some new tradition that would recur only every 400 years. And it's like, yeah, that's the sort of decision it makes, because it's such a long-lived institution. Similarly, religions can be even longer-lived again. And I think that kind of natural half-life really affects what sort of decisions a company versus a university versus a religious institution would make.
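To see how much a half-life matters for planning horizons, here is a quick calculation; the 10-year figure is the one cited above, while treating a university's longevity as an 800-year half-life is purely my own illustrative assumption:

```python
# With exponential decay, the chance an institution survives t years is
# 0.5 ** (t / half_life).
def survival_probability(years: float, half_life: float) -> float:
    return 0.5 ** (years / half_life)

for institution, half_life in [("top-200 company", 10), ("university", 800)]:
    p = survival_probability(100, half_life)
    print(f"P({institution} still exists in 100 years) = {p:.1%}")
# company: ~0.1%; university: ~91.7% -- very different planning horizons.
```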
Starting point is 00:27:15 But does that suggest maybe there's something fragile and dangerous about trying to make your institution last for a long time, if companies try to do that and they're not able to? Yeah, I mean, companies are composed of people. You know, is it in the interest of a company to last for a long time? Well, is it in the interest of the people who constitute the company, the CEO and the board and the shareholders, for that company to last a long time? And it's like, no, they don't particularly care.
Starting point is 00:27:40 At least, most don't; some of them do, but most don't. Whereas with other institutions, yeah, I mean, I think it goes both ways, where in some cases, and this is the issue of lock-in that I talk about at length in What We Owe the Future, you get these moments of plasticity, the formation of a new institution, whether that's the, you know, Christian church or the Constitution of the United States, and that locks in a certain set of norms. And that can be really good if the set of norms and laws is good. Like, I think the U.S. Constitution, I don't know, looking back, it seems kind of miraculous or something. It was the first democratic constitution, as I understand it, and it was created over the period
Starting point is 00:28:28 of four months. It really seems to have stood the test of time. Or, alternatively, it could be extremely dangerous. I mean, we'll stick with the U.S. Constitution: there were horrible things in there. And the legal right to slavery was proposed as a constitutional amendment; if that had gone in, that would have been a horrible piece of lock-in. And so I think it's hard to answer in the abstract, because it really depends on what the thing is that's persisting for a long time. You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is? Yeah, I think this specific time is a moment of plasticity for two reasons.
Starting point is 00:29:07 One is that the world is unified in a way that's very historically unusual. You can communicate with anyone, basically instantaneously. And there's a great diversity of moral views. So we can have arguments; we can, you know, like people coming on your podcast, debate what's morally correct. It's plausible to me that one of many different sets of moral views might win out, or become the most popular, ultimately. And then, secondly, we're in this period where things really can change. But it's a moment of plasticity because it also, at least plausibly, could come to an end, where I think there are various ways that, in the coming decades or centuries,
Starting point is 00:29:45 the moral change that we're used to could end. So if there was a single global culture or world government, again, like before, if there was a global communist state or global Nazi state, or another sort of world government that preferred ideological conformity, then combined with technology, I think it becomes kind of unclear why that would end over the long term.
Starting point is 00:30:11 And I think the key technology here is artificial intelligence, where at the point in time, which may be sooner than we think, for all we know, where the rulers of the world are digital rather than biological, that could persist.
Starting point is 00:30:26 You know, once you've got that, plus kind of global hegemony of a single ideology, then there's not much reason, it seems to me, for that set of values to change over time. You've got immortal leaders and no competition. And what are the other kinds of sources of value change over time? I think they can be accounted for too. But isn't the fact that we are in a time of interconnectedness, which won't last if we settle space,
Starting point is 00:30:52 isn't that a reason for thinking that lock-in is not especially likely? If your overlords are many, many millions of light years away, then how well can they control you? Well, I think the worry I have is that the control will happen before the point of space settlement. So I think it's totally right that if, you know, one day we take to space, and there are many different settlements of different solar systems, and they are pursuing different visions of the good, then I think you're probably going to maintain diversity for a very long time. Just given the physics of the matter, I think once a solar system has been settled, it's very hard for other civilizations to come along and conquer you, at least if we're
Starting point is 00:31:34 at a level of technological maturity where, you know, there aren't new groundbreaking technologies to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't say that's very likely, but I think it's seriously on the table, 10% or something. Yeah, so going back to long-termism as a movement: there are many philanthropic foundations that were set up about a century ago, like, you know, the Rockefeller, Carnegie, and Ford Foundations. And they don't seem to be especially creative or impactful, especially today.
Starting point is 00:32:10 Like, what do you think went wrong? Why was there, if not value drift, I guess just some decay of competence and leadership and insight? Yeah, I don't have super strong views about those particular examples, but two natural thoughts. One is that, for organizations that want to persist a long time and keep having influence for a long time, historically they've tended to specify their goals in far too narrow terms.
Starting point is 00:32:35 So one fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years, and then 200 years, for different fractions of the amount invested. But he specified it very specifically: it was to help blacksmith apprentices, and so on. It was like, oh man, this doesn't make much sense
Starting point is 00:32:58 once you're in the year 2000. Whereas he could have said something much more general, like: for the prosperity of people in Philadelphia, for the prosperity of people in Boston. And then it would have had, at least plausibly, more impact.
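The arithmetic of a century-scale bequest is worth seeing; here is a quick sketch, where the 5% annual return is my own illustrative assumption rather than a figure from the conversation:

```python
# Compound growth of a 1,000-pound bequest over Franklin-style horizons.
principal = 1_000
rate = 0.05  # assumed annual return, for illustration only

for years in (100, 200):
    value = principal * (1 + rate) ** years
    print(f"After {years} years at 5%: roughly {value:,.0f} pounds")
# After 100 years: ~131,501 pounds; after 200 years: ~17,292,581 pounds.
```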
Starting point is 00:33:26 The second is just maybe a regression-to-the-mean argument, where, you know, you have some new foundation and it's doing an extraordinary amount of good, as I think the Rockefeller Foundation did. Just over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension, just as a matter of changing the people who are involved: if there are some people who are exceptionally competent and far-sighted, the next people, just statistically, are probably going to be less so. So going back to that dead-hand problem, where if you specify your mission too narrowly, then, yeah, it doesn't make sense in the future: is there a trade-off where, if you're too broad, then again you give future actors, maybe they're malicious,
Starting point is 00:33:55 or maybe they're just not as smart or as creative as you are, the ability to take the movement in ways that you would not approve of? So if it just says "do good for Philadelphia," then, yeah, it just turns into something that Ben Franklin would not have thought is good for Philadelphia. Yeah, I mean, it depends crucially on what your values and views are, where if Benjamin Franklin, and I don't think this was true, but if he was like, no, I just really care about blacksmith apprentices and nothing else matters,
Starting point is 00:34:14 then he was correct to specify it in that narrow way. But I think, as a matter of fact, certainly with my own values, but I think more generally too, values tend to be quite a bit broader than that. And then,
Starting point is 00:34:34 secondly, in general, I expect people in the future to be smarter and more capable. That's certainly the trend over time. In which case,
Starting point is 00:34:44 if, you know, we're sharing similar broad goals and they're implementing them in a different way, then I think probably they're right and I'm wrong. Let's talk about how good we should expect the future to be.
Starting point is 00:34:55 Have you come across Robin Hanson's argument that in the future we'll all just be subsistence-level ems, because there'll be a lot of competition and you'll just try to minimize compute per digital person, which will be a miserable, barely-worth-living experience for every entity? Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. So subsistence means that you get a kind of balance of income per capita
Starting point is 00:35:26 and population growth, such that if they were any poorer, then deaths would be outweighing additional births. That actually doesn't tell you about their well-being. So you could be very poor as an emulated being and yet be in bliss all of the time. That's perfectly consistent with the Malthusian theory. And so it might still seem very far away from the best possible future, but that future still could be very good.
Starting point is 00:35:57 Those ems, while at subsistence, still could have lives thousands of times better than ours. Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned this study you had commissioned, where you were trying to find out whether people in the developing world think their lives are worth living. And it turns out that 19% of Indians would not want to relive their life, every moment of it. But I think it was 31 percent of Americans who said that they would not. So why are Indians seemingly much happier at less than a tenth of the GDP per capita? Yeah, I think the numbers are lower than that, from memory at least. It depends exactly on the question asked, but from memory, it's something more like 9% of Indians wouldn't want to live their lives again
Starting point is 00:36:41 if they had the option, and like 13% of Americans or something. But you are right that on this metric of how many people are happy to have lived, versus how many people think that they are not happy to have lived, the Indians we surveyed were more optimistic about their lives, happier with their lives, than people in the US were. Honestly, I just don't want to generalize too far from that, because we were sampling comparatively poor Americans and comparatively well-off Indians,
Starting point is 00:37:10 so perhaps it's just a sample effect. There are also weird interactions with Hinduism and the belief in reincarnation that I think could just mess up the generalizability of this as well. So on the one hand, I basically don't want to draw any strong conclusion from that. But it is pretty striking as a piece of information, given that normally what you find when you look at people's well-being is that richer countries are, you know, considerably happier than poorer countries, on average at least. Yeah, I guess you do generalize in the sense that you use it as evidence that most lives are worth living, that most lives today are worth living, right?
Starting point is 00:37:50 Yeah, exactly. So I put together various bits of evidence where, very approximately, like 10% of people in the United States and 10% of people in India seem to think that their lives are negative. They think they contain more suffering than happiness. They wouldn't want to be reborn and live the same life if they could. And if you look at other studies as well, there's another study that just looks at people in the United States and other generally rich countries and asks them
Starting point is 00:38:22 about how much of their conscious life they would want to skip if they could, whereby skipping it just means you blink and then you come to the end of whatever activity you're engaging with. So perhaps I hate this podcast so much that I would rather not be conscious than be talking to you, in which case I would have the option of skipping (obviously not true), and, you know, it would be 30 minutes later and it would all be done. If you look at that, and then also ask people about the trade-offs they would be willing
Starting point is 00:38:57 to make, as a measure of the intensity of how much they're enjoying or not enjoying a certain experience, you get the conclusion that, from memory again, a little over 10% of people on balance regarded their life, or in fact the day that was being surveyed, as worse than if they'd been unconscious the entire day. Jumping topics here a little bit: on the 80,000 Hours podcast, you said that scientists who are explicitly trying to maximize their impact
Starting point is 00:39:38 might, in trying to do so, actually have an adverse impact, because, yeah, they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but that might be more important. Do you think this could be a general problem with long-termism: that if you're really trying to find the most important things for the long term, you might be missing things that wouldn't be obvious thinking this way? Yeah, I think it's a risk. So among the ways that people could argue against my general set of views, this is one that I take seriously. So, you know, I argue that in general we should be doing fairly specific and targeted things, like trying to make AI safe and well govern the rise of AI, to reduce worst-case pandemics that could kill us all, to prevent a third world war, to ensure that good values are promoted, and to avoid value lock-in. But what some people could argue, and people like Tyler Cowen, and perhaps Patrick Collison, I think, take this line,
Starting point is 00:40:24 is: man, it's just very hard to predict the future impact of your actions, and it's kind of a mug's game to even try. So instead, what you should do is just look at what things have done loads of good consistently in the past, and try to do the same things. And they in particular might argue that that means technological progress; it might mean boosting economic growth. Yeah, I guess I just dispute that. But it's not something I feel I can give a completely knockdown argument to, because it's about, you know, when will we find out who's right?
Starting point is 00:41:04 Like, maybe in, you know, a thousand years' time or something. But one piece of evidence is just the success of forecasters in general. Again, the fact that, and this also is true of Tyler Cowen, but, you know, people in effective altruism were realizing that the coronavirus pandemic was going to be a really big deal from a very early stage, and were worrying about pandemics far in advance. I think there are some things that are just actually quite predictable. So Moore's Law has held up for over 50 years. I think the idea that AI systems are going to get much, much larger, and the leading models are going to get more and more powerful: that's on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power;
Starting point is 00:41:39 again, I think that's not actually that controversial a claim. And so even though I think that, yeah, for loads of things, it's just very hard to predict and there are going to be tons of surprises, there are some things, and I think especially when it comes to fairly longstanding
Starting point is 00:42:09 technological trends, where we really can make reasonable predictions, at least about the range of possibilities that are really on the table. But it kind of sounds like you're saying the things we know are important now are important now. If something did turn out, a thousand years on, looking back, to be very important, it wouldn't be salient to us now. What I was saying is, with me versus Patrick Collison and Tyler Cowen, who is correct? Well, in some sense, we will only get that information in a thousand years' time, because we're talking about which strategy is going to have a bigger impact on the long term. We might get suggestive evidence earlier, though.
Starting point is 00:42:53 If me and others engaging in long-termism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are relatively clearly reducing certain risks, I think that's pretty good evidence in favor of our strategy. Whereas if, in contrast, they're doing all sorts of stuff, not really trying to have firm predictions about what's going to happen, but then things just pop out of that where we think, oh, that was really good from a long-term-future perspective; well, let's say we measure this in 10 years' time, then that would be good evidence for their view.
Starting point is 00:43:40 What you were saying earlier about the contingency of technology implies that, even given their worldview, where you try to maximize what in the past has had the most impact, if what's actually had the most impact in the past is changing values, then changing values, rather than trying to change the rate of economic growth, might be the most important thing? Yeah, I mean, I really do take seriously the argument of: look at how people acted in the past, especially people who were trying to make a long-lasting impact, and ask what things they did that made sense and what didn't. So towards the end of the 19th century, John Stuart Mill and the other early utilitarians
Starting point is 00:44:14 had this little long-termist wave, where they started taking the interests of future generations very seriously. And their main concern was that Britain would run out of coal, and therefore future generations would be impoverished. And it's pretty striking, because they had a very bad understanding of how the economy works. They didn't predict that we would be able to transition away from coal because of continued innovation. And secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. And so that particular action just didn't make any sense, given what we know now. In fact, that particular action, to keep coal in the ground in the Britain of the time, where, to be clear,
Starting point is 00:45:00 we're talking about much smaller amounts of coal, so the climate change effect is kind of negligible at that level, probably would actually have been harmful. But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first MP, I think in fact even the first politician in the world, to promote women's suffrage. That seems to be pretty good; that seems to have stood the test of time. And, you know, that's one historical data point, but potentially we can draw a kind of more general lesson there.
Starting point is 00:45:40 Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? I mean, on the positive side, maybe it helps prevent some dangerous tech from taking off; but, yeah, on the negative side, it prevented human challenge trials, and maybe it causes some sort of lock-in in the future. What do you think about that trend? Yeah, the question of global integration: you're absolutely right, it's double-sided. On the one hand, it can help us reduce, you know,
Starting point is 00:46:02 global catastrophic risks. So the fact that the world was able to come together and ban chlorofluorocarbons was, you know, one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it just means we all converge
Starting point is 00:46:20 to some kind of monoculture and we lose out on diversity, well, that's potentially pretty bad. We could actually lose out on most possible value that way. And I think the solution is: you do the good bits and don't have the bad bits. So, you know, with a liberal constitution, you can have a country
Starting point is 00:46:43 that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. And so similarly in the world: you could have very strong, you know, regulation and treaties that deal just with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without thereby having some very strong-armed global government that implements a particular vision of the world.
Starting point is 00:47:25 Which way are we going? At the moment, it seems to me like we've been going in a pretty good and not too worrying direction, but I think that could change. Yeah, it seems the historical trend is that when you have a federated political body, even if constitutionally the central power is constrained, over time it just tends to gain more power. You can look at the US; you can look at the European Union. But yeah, that seems to be the trend.
Starting point is 00:47:52 Yeah, and I think that, again, depending on the culture that's embodied there, is potentially a worry. It might not be, if the culture itself is liberal and promoting of moral diversity and moral change and moral progress. But that needn't be the case. Now, your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before
Starting point is 00:48:16 that idea gains common purchase. To the extent that you think this is a very important century, and I know you have disagreements about that with others, but to the extent that's true, does that mean that maybe there just isn't enough time for long-termism to gain power that way, by changing moral values? Yeah, I mean, there are lots of people I know and respect very well who think that artificial general intelligence
Starting point is 00:48:37 will very, very likely lead to, you know, singularity-level technological progress, extremely rapid rates of technological progress, and that that will happen more likely than not within the next 10 or 20 years. And if so, then you're right: value changes are something that pays off slowly over time.
Starting point is 00:48:59 I mean, in the world today, so, I talk about moral change taking centuries, but that's definitely more true historically. I think we can have much faster change now. So, you know, the growth of the effective altruism movement is something I know well, and that's growing at something like 30
Starting point is 00:49:19 percent per year. Compound returns mean that it actually doesn't take long. That's not change that happens on the order of centuries. I think if you look at other moral movements, like the gay rights movement: very fast, very fast moral change by historical standards. So I think, yes, if you're thinking, look, we've got 10 years till the end of history, then probably don't just very broadly try to promote better values. But I think we should have at least a very significant probability mass on the idea that we will not hit some historical end point this century. And in those worlds, promoting better values could pay off very well.
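The compounding arithmetic is quick to check; here is a small sketch, taking only the 30%-per-year figure from the conversation (everything else is illustration):

```python
# How fast does a movement growing 30% per year compound?
import math

growth_rate = 0.30
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: about {doubling_time:.1f} years")            # ~2.6 years

years_to_1000x = math.log(1000) / math.log(1 + growth_rate)
print(f"Time to grow 1000-fold: about {years_to_1000x:.0f} years")  # ~26 years
```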
Starting point is 00:50:09 Have you heard of Slime Mold Time Mold's potato diet? I have indeed heard of the Slime Mold Time Mold potato diet. And I was tempted, I was tempted as a gimmick, to try it. But they're onto something, because, as I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on just buttery mashed potatoes if you occasionally supplement with something like lentils or oats. Yeah, okay, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral-philosophy arguments for EA?
Starting point is 00:50:38 Or, yeah, I'm curious about that. Yeah, I mean, it's very open to me what I should do. But my best guess is, and, you know, I do spend significant amounts of time co-founding organizations, or being on the board of those organizations I've helped to set up, and, more recently, yeah, working very closely with the Future Fund, you know, Sam's, SBF's, new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I ought to do longer term, it's certainly the thing that plays to my strengths better:
Starting point is 00:51:11 you know, it's developing ideas, trying to get the big picture roughly right, and then communicating the ideas in a way that's understandable, and getting more people to, you know, get off their seats and start to do a lot of good, especially for the long term. And I think I've had a lot of impact that way. And from that perspective,
Starting point is 00:51:31 having an Oxford professorship is pretty helpful. Sure, yeah. By the way, you mentioned in the book and elsewhere that there's a scarcity of people thinking about these big-picture questions: how contingent is history? Are people happy generally? Are these just questions that are too hard for other people, or do they just not care enough? Like, what's going on? Why are so few people talking about this? I just think there are many, many issues that are enormously important but are just not incentivized anywhere in the world, where companies don't incentivize work on them because they're too big-picture. So some of these are: Yeah, is the future good rather than bad? If there was a global civilizational collapse, would we recover? How likely is a centuries-long stagnation?
Starting point is 00:52:15 There's almost no work done on any of these topics. And, yeah, companies aren't interested; it's too grand in scale. And then academia, I think, has just developed a culture where you don't tackle such problems. Partly that's because they fall through the cracks of different disciplines, and partly because they just seem too big, or too grand, or too speculative, whereas academia in general is much more in the mode of making these kind of incremental gains in our understanding. But it didn't always used to be that way.
Starting point is 00:52:47 If you look back, before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of not just ethics and political philosophy, but also metaphysics and logic and epistemology, and probably the natural sciences and economics too. And, you know, I'm not saying that all of academic inquiry should be like that, but should there be at least some people whose role is to really think about the big picture?
Starting point is 00:53:19 And I think: yes. Will I be able to send my kids to MacAskill University? What's the status on that project? I'm really pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person
Starting point is 00:53:39 who's very excited about making it happen. Will it go ahead? I mean, time will tell. But, yeah, I just think you can do education far, far better than currently exists, and I also think you can probably do research far, far better than currently exists. It's extremely hard to break in, especially to create something that's very prestigious, because the leading universities are almost all hundreds of years old. But maybe it's possible,
Starting point is 00:53:59 and I think it could generate enormous amounts of value if we were able to pull it off. Yeah, okay, excellent. All right, so the book is What We Owe the Future, and I understand pre-orders help a lot, right? So, yeah, it was such an interesting read, because how often does somebody write a book about the questions they consider to be the most important, even if they're not the most popular questions?
Starting point is 00:54:17 Just that kind of big-picture thinking, but where you're also looking at very specific questions and issues that come up. It was just a super interesting read. Great, thank
Starting point is 00:54:33 you so much. Anywhere else they can find you, or any other information that they might need to know? Yeah, sure. So What We Owe the Future is out on August 16th in the US and the 1st of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to try to use your time and money to do good: Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income, 10% or more, to the charities that do the most good, and it has a list of recommended charities. 80,000 Hours, if you want to use your career to do good, is a place to go for advice on what careers
Starting point is 00:55:08 really have the biggest impact of all, and they provide one-on-one coaching too. And so, yeah, if you're feeling inspired, if you think, look, I actually really want to do good in the world, I care about future people and I want to help make their lives go better, then, as well as reading What We Owe the Future,
Starting point is 00:55:25 Giving What We Can and 80,000 Hours are the places you can go to get involved. Awesome. Thanks so much for coming on the podcast. This was a lot of fun. Yeah, I loved it. Cool. Thanks for watching. I hope you enjoyed that episode. If you did, and you want to support the podcast, the most helpful thing you can do is share it on social media and with your friends.
Starting point is 00:55:47 Other than that, please like and subscribe on YouTube and leave good reviews on podcast platforms. Cheers. I'll see you next time.
