The Joe Walker Podcast - Against Bayesianism — David Deutsch

Episode Date: April 24, 2022

Bayesianism, the doctrine that it's always rational to represent our beliefs in terms of probabilities, dominates the intellectual world, from decision theory to the philosophy of science. But does it... make sense to quantify our beliefs about such ineffable things as scientific theories or the future? And what separates empty prophecy from legitimate prediction? David Deutsch is a British physicist at the University of Oxford, and is widely regarded as the father of quantum computing. He is the author of The Fabric of Reality (1997) and The Beginning of Infinity (2011). Full episode transcript available at: thejspod.com. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:00 You're listening to the Jolly Swagman Podcast. Here's your host, Joe Walker. Ladies and gentlemen, boys and girls, swagmen and swagettes, welcome back to the show. It has been a while. I feel like I was saying that only a little over six months ago after my last hiatus. As with the last hiatus, work responsibilities have kept me from podcasting up a storm, but it's wonderful to now be back in the saddle. Running and hosting this podcast is a singular privilege and so much fun, as I think you're going to be able to tell from the next few conversations I will be releasing. This episode represents the beginning of season six of the show and our first episode of 2022. This year, my goal is to publish 20 timeless podcast episodes. Maybe more, we shall see.
Starting point is 00:00:59 Some housekeeping items before I introduce this episode. First, to everyone who has emailed or Twitter DMed me while I've been on break, thank you for your patience. I hope to reward it. Second, there has still been some great activity on the podcast front, even though I haven't been publishing episodes recently. I have a revamped website and a huge thanks is owed to the wonderful Pete Hartree for this. You can view the website at thejspod.com and there you can view episodes, transcripts and sign up for my weekend newsletter, which has continued going out every weekend, even during my podcast hiatuses.
Starting point is 00:01:34 I've also set up a new studio for in-person podcast recordings. I recorded my first conversation there on the 9th of April. You can check out pictures of the studio on Twitter. My handle is at josephnwalker. Finally, and as always, I love when you write in with ideas or to discuss episodes or things I've shared in my weekend newsletter. I've really enjoyed hearing from listeners over the past six months and many such interactions have turned into real-life catch-ups, projects and partnerships. I'm very lucky to have such a smart, thoughtful and open-minded audience and another one of my goals this year is to try to meet more of you in person including through some live events as well. If you ever want to get in touch with me, you can reach me at joe at thejspod.com. Let me introduce this episode which I've titled Against Bayesianism.
Starting point is 00:02:27 This episode is basically about prediction, what kinds of predictions are illegitimate, what kinds are legitimate, and where Bayesian updating is appropriate. This is not an episode about Bayes' theorem, a wonderful tool that I use intuitively almost every day. The theorem, named after one of its progenitors, the Reverend Thomas Bayes, I say one of because Laplace actually discovered it independently and developed it into the form we use today, tells you how to update a prior belief in light of new information. It's straightforward and fairly well known. It goes like this. The probability of A given B is equal to the probability of B given A multiplied by the probability of A over the probability of B.
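In symbols, writing A for the belief being updated and B for the new information, that verbal statement of the theorem reads:

\[
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
\]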
Starting point is 00:03:12 This episode is not about that rule. Rather, it's about taking up that rule as a hammer and seeing too many things, including scientific theories, as nails. It's a podcast about Bayesianism. Applied to scientific theories, Bayesianism holds that, in the words of my guest, rational credences, that is degrees of belief, obey the probability calculus,
and that science is a process of finding theories with high rational credences, given the observations. Applied to decision-making, Bayesianism is, to quote Ken Binmore, the doctrine that Bayesian decision theory is always rational. Whether in philosophy or macroeconomics, science or decision-making, Bayes' rule rules. But in this episode, my guest attempts to dethrone Bayesianism. My guest is David Deutsch, a British physicist at the University of Oxford. David is widely regarded as the father of quantum computing. He's also the author of two books, The Fabric of Reality, published in 1997, and The Beginning of Infinity, published in 2011. They're extraordinary, zany, brilliant books. They're popular science books, but they're more than that. If you haven't read them, they would be close to the top of my reading list for a generalist slash optimist interested in the world of ideas and in improving the real world. Both are remarkable for the fact that, as many others have observed, they make original contributions to epistemology while being popular science books. It might seem like a quaint indulgence to be discussing epistemology and the philosophy of science at a time when Russia is visiting destruction upon Ukraine and the prospect of a great power conflict feels more salient than at any point in the last few decades. But this podcast is really about progress and how to create it. Progress, scientific, technological, economic, may seem like an obvious good, but there is a growing counterculture hostile to it. I'm publishing this conversation now precisely because of, not despite, the horrors of current affairs, because I've never been more convinced that progress is the only thing keeping us from stumbling sooner or later into the abyss. I hope you enjoy the conversation.

David Deutsch, welcome to the podcast.

Thanks for having me.

I wanted to focus in particular today on prediction and the types of predictions that we can make legitimately.
I'd like to begin with a bit of context. I was hoping you could tell me about Charles Parkin and how he led you to Karl Popper.

Ah, Charlie Parkin. Yes. Well, he was my tutor, as they call them in Cambridge, which is misleading because tutors do everything except teach. I think in Oxford they used to call them moral tutors, but even that is misleading. It's basically a member of the college whose job it is to connect with the particular students they're assigned to and to be on their side. If they have some problem with the bureaucracy or with the university or with the college, if you're in trouble or just if you need something, you go to your tutor.

And there was a thing that you were supposed to go and see your tutor at the beginning and end of each term to check in with them. You know, they check that you're not on drugs and things like that. And I never had anything to say to my tutor. He was a historian, which was then, though not now, pretty far from my interests. So on one occasion I had written this essay just for myself, because I'd read Bertrand Russell's History of Western Philosophy and another book, and I was really taken with this idea of philosophy of science. I thought, I want to write an article about how it's important to do induction right, as it says in Bertrand Russell. And I sent this to Charlie Parkin so that we would have something to talk about when I went to see him. And he said, and I remember this very well, he said: induction, hasn't that been proved wrong by that Popper chappie?

Well, I'd only heard of Popper once before, from my mathematics teacher at school, but I'd never followed it up. So I said, oh well, you know, I didn't know. He said, induction is old hat now. So I thought, well, if it's old hat, I need to know what the new hat is. So I went out and bought a Popper book, and that was the beginning of a complete change of course in my philosophical life.

And I believe you actually met Popper, right? Can you tell me what that was like?

Yes. I met Popper once, unless you count one of his lectures that I went to, where I didn't meet him personally. So I only met him personally once. I went to his home with my boss, Bryce DeWitt, who also wanted to meet Popper. We mainly wanted to tell him that although his philosophy was amazing, groundbreaking, he'd got quantum theory completely wrong. He hadn't even understood what the problem was, let alone what the solution was. And the solution was Everett's multiverse interpretation, or so-called interpretation; it's Everettian quantum theory, as we now prefer to call it. And he just rejected that out of hand, again, for rather silly reasons. So we discussed many things with him, but among other things we got round to discussing Everett, and we explained to him what the problem really was and what he had got wrong. And he listened incredibly carefully and asked all the right questions. You know, we'd been told he was intellectually very arrogant, that he shouts down opposition and so on. None of that. We found him incredibly Popperian in his attitude to ideas. And at the end he said, well, I've got a book in print now, and I'm going to have to change one of the chapters, something like that. I'm going to have to make a radical change to something. I forget what it was. And we thought, wow, we've kind of succeeded. But then when that book came out, and I forget which book it was, he wrote several that mentioned quantum theory, it had none of that in it. It was all back to his original view. So I guess he must have changed his mind back: in the heat of the moment he thought we had a point, but then thought, no, they don't have a point. So that was my one and only meeting with Popper.

Did he show any engagement with the idea, or did he just completely gloss over it?

Oh, no. In the book that he subsequently published, no. As far as I remember, anyway, the only mentions are asides, and they were rather disparaging.

Right. So why was encountering Popper such a pivotal moment in your intellectual development?
Starting point is 00:12:09 It's hard to express exactly why. But for me personally, psychologically, I had my idea of philosophy and the philosophy of science and what philosophy is for and what it can do and so on. Like when I was in school and an undergraduate at first, it was really the everyday view of philosophy. And when I read Russell, it was again the common sense view because induction is common sense. You know, if we see the sunrise every day, then we think it's going to rise the following day as well. You know, there's that kind of thing.
But if we see it not rise, then we know there's something wrong with our theory that it'll rise every day. That's what I thought; that's what Russell thought. And then when I read Popper, I saw that not only was that wrong, but it took philosophy itself to a whole new level. This is serious thinking about what the truth of the matter really is. And so it was the seriousness of Popper which first got me, rather than the content. And it took me actually several years. I've been trying to think back from time to time how long it took me, from the time when I would say, yes, I'm a Popperian, to the time when I actually got it. I think it was about four years and several more Popper books.

When you say the seriousness of Popper, what makes someone serious in that respect?

Well, again, it's very hard to describe in words, but it's following ideas through and insisting that things make sense. The trouble with the theory of induction is that if you follow through any strand of "how can this be?", you end up with a problem, namely that it doesn't make sense. The philosophers call this the problem of induction. The original problem of induction was: we see all these instances of things like sunrises, and we infer that the sun is going to rise again. But that inference is not logically valid. Logic had been developed to quite a high degree already in antiquity, by Aristotle and people, and even Aristotle already realized that this kind of inference is just not a valid inference. It doesn't follow. And then people tried various ideas: okay, maybe it's not logically valid, but there's another form of reasoning, which you can call inductive reasoning, and that somehow makes sense. And every attempt to make that make sense didn't work either. And then I realized that this whole problem is a misconception, because it's just not true that the future is like the past. There's one thing about the future, and this may come into prediction if you want to talk about that later: it's not like the past. It's never like the past. And then the inductivists might say, well, yes, but it's like the past in some ways and different from the past in other ways, so it's approximately like the past. And none of that works.

What Popper did was to say: okay, what are the assumptions that lead to this so-called problem, that is, this refutation of the whole theory? What are the assumptions behind it, one or more of which must be false? So he took seriously that there is something wrong with our theory, not just our theory of scientific knowledge, but our theory of what knowledge is. Knowledge had traditionally been thought of, again since antiquity, as what was later called justified true belief. So knowledge is a kind of belief, and it's true, and it is justified. You can also modify that by saying it's a form of belief that is mostly true, or probably true, or probably justified, or partially justified; everything has been tried along those lines. And Popper realized that the argument, the problem of induction, actually implies that there's no such thing as justified knowledge in the first place, and that we do not need knowledge to be justified in order to use it. There is no process of justifying a theory. Theories, according to Popper, are always conjecture, and thinking about theories is always criticism. It's never a justificatory process; it's always a critical process. So, as David Miller says, a theory doesn't need to have any special credentials to be allowed into science. It's a conjecture. Any conjecture is allowed into science. But once it's in science, it's then criticized, and when it's criticized successfully, it is dropped. So that's where Popper begins. Then he has to answer all the questions, you know, what's the rational reason for acting on theories, and so on. But once you've got the idea that you don't need justification, everything eventually falls into place, and it falls into place in a structure that makes sense.
And it makes sense of science and even beyond science.

So his solution to the problem of induction was to sort of reframe it and say that justification isn't needed in the first instance. For people wanting to read a good summary of his solution to the problem of induction, would you agree that the first chapter of his book Objective Knowledge is probably the best place to go?

Yes, many people say that. I think where you should start with Popper rather depends on where you're coming from, because what I've described is his philosophy of science, narrowly conceived. But he had a very broad attack on different areas of philosophy, basically all the same thing. It's all the idea of starting with problems rather than starting with existing theories, and of criticizing theories rather than seeking justifications for them. Some people come to Popper via his political philosophy, though he denied being a political philosopher. But he was, and he was the greatest so far. Back in Cambridge, it was very hard to find Popper books in bookshops at the time, and there was no internet. Actually, the first one I read was The Open Society and Its Enemies, Volume 2; that's the only one I could find at first. Then I found Volume 1 and read that too. And the only direct connection that had with the philosophy of science was in the underlying approach. That's what attracted me, and that's why I looked further: whenever I went into a bookshop, I first looked for Popper books in the philosophy section. I very rarely found one. But yes, I think Objective Knowledge is a good place to start, or Conjectures and Refutations, which other people also find interesting. As I say, it depends where you're coming from.

Why do you think his books were so conspicuously absent from the shelves of Cambridge bookshops?

I don't know. There is a mystery about Popper's reception in the academic world, which I don't know about. That's the history of ideas, which I'm not an expert on, and I'm more interested in ideas than in the history of ideas, though sometimes the one is needed for understanding the other. But I know that Popper had great difficulty, especially with Oxford and Cambridge, but with the academic world generally. And he would never have come to England, as I understand it, if it hadn't been for Hayek, who was a professor at the LSE, causing the LSE to create a professorship just for Popper, in what I think was called the methodology of science, or the scientific method, something like that. His lectures famously began, and you can find his first few lectures on the internet, the first one begins something like: I want to warn you that although I am called a professor of scientific methodology, and I'm the only one in the British Empire, as he put it at the time, there is no such subject; there is no such thing as scientific methodology. And he goes on brilliantly from there.

That's great. I've heard you recommend The Myth of the Framework as the best of his books, maybe not for beginners, but certainly for people who've already read a few of his books. Why do you recommend The Myth of the Framework in particular?

Yeah, well, it's not the book. Many of his books are collections of essays or lectures, and The Myth of the Framework, the book, is such a collection. What I always recommend is the particular essay within that book called The Myth of the Framework; the book is named after that one essay, that one chapter. I think it's brilliant because it reaches out beyond philosophy of science and philosophy of politics to a general attack on, I don't know what you would call it, relativism, including postmodernism and all sorts of bad ideas about ideas. The actual myth of the framework that he criticizes is the idea that for two people to make progress in a discussion, it is important that they have an area of agreement, and that they locate that and then work outwards from there to create agreement. Now, Popper attacks that idea from all directions. First of all, he says, discussions can be valuable, and usually are, even if you never reach agreement. And this is, I think, a crucial idea of Popper's, because, again, the idea that the objective of a discussion is to reach agreement is authoritarian. The idea is that you're creating together a kind of authority, a kind of uniformity.
Whereas, in fact, all we have is conjectures, and we are going to be wrong in various ways. We're never going to arrive at the final truth, because there are always improvements to be made. And when you have a public controversy, like when people say debates in parliament are useless because nobody ever changes their mind: well, first of all, people do sometimes change their mind, but that is not the point. The point is that by having a debate, you improve. You don't necessarily improve your agreement with the other side, but you improve your understanding of the other side. And, if you're right, you improve your own arguments, so as to be better. If you think about real life, about how people change their minds about things, you can very rarely remember a case where somebody has changed their mind during a debate. And yet, if you look at the big picture, if you look at opinion polls about, say, whether you would live next to a person of a different race: a generation ago it's like 20% of people would, and now 95% of people would. And in that time you can't find anybody who says, oh, right, now I've changed my mind about that, or hardly anybody. What has happened is that they change their view on a larger scale and on a deeper scale, including, in the first instance, the type of reasons that they give themselves for their ideas. There's a marvelous quote, I can't find it, by some moral philosopher, saying that the reason we need moral philosophy is that people change the reasons for their behavior before they change their behavior. You justify it in a different and better way, even though you're justifying the same view and the same behavior as before. You're now changing, and eventually that will lead to your changing your actual behavior in that little way we were talking about. But you never see that, because it happens as a result of a deeper shift. So this, in practice, is what happens; and in theory, the theory of it was first understood, I think, by Popper, and expressed in that essay.

That quote you shared of the moral philosopher also reminds me of that great quote by John Stuart Mill: he who knows only his own side of the case knows little of that.

Right.

How do you define Bayesianism, and why, in your view, is Bayesianism a form of inductivism?

Right. Well, the word Bayesianism is used for a variety of things, a whole spectrum of things, at one end of which I have no quarrel with whatsoever, and at the other end of which I think is just plain inductivism. So at the good end, Bayesianism is just a word for using conditional probabilities correctly.
So if you find that your milkman was born in the same small village as you, and you're wondering what kind of a coincidence that is, you've got to look at the conditional probabilities rather than the absolute probabilities. The relevant chance isn't just one in so many million; it depends on what background population you're taking the estimate against, and so on. If you're not careful, you can end up concluding that your milkman is actually stalking you, and that's because you've used probability wrongly. So that is one end of the spectrum, which I have no quarrel with whatsoever.

At the other end of the spectrum is a thing which is also called Bayesianism but which I prefer to call Bayesian epistemology, because it's the epistemology that's wrong, not Bayes' theorem. Bayes' theorem is true enough, but Bayesian epistemology is just the name of a mistake. It's a species of inductivism, and currently the most popular species. The idea of Bayesian epistemology is, first of all, that it completely swallows the justified-true-belief theory of knowledge. So it's saying: how do we increase our knowledge? Well, we increase our knowledge whenever we increase our credence for true theories. Credence is belief, and belief, according to Bayesian epistemology, is measured by a measure that is basically a probability. In fact, all probabilities are supposed to be these beliefs, which is another mistake, but never mind that for a moment. So the idea of science, and of thinking generally, in Bayesian epistemology, is that we're trying to increase our credence for true beliefs and decrease our credence for false beliefs. And they use Bayes' theorem to show that when you encounter a true instance of a general theory, and you use Bayes' theorem to calculate the new probability of that theory, the new credence for that theory, it has gone up. So the basic plan of Bayesian epistemology is that that is how credences go up, and the way they go down is if you find a counterexample. Credences of theories go up when you find a confirming instance and down when you find a disconfirming instance, and that just is inductivism. It's inductivism with a particular measure of how strongly you believe a theory, and with a particular kind of framework for how you justify theories: you justify theories by finding confirming instances.
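As a minimal numerical sketch of the updating scheme being described here (the hypothesis and all the probability values are invented purely for illustration, not taken from the conversation): a general theory H that predicts an observation E gains credence when E is observed, and loses it outright when E fails to appear.

```python
# Sketch of Bayesian credence updating on a confirming vs. disconfirming instance.
# All numbers below are illustrative assumptions, not anything from the episode.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) by Bayes' theorem, with P(E) expanded over H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5            # starting credence in the general theory H
p_e_given_h = 1.0      # H predicts the instance E outright
p_e_given_not_h = 0.8  # E is also fairly likely even if H is false

after_confirmation = posterior(prior, p_e_given_h, p_e_given_not_h)
print(f"credence after a confirming instance:    {after_confirmation:.3f}")  # ~0.556, up from 0.5

# A disconfirming instance (observing not-E when H predicts E with certainty)
# sends the credence to zero: P(H | not-E) = 0 whenever P(not-E | H) = 0.
after_refutation = 0.0
print(f"credence after a disconfirming instance: {after_refutation:.3f}")
```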
So that is a mistake. Although it's true that, if theories had probabilities (which they don't, but suppose they did; probability and credence are identical, synonymous, in this philosophy), the credence of a theory goes up when you find a confirming instance, the reason it goes up is that it is the deductive part of the theory whose credence goes up, because the instances never imply the theory as a whole. So you want to ask: the part of the theory that's not implied logically by the evidence, why does our credence for that go up? Well, unfortunately, it goes down. And that's the thing that Popper and Miller proved in the 1980s. A colleague, Matias Leonardis, and I have been trying to write a paper about this for several years, to explain why this is so in more understandable terms. Unfortunately, Popper and Miller's two papers on this are very condensed and mathematical, and they use a kind of special terminology that they made up in order to prove this. So the result hasn't been taken on board, and we would like it to be taken on board, but we haven't yet managed to solve the problem, which evidently they didn't either, of how to present this.

I'm curious, are you aware of some of the analogous critiques made of Bayesian decision theory by people like Ken Binmore?

No, I'm not aware of having any quarrel with Bayesian decision theory, unless this is referring to its ambiguity, that you never know, rather like the Duhem-Quine ambiguity in scientific reasoning. If that's what you're referring to, then I do know about it, but I haven't specifically read about it.

No. So I think the best place to start would be Binmore's book, Rational Decisions. In the book, he takes Bayesianism to mean the doctrine that Bayesian decision theory is always rational. And he builds on Leonard "Jimmy" Savage's distinction between small and large worlds. Small worlds are worlds where you can, so to speak, look before you leap; large worlds are worlds where you have to cross that bridge when you come to it. And Savage argued, although everyone in, say, economics, the macroeconomists, seems to have forgotten this, that Bayesian decision theory is only applicable, only sensible, in small worlds, but in large worlds...

Do you mean worlds where there's a finite number of things that you have propositions about, so that when you find that one of them is true you've actually made inroads into the whole set, whereas for infinite things you never make any inroad into the whole set?

Exactly, exactly. And I guess archetypal examples of large worlds are things like high finance, the macroeconomy, et cetera. It just doesn't make sense to apply Bayesian decision theory in large worlds. So Binmore has this long kind of rant against Bayesian decision theory, and he argues that Bayesians are acting as if they've solved the problem of scientific induction, even if they don't explicitly acknowledge that.

I agree that they are, and I agree that that's an error.

So why is the future of civilization unpredictable in principle?

Because it's going to be affected by future knowledge, and future knowledge is unpredictable. The example I give in my book is that if you'd been trying to predict the future of energy production in 1900, you wouldn't have included nuclear energy, because radioactivity had only just been discovered in 1900
and it wasn't known that it could be used to produce energy. And there was no way of predicting that nuclear energy was going to be discovered, because if you had predicted that, it would have been equivalent to already having that knowledge in 1900. That's a logical contradiction: you can't know knowledge that you don't know. Now suppose you'd been magically told that there would be nuclear energy. Then you might have predicted that, okay, carbon dioxide is not going to build up in the atmosphere, because by the middle of the 20th century we're going to have nuclear power and we won't have any use for fossil fuels, or much less use for fossil fuels, anymore. So there won't be global warming. And so you could predict that there won't be global warming on the basis of the best knowledge known in 1900. But it was not known in 1900 that in the mid-20th century there would be an environmental movement that would stigmatize nuclear energy, and so on. So at each stage you don't know what the future of knowledge will be. And these examples illustrate that knowledge can be erroneous as well; that's another thing that Popper took seriously, that false theories also contain knowledge. So this is a nice example to show that it's impossible to know the future that's going to be affected by knowledge.

Now, can you know which parts of the future are going to be affected by knowledge and which aren't? Well, that's also unknowable. We predict the orbit of Mars, but our prediction is only going to be correct if nothing intervenes. And human knowledge could intervene. We might create the knowledge to shift Mars, and we might want to shift Mars for some reason or another. In the next hundred years or thousand years or million years, we might want to do that, and whether we do it or not depends on the knowledge we create. And that knowledge, not just scientific knowledge but all other forms of knowledge, moral knowledge, political knowledge, aesthetic knowledge, might affect the orbit of Mars in the future. But that doesn't mean that it's completely useless to try and predict anything, because we have explanations. So what I've just said is not just a conditional prediction; it's also an explanation, because I have been saying that it would need some extreme changes in the human condition for human knowledge to affect Mars, whereas human knowledge is already affecting the atmosphere and will affect it more in the future. Now, both those things might be false, but every particular theory about how they could be false is subject to criticism and has failed criticism. So our best explanatory theories about the future say that the atmosphere is being affected and will be affected more in the future, and we can therefore conclude that, given what we want from the atmosphere, we would do best to create the knowledge to make it change in the way we want.

So is the key point of differentiation between legitimate prediction and illegitimate prophecy that legitimate predictions rely on good explanations?

Exactly. But legitimate predictions are not justified knowledge. They are conjectures, just like everything else. It's just that their rivals have failed criticism, which doesn't mean they're false; they have just failed criticism. And the rational way of proceeding is to proceed according to the best explanation.
And they are, of course, subject to disappointment.

Yes.

So for most of our species' history, knowledge was sparse and grew slowly, if at all.

Yes.

Does this mean that people could have made better predictions about the near futures of their societies than we can of ours? For example, would it have been easier for someone in Ming China to make predictions about the future of their civilization than it is for a 21st-century American to make predictions about the future of the United States?

Yes.

How could people in the past have known ex ante that their seemingly static circumstances endowed them with this predictive power, without extrapolating their circumstances forward?

The assumption, and it's not just any old prediction, was that nothing would change. Now, sooner or later that assumption is going to be proved false. In almost all cases it was proved false by the destruction of that society, that civilization. Almost all civilizations that have ever existed have been static until they were destroyed, either by nature or by other humans. So whether you call that reliability of prediction or not is a matter of taste. If you predict that nothing is ever going to change, then you can predict that the kind of diseases you have now are still going to be experienced by your great-great-grandchildren, until you're wrong; until it comes to the 400-year mark and your great-great-grandchildren are going to have it much, much worse than you, or much, much better than you. So it depends how you calibrate predictability. In a static society, you can make predictions conditional on the survival of the static society, whether or not you know that you're making them conditionally. In a dynamic society, it's the other way around: given that your society is going to survive, the future is opaque.

So, to come back to good explanations, there are some quotes in The Beginning of Infinity that could potentially be misconstrued as prophecies, and I just wanted to give you the opportunity to clarify them. For example, on page 455 of the paperback,
you talk about how humans will achieve immortality within the next few lifetimes. And I imagine that someone might pounce on that and say, ah, David's contradicting himself, that's a prophecy. But I suppose your retort would be that there's a good explanation underlying that claim. Is that fair to say?

I can't remember the wording. It's perfectly possible that I worded it in a way that was either ambiguous or plain wrong. So if I said humans are going to solve this problem within X years, then that is a prophecy and I shouldn't have put it that way. Now, if I said I expect this to happen, then that technically escapes the criticism of prophecy, but it depends on the context. If "I expect" can be taken in context to mean "I predict", then it is a mistake, and I shouldn't have said it. But if it's a description of my personal conjecture about what's going to happen, then it's accurate, and I think it is so. It is based on an explanation, but the explanation is in a kind of negative form. At present, I see nothing in our existing best theories of biology to suggest that there's a law of physics saying a lifetime has to have a particular finite limit. We know that there are organisms that don't have that limit, like most microorganisms. And the processes that we know of that kill people are all of the form: something goes wrong physically, which we can see and which we could undo if we had the knowledge. So it might not be true; it might yet be discovered why humans can never be immortal, where by immortal I mean that aging won't kill us, though something else might kill us. But if that comment in my book said that nothing is known that mandates a finite lifespan, then it's not prophecy. But if I accidentally phrased it as a prophecy, then I'm wrong.
You know, Popperians shouldn't be as embarrassed about being wrong as many people are.

That's very, very honorable of you. One of my favorite genres, David, is old books about the future. I like reading about how people in the past thought about the future, and I sort of collect these books, mainly to remind myself not to prophesy. Some of the ones I have sitting on my bookshelf downstairs are Toward the Year 2018, which was published in 1968. There's Lester Thurow's book Head to Head; he was an MIT, I think, political scientist, and it was a 1993 book that envisioned Japan and Europe as America's great economic rivals in the 21st century, scarcely making mention of China. There's Servan-Schreiber's book The American Challenge, which envisioned American growth sort of continuing very aggressively into the 21st century. There is Kahn and Wiener's book The Year 2000, which speculated on all of these future technologies that we would have. There is, of course, Ehrlich's The Population Bomb. There's Limits to Growth. There's a book called The Coming War with Japan by Friedman and LeBard. But I'm curious, do you have any of your own favourite examples of failed predictions or doomsaying books that turned out to be false?

Well, some of those you mentioned I've actually read, or at least seen; others are completely new to me, I haven't heard of many of them. I was recently rereading 20,000 Leagues Under the Sea.

I'm not familiar with it.

Oh, Jules Verne, a science fiction book written about 1870, something like that. And it's amazing the things he gets utterly wrong and the things he gets amazingly correct. You know, electric light, submarines; and some of the things he gets very, very wrong. I saw there a nice example. So, Antarctica: it wasn't known at the time whether Antarctica was land or frozen sea, like the Arctic region. In this book they go in a submarine and try to find the South Pole, to see if it can be reached under the ice. And he gives a wonderful argument for why there must be a continent there and not just ice. He says: near the South Pole there are far more icebergs than there are near the North Pole, and icebergs can only form from glaciers, which are on land. The glaciers in the northern hemisphere form in places like Greenland and so on, but not in the Arctic itself, and that's why there are fewer of them; therefore we must expect there to be an Antarctic landmass. And I thought, that's just so typical of explanatory reasoning. There's no induction there.
Starting point is 00:53:37 There's no, you know, we found continents everywhere else, therefore there should be one in Antarctica as well. It's an explanatory theory. It's explaining the phenomenon of icebergs by something that absolutely isn't icebergs. It's a landmass in the middle of the Southern Ocean that's never been discovered. And that's brilliant. And I don't know whether it's even true. I haven't looked it up. I don't know whether there are more icebergs in the southern hemisphere.
I guess it must be true by the same argument. My best guess is that it is true. So you asked for examples of predictions or prophecies that turned out to be false; well, I'm more impressed by the ones that turned out to be true. On the same trip, he predicts that in the future hunting whales will be made illegal. And this is 1870 or something like that. So, yes, I think one thing that's happened is that between the 19th and 20th centuries, speculative fiction, science fiction, turned pessimistic. The 19th century was more optimistic, and when people speculated about the future, they were more likely to have been wrong by overestimating progress than by underestimating it. In the 20th century, there was a sort of congealing of the intellectual climate into a very rigid pessimism, so that a prediction or prophecy could only be taken seriously if it was negative.

There's another book I thought you might mention. I don't know exactly when it was written, but I remember it coming out: Will the Soviet Union Survive Until 1984? This was in the 70s, and his answer was no; or rather, it will survive until approximately 1984, and then it will collapse. I remember this being vilified, basically on the grounds that it was arrogant to assume that the West has it all right and there's nothing viable in the Soviet system, that this was just arrogance on our part, and so on. Whereas that's not true: the book was just giving explanatory arguments about why this edifice could not survive. And he was only five years out. It turned out to be correct, but at the time he was vilified.

You mentioned that switch from optimistic to pessimistic visions of the future, where you've got, in the first half of the 20th century, Isaac Asimov speculating about how wonderful the future could be, and then by the second half of the 20th century you've got, you know, movies like Terminator. And you mentioned that that shift may have been caused by this congealing intellectual environment around pessimism.

Yes.

I'm not sure if I'm offering an alternative explanation or adding to what you've said, but what do you think of the idea that economic and productivity growth began to slow, for whatever reason or reasons, around, say, 1972, 73? And prior to that, where we had this amazing period of growth, people were kind of extrapolating that into the future, and so it made sense to be talking about flying cars only a few decades away, because people had just gone from almost nothing to electricity, telephones, radios, flight. But when that growth started to slow, the pessimism kind of kicked in. Does that make sense to you?

I think that happened, but I don't think, how can I put it, I don't think there's an inexorable
evolution in this thing. I think it happened because of specific mistakes that got embedded in systems, particularly the academic world and governmental bureaucracy, and from there into the wider society. So that, again, the idea that, how can I put it, there's something wrong with aspiring to make radical improvements, that this is hubris, or that this is dangerous, or that this will inevitably have side effects: various versions of this idea are very widespread, and they have caused a slowdown in various areas. Of course, not in all areas. You've only got to look at computers to see that rapid improvement happened during that entire so-called period of stagnation. But in many areas improvement drastically declined. Not to zero, so we still have improvement all the time, but we don't have rapid improvement. We don't have rapid game changes happening anymore. And I think that's completely unnecessary and could be turned around if people changed their attitude.

Yeah, I mean, that's a very optimistic interpretation. I would like to believe that that is true, that there's just something wrong in the culture, some kind of mental problem that we can sort of tweak and get ourselves back on track. That would be much better than thinking that we'd somehow picked all the low-hanging fruit, or something like that.

Yes, no, that's right. That is the epitome of the wrong theory. I mean, I'm a physicist, and so I can judge that in the context of physics. People have said, and I think the prevailing view is, that the reason progress in fundamental physics has slowed down is that we've picked all the low-hanging fruit. But that's not true. There's more low-hanging fruit than there ever was seen before; it's just that picking it is stigmatized.

That speculative fiction book from the 1870s you mentioned: do you remember the author's explanation for why he thought whale hunting would be outlawed in the future?

No. I mean, he says something like, whales have large brains, something like that, I forget. But he was not at all opposed to hunting sea creatures or land creatures; he's quite okay with that. There's a scene where they encounter these other predators who are going to prey on the whales and basically, literally, make mincemeat of them, and he describes this in gleeful tones. So it's not that he's against hunting; he's against whale hunting in particular.

Right, interesting. So for you, a good explanation is an explanation that is hard to vary while still explaining what it purports to explain.

Yes.

But hard to vary by what standard?

Ultimately the standard is that the rival conjectures that have been put forward, or which are on the horizon of being put forward, have been refuted; and what's more, they have been refuted in such a way that it's not just that the particular theories have been refuted, it's that their underlying assumptions have been argued away. When I say refuted, I meant, in this context, argued away,
that nothing like that could happen, because if we were to find somehow a theory with that property, that, for example, in reality the Earth is flat and it just looks round because light travels not in straight lines, or something like that, then that would spoil all sorts of other explanations, which those theories, the flat-Earth theories, do not address. And it looks as though they can't address them. Now, just because it looks as though they can't address them doesn't mean they can't. But we can't switch to a theory that isn't a good explanation, because a theory that isn't a good explanation is obviously false. It's like, you know, I'm not going to step into the path of moving traffic on the motorway, because it looks as though I'd be mashed by the next car that comes along. Now, it's no good saying, well, you might be wrong. Yes, of course, I might be wrong. It might all be a hologram and everything. But it's not rational to make decisions on the basis of what might be true. It's rational to make decisions on the basis of what looks as though it's true, in the sense that the contrary theory looks false.

Yeah. Do you mind if I just make myself a cup of tea?

Yeah, absolutely. I might get a water while you're doing it.

Yeah, because my voice is going unless I lubricate it. Right, I'm back.

Great. Yes, you were saying?

Yes, a few more questions about prediction. So in his book The Precipice, the Australian philosopher Toby Ord takes a Bayesian approach to quantifying existential risks to humanity. He adds up the chance of various existential catastrophes befalling us in the next 100 years and reaches a rough overall estimate: the chance of an existential catastrophe befalling humanity in the next 100 years is one in six. He stresses that it's not a precise estimate, but he thinks it's the right order of magnitude. And the one-in-six estimate takes into account our responses to escalating risks. Question for you, David: should we use base rates like that to estimate the probability of existential risks and help prioritize which ones we address?

Basically, absolutely not. We should not. But I have to qualify that by saying that in some cases the probabilities can be known, because they are the result of good explanations. So, for example, we can calculate the probability that an asteroid from the asteroid belt will hit the Earth in the next thousand years or something. Unfortunately, we don't know the probability that an asteroid will come from somewhere else: from the Oort cloud, from somewhere outside the plane of the ecliptic, from elsewhere in the galaxy, or from another galaxy. We don't know any of those probabilities; there's no way of estimating them. So Bayesian terms are used to give arguments the appearance of being strong, whereas the good ones are already strong arguments; they don't need Bayesianism to justify them. And so what you tend to get is a mixture of good arguments disguised as Bayesian
epistemology with bad arguments that actually use Bayesian epistemology. Toby Ord's book, I haven't read it all, but it definitely makes this mistake of Bayesianism in both senses. That is, a lot of the book is good argument and good proposals, but some of it is just lost behind the mist of prophecy.

Nick Bostrom's vulnerable world hypothesis is the hypothesis that there's some level of technology at which civilization almost certainly gets destroyed unless quite extraordinary and historically unprecedented degrees of preventive policing or global governance are implemented. So, in simple terms, the cost of destructive technologies falls, and we have a diverse set of motivations in the population, there'll always be a few crazy or malevolent people, and it's a near inevitability that those people use those destructive technologies to destroy civilization. Are you familiar with Bostrom's vulnerable world hypothesis?

I disagree with both the conclusion and the argument. Though again, Bostrom is a wonderful writer and a lot of the things he says are very true. If you've read his letter from the future to the present, that's the most uplifting thing I've ever read, I think, and highly optimistic. But I think the argument about technology and the dangers of technology is just wrong. He has this analogy of the urn from which one takes out beads: these are technological discoveries, and the white ones are the beneficial ones, while occasionally there are black ones, the ones that destroy us. And sooner or later we're going to hit a black one, unless we take drastic steps to make sure that, first of all, we take them out more rarely, and second, that we examine them very closely before actually deploying them. I think this is a recipe for totalitarianism. But even worse, it works against the very thing that can rescue civilization, namely the rapid growth of knowledge, whether or not we take totalitarian, draconian steps to try to rein it in. In terms of the analogy, the mistake is that every time we take out a white bead, we reduce the number of black beads. So the probability calculation that's implicit in that metaphor is a mistake. We become more resilient the more we know, especially fundamental knowledge, because fundamental knowledge can protect us from things that we don't yet know about, unlike specifically directed knowledge, which has less of that tendency.

Secondly, it's not true that technology has made us more vulnerable. Our species, Homo sapiens sapiens, depending on how you count species (the terminology is changing too fast to keep track of it), is one of six or eight or maybe more species that had the capacity to create explanatory knowledge. We know that because things like campfires require explanatory knowledge to make them, and we know from the evolution of language that language must have been in use before the structures in our throat that are adapted to making language evolved; it must have been the language use that made them evolve, it couldn't possibly have happened the other way around. So we know that all these other species existed in the past and were capable of what we are capable of, namely creating new explanatory knowledge, and they're all extinct except us. And we know that our species almost went extinct at least once, but probably more than once, in our past as well. And if we come nearer to the present: every civilization before the civilization we call the West, before technological civilization, was also destroyed, some of them after four thousand years; I think that's about the longest a civilization has ever survived, depending on where you draw the line between its creation and what people regard as its destruction, the destruction of the knowledge that kept it going. That's nothing compared with the lifetime of a species. All those civilizations have been destroyed as well, and all of them, all those species and all those civilizations, could have been saved with just a little more knowledge, from our perspective: knowledge of farming and irrigation and that kind of thing, to prevent being destroyed by climate change and so on. So a small amount of knowledge would have saved them. And on the other hand, not a single civilization was destroyed through creating too much scientific knowledge. That's never happened.
Starting point is 01:15:06 So if you're going to be Bayesian, or if you're going to pull these beads out of a hat, then even by that standard, we should be pulling them out faster, not slower. Now, and as for the idea that a small number of people could destroy civilization, well, yes, but that's not the right measure. We have a small number of people who could work on things that could destroy civilization, but we have a large number of people that could be working on countermeasures. Now, you know, could be argued, and I think
Starting point is 01:15:47 there's a very good argument for this, that we are not doing enough of that. That is, we're not doing enough to counteract artificially caused pandemics, for example. Or as Carl Sagan put it, artificially caused meteor strikes. Now, and he said, we shouldn't be developing the technology for fending off asteroids or comets, because it could also be used to destroy us now i think that's a mistake we should be developing that technology and we should be developing it faster and it's again this is something that's actually going in the right direction because um say 20 years ago the idea of an asteroid defense system was ridiculed it was literally ridiculed and rejected just for being ridiculed whereas in that time we have set up a rudimentary asteroid detection system which
Starting point is 01:16:55 can detect asteroids and also there's been research into how to fend them off when we do detect them. Now, they will not fend off asteroids coming from an unexpected direction outside the plane of the ecliptic, nor faster than we expect. We could be vulnerable
Starting point is 01:17:18 to those just because we don't have a fleet of nuclear-powered spaceships. And we're going to be kicking ourselves if one of those heads towards us and we don't have the nuclear-powered spaceships and it's going to take us more than, you know, whatever it is, a year to build them. Now, you know, I don't know what we could do if our lives depended on it. You know, we could, I guess, we could do, like we did with the present pandemic, we could do things that were thought impossible previously.
Starting point is 01:17:55 But there's a level of things that we couldn't do. And we should be, if not making the fleet, we should be creating the knowledge to make the fleet of nuclear-powered spaceships and all sorts of other things. And as I said, the most important knowledge in this respect is fundamental knowledge. And fundamental knowledge is created by things like fundamental science. Fundamental science has been held back by the phenomenon we discussed before, that the academic world has been trapped in a sort of Sargasso Sea of bad assumptions, which have de-emphasized fundamental research in favor of incremental research. And the science funding system doesn't work, and the peer review system doesn't work, and it all goes together to increase the number of scientists doing things that won't save civilization and reduce the number working on things that will.
Starting point is 01:19:17 I wanted to ask you this before, but what practical steps would you take to improve the incentives for fundamental research over incremental research? Well, I'm not an expert on science funding, and the reason for that is, in part, that I have tried to get away from the entire system, the entire academic system, from funding down to academic politics and so on. I don't have a position, an official position, at any university, and I'm not paid by anybody to do research. I write my books, and I am an honorary member of various things in Oxford University, but not a paid member.
Starting point is 01:20:12 So I don't know how things are going. I only know the complaints that my colleagues raise and they are all the same complaints that funding is highly bureaucratic at the moment. So if you have an idea for some research you want to do, you've got to submit it to a committee. The committee consists of 20 people,
Starting point is 01:20:47 none of whom are experts in fields close to yours. Or even if they are, they've got a vested interest in not doing that kind of research, but in directing it towards their kind of research, which is only natural. Now, it used to be, even when I was a student, it used to be that research funding was not done in that way. Research funding, I don't know how the higher levels of it worked, but it was directed towards individual senior researchers, and they disbursed the funds and chose their own graduate students and postdocs to do the research that they thought was important. Those of them who thought that
Starting point is 01:21:34 fundamental research was important didn't have particular projects that they wanted. They were looking for young people who had ideas to do fundamental research. That was certainly true of the bosses that I had when I was a student and when I was a postdoc. Now, that doesn't seem to exist anymore. Now it's the scientific department that has its priorities and which tells the professors what to do. And the professors, well, we had some ridiculous situations a couple of years ago. One example was that we wanted to hire a postdoc to work on the foundations of constructor theory. And the reason we wanted to hire him is that he was the only person in the world who had proved a particular theorem, and we wanted him to use his techniques. Anyway, it was impossible to hire him, because we had to advertise his position
Starting point is 01:22:39 and then make a case to the relevant committee. And what did they know about it? There wasn't a box on the form. There wasn't a box called constructor theory, because constructor theory doesn't exist yet. The whole point was that we're trying to create this new field. And the reason it doesn't exist yet is that it may not exist at all. It's a fundamental conjecture. But it's a fundamental conjecture that is thought worthwhile by me and
Starting point is 01:23:15 several other senior people. That should be enough to fund something. And the same thing has happened with graduate students. And it's ridiculous, by the way, that at least in the parts of the funding system that I see, if you're a young person wanting to do research on a particular thing in a particular department, you've got to apply for the funding in one place, typically the government or some giant charity or something like that, and to get into the department in a different place. So it can happen, like in the example I gave, that the department really wants you, but there's no funding. So sometimes the senior people can arrange a weird arrangement where you're funded for one thing, but you're really going to do another thing. And I think in general, people who apply for grants nowadays are playing a game. You know, you're gaming the system; you're trying to tick as many of
Starting point is 01:24:33 the boxes as you can, and you're trying to pretend that your research is directed towards those boxes, whereas actually it only satisfies the boxes incidentally, and it's really directed towards something else that couldn't get funding. Now, you know, I shouldn't go on and on about this, because this is only a tiny facet of the overall problem. People like Michael Nielsen and Patrick Collison have investigated the problem at a deeper level, although they have much more sympathy with the low-hanging-fruit theory than it deserves. But at least they've gone into the problem in a broader sense than I have. So, you know, you should ask them. Yeah, there's also a great book by Donald Braben called Scientific Freedom. Have you come across that? No. I'd recommend
Starting point is 01:25:34 that to people as well. I think Stripe Press, Patrick Collison's publishing house, recently republished it. But yeah, I'm just getting frustrated listening to you; it just feels like we're self-sabotaging as a species. Yes. And of course the bad guys have no such restrictions, right? There's an asymmetry. Yeah, but really the natural asymmetry is the other way around, you know: the good guys have a natural advantage in this game, but not if we hog-tie ourselves. Exactly. A couple more questions on prediction and probability. Yeah. Are you familiar with Phil Tetlock's research on forecasting? No. No? Okay. Afraid not.
Starting point is 01:26:26 I'll skip over that then. Do you have any explanations as to why frequencies in certain situations can be approximated by probabilities? Yes. It's because there is an underlying physical process for which, if we have good explanations of that process, we can use frequencies to predict probabilities, but not otherwise. And the usual case is otherwise. That is, the usual case is that the frequencies are misleading, especially when something important depends on it.
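A minimal sketch of that point, using a toy example that is not from the episode: when we have a good explanation of the physical process (a fair die), the observed frequency converges on the probability; when the process changes in a way our explanation does not cover (the die is quietly swapped for a loaded one), the accumulated frequency becomes a misleading guide to what happens next.

```python
import random

def frequency_of_six(rolls):
    """Fraction of rolls that came up six."""
    return rolls.count(6) / len(rolls)

random.seed(0)

# Well-understood process: a fair die. The frequency of sixes converges
# on the probability that the explanation of the die gives us (1/6).
fair_rolls = [random.randint(1, 6) for _ in range(100_000)]
print(f"fair die, frequency of six: {frequency_of_six(fair_rolls):.3f}")   # ~0.167

# Changed process: after 50,000 rolls the die is swapped for a loaded one
# that shows six half the time. The overall frequency (~0.33) matches
# neither the old die (1/6) nor the new one (1/2), so extrapolating from
# the accumulated frequency alone misleads us about the next roll.
loaded_rolls = fair_rolls[:50_000] + random.choices(
    [1, 2, 3, 4, 5, 6], weights=[1, 1, 1, 1, 1, 5], k=50_000)
print(f"mixed history, frequency of six: {frequency_of_six(loaded_rolls):.3f}")
```

Nothing in the frequency data itself announces the swap; only the explanation of how the rolls were generated tells you which stretch of the data is a reliable guide.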
Starting point is 01:26:58 Why do you think it took people so long to come up with probability theory? Humans were gambling long before Cardano; the maths isn't particularly difficult. Oh, I think the maths is quite difficult. I mean, if you're going back to Cardano, as he expressed it, as it was then. So, as Bronowski says, after Galileo, scientific research in Southern Europe came to a dead stop. But it continued in Northern Europe, and then we had Leibniz and Newton and Descartes and so on, and they had their problems with authorities, but
Starting point is 01:28:35 they were relatively free to pursue ideas, and they had fundamental ideas. And I don't think it's at all surprising that probability theory wasn't invented earlier. When probability theory was first invented by Cardano and Pascal and those people, it wasn't misused. It wasn't used in the same way as today.
Starting point is 01:29:09 It was, I think, understood by all concerned that this was a theory of how to make a profit playing games, where the process of randomizing was always part of the explanation of why you should take the fact that, you know, three aces have already been dealt as changing the probability of the fourth ace. This was based on a physical understanding of the situation, where a randomizing process had approximated probabilities. And nobody would have, I think, tried to use this for predicting things like, you know, whether there's going to be another continent in an uninhabited, unexplored part of the world. They wouldn't have done that, because they would only have been expecting probability to have this narrow range of uses.
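A small sketch of the card example, with the specific numbers filled in as assumptions rather than taken from the conversation: in a standard 52-card deck, knowing that three aces are already out is physical information about the deck, and together with the shuffle it pins down the chance that the next card is the fourth ace.

```python
import random
from fractions import Fraction

def chance_next_is_last_ace(cards_already_dealt, aces_already_dealt=3):
    """Probability that the next card is an ace, given how many cards and aces are gone."""
    aces_left = 4 - aces_already_dealt
    cards_left = 52 - cards_already_dealt
    return Fraction(aces_left, cards_left)

# Exact answer from the physical description of the deck: say 20 cards have
# been dealt and 3 of them were aces, so 1 ace remains among 32 undealt cards.
print(chance_next_is_last_ace(cards_already_dealt=20))   # 1/32

# Simulating the randomizing process itself (the shuffle) gives the same number.
random.seed(0)
deck = ["ace"] * 4 + ["other"] * 48
hits = trials = 0
for _ in range(200_000):
    random.shuffle(deck)
    if deck[:20].count("ace") == 3:       # condition on three aces being out
        trials += 1
        hits += deck[20] == "ace"         # is the next card the fourth ace?
print(hits / trials)                       # close to 1/32 = 0.03125
```

The exact fraction and the simulated frequency agree because the explanation of the situation, a well-shuffled deck with one ace left among the undealt cards, is doing the work: the probability here is an attribute of the deck and the shuffle, not of anyone's state of mind.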
Starting point is 01:30:19 And it was only in, I don't know, again, I'm not a historian of ideas, but I think it was only roughly the middle of the 20th century when people like Jaynes and so on started advocating a much broader, no, no, it was before that. I think there were already the beginnings of it in the 19th century. But anyway, a much broader interpretation of probability, a subjective interpretation of probability. By the way, I haven't mentioned this: these bad interpretations of probability are all subjective, including Bayesian epistemology. They're all thinking that probability is not
Starting point is 01:31:06 an attribute of the pack of cards; it's an attribute of how we think about the pack of cards. And that's a terrible mistake, which Popper attacks as well. I mean, he's against subjective interpretations of anything except
Starting point is 01:31:22 psychology itself. Speaking of subjective probabilities, and I know you are more interested in the ideas themselves than the history of ideas, but just as an aside, people often go back to Frank Ramsey when thinking about the birth of subjective probability. But it was recently, when reading Popper, that, God, it was in Objective Knowledge. For people who have the book, it's page 79 of Objective Knowledge; it's the second essay, Two Faces of Common Sense. But there's a footnote of Popper's: the theory is often ascribed to Frank Ramsey,
Starting point is 01:32:07 but it can be found in Kant. And I got a bit excited when I read that, because I do get a bit sidetracked indulging in the history of ideas. I agree with you: I find it useful to the extent that it helps you understand the ideas themselves and the debate between different ideas, even if only as a memory device, I suppose. And yeah, I went back to Kant's Critique of Pure Reason, and sure enough, in there he talks about using bets to quantify subjective probabilities. Oh, really? I didn't know that. All the way back to Kant.
Starting point is 01:32:43 Yeah. It's not surprising, I guess, because he was really into the subjective interpretation of knowledge in general. Yeah. But I was like, wow, that's a great pickup by Karl. Yeah. So the last question I wanted to ask you, David, was really something I just started thinking about while we've been talking. And that is, I wonder whether you see the cultural malaise that's been afflicting science
Starting point is 01:33:18 and the kind of careerism and incrementalism as being at all a cause of the continued and perhaps increasing popularity of Bayesian epistemology, because I guess with Bayesian epistemology you can kind of keep tinkering with existing theories rather than coming up with fundamentally new ones. So, sociology is another kind of thing that I'm not particularly interested in. I don't want to psychologize or second-guess why people make the mistakes that they do. I would rather think that Bayesian epistemology is just one facet of a much larger thing. So it's not that Bayesianism has caused all the trouble in the world; it's that all the trouble in the world has caused Bayesian epistemology. However, it is striking that in Bayesian epistemology it's all about increasing the authority of a theory, which in the big picture is all about
Starting point is 01:34:42 increasing authority, which means that there's, you know, "let's follow the science", as recently people have been saying about the pandemic and so on. As if science had some authority, had a moral authority or a finality or an indisputableness about it. And at the same time, Bayesian epistemology undervalues criticism, because everything in Bayesian epistemology is focused on increasing our credence for something, and, okay, we have a refutation, which reduces it to zero. So it's a kind of structureless conception of how theories can fail. According to that theory, they fail all at once, when they are refuted by experiment. Whereas in reality, in the Popperian conception, science consists entirely of criticism, or rather of conjecture, which is a thing that we don't know how to model; but theories don't have a source other than conjecture, and the whole rich content of scientific reasoning comes in criticism, a small part of which is
Starting point is 01:36:16 inventing experiments and doing them. But most criticism is structural criticism of the theory qua explanation. And most theories are rejected for being bad explanations rather than actually refuted. And even when there is an apparent refutation, we don't take it seriously unless there's an explanation for it. Again, the example I gave in my book is the fuss that was made when some people thought that they'd found neutrinos that travel faster than light. And they were thinking, oh, general relativity is refuted, and so on. And actually, the explanation was that there was a faulty connector in some of their electronics. And that was it. That was the whole explanation for the neutrinos appearing to travel faster than light. Now, this is the Duhem-Quine critique of science in general. It does not apply to Popperian epistemology, but it does apply to Bayesianism.
Starting point is 01:37:34 Because in Bayesianism, you never know whether your credence for the integrity of the experiment should be reduced or your credence for the theory should be reduced. Bayesian epistemology doesn't give a criterion for which of those to choose, and nor does Popperian epistemology; but Popperian epistemology has an alternative account of what you should be doing, namely trying to find explanations. And then, when you have found the explanations, it's not that your probability or your credence for them changes. It's that their rivals become bad explanations.
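A minimal sketch of the credence problem being described, with every number invented purely for illustration: Bayes' theorem will happily split the blame for a surprising result between "the theory is false" and "the experiment is faulty", but the split it returns is fixed entirely by the priors and likelihoods you chose to feed in.

```python
# Toy model: an anomalous result (say, apparently faster-than-light neutrinos)
# could be blamed on the theory being false or on a faulty experiment.
# All numbers are made up for illustration; the two causes are treated as
# exclusive to keep the arithmetic short.

def posteriors(prior_theory_false, prior_exp_faulty,
               p_result_if_theory_false=0.9,
               p_result_if_exp_faulty=0.9,
               p_result_if_neither=0.001):
    """Posterior blame for the anomaly, given priors and likelihoods."""
    prior_neither = 1 - prior_theory_false - prior_exp_faulty
    joint = {
        "theory false": prior_theory_false * p_result_if_theory_false,
        "experiment faulty": prior_exp_faulty * p_result_if_exp_faulty,
        "neither": prior_neither * p_result_if_neither,
    }
    total = sum(joint.values())
    return {cause: round(p / total, 3) for cause, p in joint.items()}

# Identical evidence, different priors: the verdict about where to lay the
# blame simply reverses, and nothing in the theorem says which priors to use.
print(posteriors(prior_theory_false=0.05, prior_exp_faulty=0.01))
print(posteriors(prior_theory_false=0.01, prior_exp_faulty=0.05))
```

The arithmetic itself is uncontroversial; the point is that nothing inside it tells you which priors to use, whereas finding the faulty connector is an explanation of the result, not a re-weighting of credences.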
Starting point is 01:38:23 And that's how we make progress. Yes. David Deutsch, thanks so much for your time. It's been fun. Nice chatting. Thank you so much for listening. I hope you enjoyed that conversation as much as I did. For show notes and the episode transcript,
Starting point is 01:38:40 head to my website, thejspod.com. That's thejspod.com. Until next time, thank you for listening. Ciao.
