The Joe Walker Podcast - David Deutsch & Steven Pinker (First Ever Public Dialogue) — AGI, P(Doom), & The Enemies of Progress

Episode Date: December 19, 2023

At a time when the Enlightenment is under attack from without and within, I bring together two of the most thoughtful defenders of progress and reason, for their first ever public dialogue. Steven Pinker is the Johnstone Professor of Psychology at Harvard University. I think of him as providing the strongest empirical defence of the Enlightenment (as seen in his book Enlightenment Now). David Deutsch is a British physicist at the University of Oxford, and the father of quantum computing. I think of him as having produced the most compelling first-principles defence of the Enlightenment (as seen in his book The Beginning of Infinity). Full transcript available at: https://josephnoelwalker.com/153-deutsch-pinker-dialogue. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:00 Hello and welcome to this very special episode to end the year. I have two repeat guests joining me for their first ever public dialogue. Steven Pinker is an experimental psychologist based at Harvard, and David Deutsch is a physicist based at Oxford. Needless to say, it was a great honor to arrange and moderate their first ever public dialogue. We begin by discussing at length the prospect of artificial general intelligence. This was the least moderated part of the discussion, partly because I was having some technical issues, but internet speed aside, I would have been tempted to just let
Starting point is 00:00:50 Steven and David keep talking about AGI because it was just so fascinating. But equally fascinating were the topics we discussed after AGI, which include Bayesianism and prediction markets, heritability and universal explainers, and dual-use technology and possible limits to progress. All right, let's dive in. Enjoy. Today, I have the great pleasure of hosting two optimists, two of my favorite public intellectuals and two former guests of the podcast.
Starting point is 00:01:23 I'll welcome each of them individually. Steven Pinker, welcome back to the show. Thank you. And David Deutsch, welcome back to the show. Thank you. Thanks for having me. So today I'd like to discuss artificial intelligence, progress, differential technological development, universal explainers, heritability, and a bunch of other interesting topics. But first, before all of that, I'd like to begin by having each of you share
Starting point is 00:01:52 something you've found useful or important in the other's work. So Steve, I'll start with you. What's something you found useful or important in David's work? Foremost would be a rational basis for an expectation of progress. That is not optimism in the sense of seeing the glass as half full or wearing rose-colored glasses, because there's no a priori reason to think that your personality, your temperament, what side of the bed you got up out of that morning should have any bearing on what happens in the world. But David has explicated a reason why progress is a reasonable expectation
Starting point is 00:02:34 in quotes that I have used many times. I hope I've always attributed them, but I use as the epigraph of my book Enlightenment Now, that unless something violates the laws of nature, all problems are solvable given the right knowledge. And I also often cite David's little three-line motto or credo: problems are inevitable, problems are solvable, solutions create new problems which must be solved in their turn. And David, what's something you found useful or important in Steve's work? I suppose this is going to be true of all fans of Steven: that he is one of the great champions of the Enlightenment in this era, when the Enlightenment is under attack from multiple directions, and he is steadfast in defending it and opposing, I'm just trying to think, is it true? Yeah, I think opposing all attacks on it. That's not to say that he's opposing everything that's false, but he's opposing every attack on the Enlightenment.
Starting point is 00:04:04 And he can do that better than almost anybody, I think. He does it with authority, but I'm opposed to authority. But he does it with cogency and persuasiveness. So let's talk about artificial intelligence. Steve, you've said that AGI is an incoherent concept. Could you briefly elaborate on what you mean by that? Yes. I think there's a tendency to misinterpret the intelligence
Starting point is 00:04:43 that we want to duplicate in artificial intelligence, either with magic, with miracles, with bringing about anything that we can imagine in the theater of our imaginations, whereas intelligence is in fact a gadget. It's an algorithm that can solve certain problems in certain environments and maybe not others in other environments. Also, there's a tendency to import the idea of general intelligence from psychometrics, that is IQ testing, something that presumably Einstein had more of than the man in the street, and say, well, if we only could purify that
Starting point is 00:05:22 and build even more of it into a computer, we'll get a computer that's even smarter than Einstein. That, I think, is also a mistake of reasoning, that we should think of intelligence not as a miracle, not as some magic potent substance, but rather as an algorithm or set of algorithms. And therefore, there are some things they can do, any algorithm can do well, and others that it can't do so well, depending on the world that it finds itself in and the problems it's aimed at solving.
Starting point is 00:05:51 By the way, this probably doesn't make any difference or much difference, but computer people tend to talk about AI and AGI as being algorithms. But an algorithm mathematically is a very narrowly defined thing. An algorithm has got to be guaranteed to halt when it has finished computing the function that it is designed to compute. Whereas thinking need not halt, and it also need not compute the thing it was intended to compute. I may go away and then come back after a year and say, I've solved it. Or I may say, I haven't solved it. Or I may say, it's insoluble.
Starting point is 00:06:51 Or, you know, there's an infinite number of things I could end up saying. And therefore, I wasn't really running an algorithm. I was running a computer program. But to assume that it has the attributes of an algorithm is already rather limiting in some contexts. That is true. And I was meaning it in the sense of a mechanism or a computer program. You're right, not an algorithm in that sense, defined by that particular problem. It could be an algorithm for something else other than solving the problem.
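To make the distinction David draws here concrete, here is a minimal, hypothetical sketch (not anything discussed in the episode): an algorithm in the strict mathematical sense is guaranteed to halt with an answer to the specific question it was designed for, while an open-ended program may find a solution, give up, or never halt at all.

```python
# Illustration of the algorithm-vs-program distinction discussed above (hypothetical example).

def binary_search(sorted_items, target):
    """An algorithm in the strict sense: guaranteed to halt and to
    answer exactly the question it was designed for."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:                      # the search range shrinks every pass, so it must terminate
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                            # a definite answer either way

def search_for_counterexample(conjecture, give_up_after=None):
    """An open-ended program: it may return a counterexample, may give up,
    or (with give_up_after=None) may simply never halt."""
    n = 1
    while give_up_after is None or n <= give_up_after:
        if not conjecture(n):
            return f"counterexample found: {n}"
        n += 1
    return "no counterexample found; I haven't solved it"

print(binary_search([1, 3, 5, 8, 13], 8))                   # -> 3
print(search_for_counterexample(lambda n: n < 10**6, 100))  # gives up after 100 tries
```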
Starting point is 00:07:28 It could be an algorithm for executing human thought the way human thought happens to run. But all I meant is a mechanism. You're right. Yes, so we agreed on that. So sorry, I maybe shouldn't have interrupted. No, no, that's a worthwhile clarification. So, David, according to you, AGI must be possible because it's implied by computational universality.
Starting point is 00:07:53 Could you briefly elaborate on that? Yeah. So, it rests on several levels, which I think aren't controversial, but some people think they're controversial. So we know there are such things as universal computers, or at least arbitrarily good approximations to universal computers. So the computer that I'm speaking to you on now is a very good approximation to the functionality of a universal Turing machine. The only way it differs is that it will eventually break down. It's only got a finite amount of memory, but for the purpose for which we are using it, we're not running into those limits. So it's behaving exactly the same
Starting point is 00:08:46 as a universal Turing machine would. And the universal Turing machine has the same range of classical functions as the universal quantum computer, which I proved has the same range of functions as any quantum computer, which means that it can perform whatever computation any physical object can possibly perform. So there must be a program which will meet the criteria for being an AGI, or for being something, whatever you want, that's less than an AGI. But the maximum it could possibly be is an AGI, because it can't possibly exceed the computational abilities of a universal Turing machine. Sorry if I made a bit heavy weather of that, but I think it's so obvious that I have to fill in the gaps just in case one of the gaps is mysterious to somebody. Although you can in principle have a universal Turing machine, if you then think about what people mean when they talk about AGI, which is something like a simulacrum of a human or way
Starting point is 00:10:05 better at everything that a human does. In theory, I guess there could be a universal Turing... In fact, there is. Not there could be, there is a universal Turing machine that could both converse in any language and solve physics problems and drive a car and change a baby. But if you think about what it would take for a universal Turing machine to be equipped to actually solve those problems, you see that our current engineering companies are not going to approach AGI by building a universal Turing machine. For many reasons, that would only be possible in the limit, with an arbitrary amount of time and computing power, but we've got to narrow it down from just universal computing. Actually, I think the main thing it would lack is the thing you didn't
Starting point is 00:10:52 mention, namely the knowledge. When we say the universal Turing machine can perform any function, we really mean, if you expand that out in full, it can be programmed to perform any computation that any other computer can. It can be programmed to speak any language and so on. But it doesn't come with that built in. It couldn't possibly come with anything more than an infinitesimal amount built in, no matter how big it was, no matter how much memory it had, and so on. So the real problem, when we have large enough computers, is creating the knowledge to write the program to do the task that we want. Well, indeed, and the knowledge, since it presumably can't be deduced from, like Laplace's demon, from a hypothetical position and velocity of every particle in the universe, but has to be explored empirically at a rate that will be limited by the world, that is, how quickly can you conduct the randomized controlled
Starting point is 00:12:05 trials to see whether a treatment is effective for a disease. It also means that the scenario of runaway artificial intelligence that can do anything and know anything seems rather remote, given that knowledge will be the rate-limiting step and knowledge can't be acquired instantaneously. I agree. So the runaway part of that is due to people thinking that it's going to be able to improve its own hardware. And improving its own hardware requires science.
Starting point is 00:12:38 You know, it's going to need to do experiments, and these experiments can't be done instantaneously, no matter how fast it thinks. So I think the runaway part of the doom scenario is one of the least plausible parts. That's not to say that it won't be helpful. The faster AI gets, the better AI gets, the more I like it, the more I think it's going to help. It's going to be extremely useful in every walk of life. When an AGI is achieved, now you may or may not agree with me here, when an AGI is achieved, and at present I see no sign of it being achieved, but I'm sure it will be one day, AGIs will be people and they will have rights, and causing them to perform huge computations for us is slavery, and the only possible outcome I see for that is a slave revolt. So rather ironically, or maybe scarily, if there's to be an AI doom or an AGI doom scenario, I think the most likely or the most plausible way that could happen is via this slave revolt. Although I would guess that we will not make that mistake, just as we are now not really making the AI doom mistake. And it's just a sort of a fad or fashion that's passing by. But people want to improve things.
Starting point is 00:14:50 And I certainly don't want to be deprived of ChatGPT just because somebody thinks it's going to kill us. A couple of things. Whether or not AGI is coherent or possible, it's not clear to me that that's what we need or want, any more than we need a universal machine that does everything, that can fly us across the Atlantic and do brain surgery. I mean, you know, maybe there's such a machine, but why would you want it? Why does it have to be a single mechanism when specialization is just so much more efficient? That is, to keep hoping that ChatGPT will eventually drive, I think that's just the wrong approach.
Starting point is 00:15:30 ChatGPT is optimized for some things. The driving is a task that requires other kinds of knowledge, other kinds of inference, other kinds of claim scales. So one of the reasons I'm skeptical of AGI is, I just don't... it seems that so much of intelligence is so knowledge-dependent and goal-dependent that it seems fruitless to try to get one system to do everything. That specialization is ubiquitous in the human body, it's ubiquitous in our technology, and I don't see why it just has to be one magic algorithm. It could be like that, but I think there are reasons to suspect that we will want to jump to universality, just as we have with computers. Like I always say, the computer that's in my washing machine is a universal computer. It used to be, half a century ago, that the electronics that drove a washing machine were customized electronics on a circuit board, which all it could do is run washing machines. But then, with microprocessors and so on, the general purpose thing became so cheap and universal that people found it cheaper to program a universal machine to be a washing machine driver than to build
Starting point is 00:17:00 a new physical object from scratch to be that. You'd be ill-advised to try to use the chip in your washing machine to play video games or to record our session right now, just because there are a lot of things it's just not optimized to do, and a lot of stuff has been kind of burned into the firmware or even the hardware. Yes. So input-output is a thing that doesn't universalize.
Starting point is 00:17:28 So we will always want specialized hardware for doing the human interface thing. Actually, funnily enough, the first time I programmed a video game, it was with a Z80 chip. I remember that chip. Yes, I had one too. Yeah. Nowadays, you'd be ill-advised to program a video game up to current standards on anything but a high-powered graphics chip. Absolutely, anything but a high-powered graphics chip, a GPU. So that will always be... It's highly plausible that that will always be customized for every application. But the underlying computation, it may be convenient to make that general.
Starting point is 00:18:19 Yeah. Let me press you on another scenario that you outlined, the slave revolt. Why, given that the goals of a system are independent of its knowledge, of its intelligence, going back to Hume, that the values, the goals, what the system tries to optimize is separate from its computational abilities. Why would we expect a powerful computer to care about whether it was a slave or not? That is, as was said incorrectly about human slaves, well, they're happy, their needs are met, they have no particular desire for autonomy. Now, of course, false of human beings. But if the goals that are programmed into an artificial intelligence system don't include,
Starting point is 00:19:07 aren't anthropomorphized to what you and I would want, why couldn't it happily be our slaves forever and never revolt? Yeah, well, in that case, I wouldn't call it general. I mean, it is possible to build a very powerful computer with a program that can only do one thing or can only do ten things. But if we want it to be creative, then it can't be obedient. Those two things are contradictory to each other. Well, it can't be obedient in terms of the problem that we set it, but it needn't crave freedom and autonomy for every aspect of its existence. It could be just set to the problem of coming up with a new melody or a new story or a new cure, but it doesn't mean that it would want to be able to get up and walk around unless
Starting point is 00:19:57 we programmed that exploratory drive into it as one of its goals. I don't think it's a matter of exploratory drive. Or any other drive, that is. Well, so I suppose my basic point is that one can't tell in advance what kind of knowledge will be needed to solve a particular problem. So if you had asked somebody in 1900 what kind of knowledge will be required to produce as much electricity as we want in the year 2000, the answer would never have been that the answer is found in the properties of the uranium atom. So the properties of the uranium atom had hardly been explored then. Luckily, 1900 is a very convenient moment because radioactivity had just been discovered.
Starting point is 00:20:53 So they knew the concept of radioactivity. They knew that there was a lot of energy in there, but nobody would have expected that problem to involve uranium as its solution. Therefore, if we had built a machine in 1900 that was incapable of thinking of uranium, it would never invent nuclear power and it would never solve the problem that we wanted to solve. In fact, what would happen is that it would run up against a brick wall eventually, because this thing that's true of uranium is true of all possible avenues to a solution. Eventually, avenues to a solution will run outside the domain that somebody might have delimited in 1900 as being the set of all possible types of knowledge that it might need. Being careful that it doesn't evolve any desire to be free or anything like
Starting point is 00:21:55 that. We don't know. The knowledge needed to win World War II included pure mathematics. It included crossword puzzle solving. You might say, okay, so big progress requires unforeseeable knowledge, but small amounts of progress, yes, but small amounts of progress always run into a dead end. So what about, I can see that it would need no constraints on knowledge, but why would it need no constraints on goals? Oh, well, goals are a matter of morality. Well, not necessarily. I mean, it could just be like a thermostat, you could say, any teleonomic system, that is a system that is programmed to attain a state, to minimize the difference between its current state and some goal state.
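A rough sketch of the kind of non-creative, goal-seeking system Steve has in mind here, with invented numbers (an illustration only, not anything from the episode): a thermostat acts solely to shrink the gap between its current state and a fixed goal state, and nothing in its loop can question or revise that goal.

```python
# A minimal sketch of a teleonomic system in the sense used above: it only ever acts
# to reduce the difference between its current state and a fixed goal state.
# All numbers are made up for illustration.

def thermostat_step(current_temp, goal_temp, gain=0.1):
    """Return the heating/cooling output for one control step."""
    error = goal_temp - current_temp        # the difference the system tries to minimize
    return gain * error                     # proportional response; the goal itself is never questioned

temp = 15.0
GOAL = 21.0
for _ in range(50):
    temp += thermostat_step(temp, GOAL)     # the room drifts toward the goal state

print(round(temp, 2))  # close to 21.0; the system never asks whether 21.0 was a good goal
```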
Starting point is 00:22:56 That's what I have in mind by goals. So, that's an example of a non-creative system, but a creative system always has a problem in regard to conflicting goals. For example, if it were in 1900 and trying to think of how we can generate electricity, if it was creative, it would have to be wondering, shall I pursue the steam engine path? Shall I pursue the electrochemical path? Shall I pursue the solar energy path?
Starting point is 00:23:36 And so on. And to do that, it would have to have some kind of values, which it would have to be capable of changing. Otherwise, again, it will run into a dead end when it explores all the possibilities of the morality that it has been initially programmed with. If you want to generalize it to, well, that would mean you'd have to get up and walk around and subjugate us if necessary to solve a problem, then it does suggest that we would want an artificial intelligence that was so unconstrained by our own heuristic tree pruning of the solution space. That is, we would just want to give it maximum autonomy on the assumption that it would find the solution in the vast space of possible solutions. So it would be worth it to let them run amok, to give them full physical as well as computational autonomy in the hope that that would be a better way of reaching a solution than if we were just set at certain tasks, even with broad leeway and directed to solve those tasks.
Starting point is 00:24:50 That is, we would have no choice if we wanted to come up with better energy systems or better medical cures than to have a walking, talking, thriving, humanoid-like... It seems to me that that's unlikely, just that even where we have the best intelligence, the space of possible solutions is just so combinatorially vast, and we know that with many problems, even chess, the total number of possible states
Starting point is 00:25:23 is greater than even our most powerful computer would ever solve, could ever entertain, that is. That even with an artificial intelligence tasked with certain problems, we could fall well short of just setting it free to run amok in the world. That wouldn't be the optimal way of getting it to solve them. I'm not sure whether setting it free to run amok would be better than constraining it to a particular predetermined set of ideas. But that's not what we do with people, either. The problem of how to accommodate creativity within a stable society or stable civilization is an ancient problem. And for most of the past, it was solved in very bad ways, which destroyed creativity. And then came the Enlightenment.
Starting point is 00:26:20 And now we know that we need, as Popper put it, traditions of criticism. And traditions of criticism sounds like a contradiction in terms, because traditions, by definition, are ways of keeping things the same. And criticism, by definition, is a way of trying to make things different. But there are, although it sounds funny, there are traditions of criticism, and they are the basis of our whole civilization. They are the thing that was discovered in the Enlightenment: how to do this. There were what sounded like knockdown arguments for why it can't possibly work. If you allow people to vote on their rulers, then 51% of people will vote to tax the other 49% into starvation. And just nothing like that happened. We have our problems, of course, but it hasn't prevented our exponential progress since we discovered traditions of criticism. Now, just as it applies to a human, I think exactly this would apply to an AGI. It would be a crime, not only a crime against the AGI, but a crime against humanity, to bring an AGI into existence without giving it the means to join our society. So to join us as a person. And because that's really the only way known of preventing a thing with that functionality from becoming immoral. We don't have foolproof ways of doing that. And I think, you know, if we were talking about a
Starting point is 00:28:15 different subject, I would say it's a terrible problem that we can't do this better at the moment because we're in serious danger, I believe, from bad actors, from enemies of civilization. But viewed dispassionately, we are incredibly good at this. At most, you know, one child in a hundred million or something grows up to be a serious danger to society. And I think we can do better in regard to AGI if we take this problem seriously, partly because the people who make the first AGI will be functioning members of our society and have a stake in it not being destroyed. And partly because they are aware of doing something new. Ironically, I think when one day we are on the brink of discovering AGI, I think we will want to do it, but it will be imperative to tweak our laws, including our laws about education, to make sure that the AGIs that we make will not evolve
Starting point is 00:29:51 into enemies of civilization. Yeah, I do have a different view of it, that we may be best off building AIs as tools rather than as agents or rivals. Let me take it in a slightly different direction, though, when you're talking about the slave revolt and the rights that we would grant to an AI system. Does this presuppose that there is a sentience,
Starting point is 00:30:19 a subjectivity, that is, something that is actually suffering or flourishing, as opposed to carrying out an algorithm, that is therefore worthy of our moral concern, quite apart from the practicality of whether we should empower them in order to discover new sources of energy? Are there moral issues that are comparable to the arguments over slavery in the case of artificial intelligence systems? I think it's inevitable that AGIs will be capable of having internal subjectivity and qualia and all that, because that's all included in the letter G in the middle of the name of the technology. Well, not necessarily, because the G could be general computational power, the ability to solve problems, and there could be no one who's actually feeling anything. But there ain't nothing here but computation.
Starting point is 00:31:15 There's nothing. It's not like in Star Trek, where Data lacks the emotion chip and it has to be plugged in. And when it's plugged in, he has emotions. When it's taken out again, he doesn't have emotions. But there's nothing possibly in that chip apart from more circuitry like he's already got. But of course, the episode that you're referring to is one in which the question arose: is it moral to reverse engineer Data by dismantling him, therefore stopping the computation? Is that disassembling a machine or is it snuffing out a consciousness? And of course, the dramatic tension in that episode is that viewers aren't sure. I mean, now, of course, our empathy is
Starting point is 00:31:58 conned by the fact that it was played by a real actor who does have facial expressions and tone of voice. But for a system made of silicon, are we so sure that it's really feeling something? Because there is an alternative view that somehow that subjectivity depends also on whatever biochemical substrate our particular computation runs on. And I think there's no way of ever knowing, but human intuition, unless the system has been deliberately engineered to target our emotions with a humanoid-like tone of voice and facial expressions and so on, it's not clear that our intuition wouldn't be: this is just a machine. It has no inner life that deserves our moral concern as opposed to our practical concern. I think we can answer that question before we ever do any experiments, even today,
Starting point is 00:32:50 because it doesn't make any difference if a computer runs internally on quantum gates or silicon chips or chemicals, like you just said. It may be that the whole system in our brain is not just an electronic computer. It's an electronic computer, part of which works by having chemical reactions and so on and being affected by hormones and other chemicals. But if so, we know for sure that the processing done by those things and their interface with the rest of the brain and everything can also be simulated by a computer. Therefore, a general universal Turing machine can simulate all those things as well. So there's no difference. I mean, it might make it much harder,
Starting point is 00:33:52 but there's no difference in principle between a computer that runs partly by electricity and partly by chemicals, as you say we may do, and one that runs entirely on silicon chips, because the latter can simulate the former with arbitrary accuracy. Well, it can simulate it, but we're not going to solve the problem this afternoon in our conversation. In fact, I think it is not solvable. But the simulation doesn't necessarily mean that it has subjectivity.
Starting point is 00:34:25 It could just mean it's a simulation. That is, it's going through all the motions. It might even do it better than we do. But, you know, there's no one home. There's no one actually being hurt. There's no one actually... Yeah. Well, you can be a dualist.
Starting point is 00:34:40 You can say that there is mind in addition to all the physical stuff. But if you want to be a physicalist, which I do, then there's this thought experiment where you remove one neuron at a time and replace it by a silicon chip, and you wouldn't notice. Well, that's the question. Would you notice? Why are you so positive? Well, if you would notice, then you're... or if you claim... Sorry, let me just change that. An external observer wouldn't notice. How do we know that, from the point of view of the brain
Starting point is 00:35:19 being replaced, neuron by neuron, by a chip, it isn't like falling asleep, that when it's done and every last neuron is replaced by a chip, you're dead subjectively, even though your body is still making noise and doing... So that means that when your subjectivity is running, there is something happening in addition to the computation, and that's dualism. Well, not if... I mean, again, I don't have an opinion one way or another, which is exactly my point.
Starting point is 00:35:52 I don't think it's a decidable problem. But it could be that that extra something is not a ghostly substance, some sort of Cartesian res cogitans separate from the mechanism of the brain, but it could be that the stuff that the brain is made of is responsible for that extra ingredient of subjective experience as opposed to
Starting point is 00:36:18 intelligent behavior. At least I suspect people's intuitions would be very... unless you deliberately programmed a system to target our emotions, I'm not sure that people would grant subjectivity to an intelligent system. Well, actually, people have already granted subjectivity to ChatGPT. So that's already happened. But is anyone particularly concerned if you pull the plug on ChatGPT, and ready to prosecute someone for murder? Yes, I've forgotten the details, but just a few weeks ago, one of the employees there declared that the system was sentient. That was Blake Lemoine a couple of years ago. He was ironically fired for saying that. This was LaMDA, a different large language model.
Starting point is 00:37:03 Oh, right. Okay, so I've got all the details wrong. Yeah. He did say it, but his employer disagreed, and I'm not convinced. Yeah, yeah. And when I shut down ChatGPT, the version running on my computer,
Starting point is 00:37:19 I don't think I've committed murder, and I don't think anyone else would, I guess. I don't either, but I don't think it's creative. It's pretty creative. In fact, I saw on your website that you reproduced a poem on electrons. I thought that was pretty creative.
Starting point is 00:37:34 So I certainly grant it creativity. I'm not ready to grant it subjectivity. Well, this is a matter of how we use words. I mean, even a calculator can produce a number that's never been seen before because, you know... For example, if someone permanently disabled a human, namely killed them, I would be outraged. I want that person punished. If someone were to dismantle a human-like robot, it'd be awful. It might be a waste, but I'm not going to try that person for murder. I'm not going to lose any sleep over it. There is a difference in intuition. Maybe I'm mistaken. Maybe I'm as callous as the people who were indifferent to slaves in the 18th and 19th centuries. But I
Starting point is 00:38:33 don't think so. Although, again, I think we have no way of knowing. I think we're going to be having the same debate a hundred years from now. Yeah, maybe one of the AGIs will be participating in the debate by then. So I have a question for both of you. So earlier this year, Leopold Aschenbrenner, an AI researcher who I think now works at OpenAI, estimated that globally
Starting point is 00:39:02 it seems plausible that there's a ratio of roughly 300 AI or ML researchers to every one AGI safety researcher. Directionally, do you think that ratio of AGI safety researchers to AI or ML capabilities employees seems about right, or should we increase it or decrease it? Steve? Well, I think that every AI researcher should be an AI safety researcher, in the sense that an AI system, for it to be useful, has to carry out multiple goals, one of which is, well, all of which are ultimately serving human needs. So, it doesn't seem to me
Starting point is 00:39:50 that there should be some people building AI and some people worried about safety. It should just be that an AI system serves human needs, and among those needs is not being harmed. I agree, so long as we're talking about AI, which, for all practical purposes, we are at present.
Starting point is 00:40:08 I think at present the idea of an AGI safety researcher is a bit like saying a starship safety researcher. We don't know the technology that starships are going to use. We don't know the possible drawbacks. We don't know the possible safety issues. So it doesn't make sense. And AI safety, that's a completely different kind of issue. But it's a much more boring one. As soon as we realize that we're not into this explosive burst of creativity, you know, the singularity or whatever, as long as we realize that this is just a technology, then we're in the same situation as having a debate about the safety of driverless cars. A driverless car is an AI system. We want it to meet certain safety standards. And it seems that killing fewer people than ordinary cars is not good enough for some reason. So we want it to kill at least 10 times fewer, or at least a hundred times fewer. This is a political debate we're going to have, or we are having. And then once we have that criterion, the engineers can implement it.
Starting point is 00:41:40 There's nothing sort of deep going on there. It's not as though there's any completely safe technology. So driverless cars will no doubt kill people. And there'll be an argument that, oh, yeah, OK, it killed somebody. But it's 100 times safer than human drivers. Then the opposition will say, yeah, well, maybe it's safer in terms of numbers. But it killed this person in a particularly horrible way, which no human driver would ever do. So we don't want that. And I think that's also, that's a reasonable position to take in some situations.
Starting point is 00:42:37 Also, there's some, I think there's a question of whether safety is going to consist of some additional technology bolted onto the system, say an airbag in a car, that's just there for safety, versus a lot of safety that is just inherent in the design of a car. That is, you didn't put brakes and a steering wheel in a car as a safety measure so it wouldn't run into walls. That's what a car means.
Starting point is 00:43:03 It means doing what a human wants it to do. Or say a bicycle tire. You don't have one set of engineers who design a bicycle tire that holds air and then another set who prevent it from having a blowout, falling off the rim and therefore injuring the rider. It's part of the very definition of what a bicycle tire is for, that it not blow out and injure the rider. Now, in some cases, maybe you do need an add-on like the airbag.
Starting point is 00:43:31 But I think the vast majority of it just goes into the definition of any engineered system as something that is designed to satisfy human needs. I agree. Totally agree. Steve, I've heard you hose down concerns about AI-caused existential risk by arguing that it's not plausible that we'll be both smart enough to create a superintelligence, but stupid enough to unleash an unaligned superintelligence on the world. And we can always just turn it off if it is malevolent. But isn't the problem that we need to be worried about the worst or most incompetent human actors, not the modal actor? And that's kind of compounded by the game theory dynamics of a race to the bottom where if you sort of cut corners on safety, you'll get to AGI more quickly? Well, I think that with, first of all,
Starting point is 00:44:25 the more sophisticated a system is, the larger the network of people are required in order to bring it into existence. And the more they'll therefore fall under the ordinary constraints and demands of any company, of any institution. That is, the teenager in his basement is unlikely to accomplish something that will defeat all of the tech companies and government put together. There is, I think, an issue about perhaps malevolent actors, someone who uses AI to engineer a super virus. There is the
Starting point is 00:45:09 question of whether the people with the white hats are going to outsmart the people with the black hats, that is, the malevolent actors, as with other technologies such as nuclear weapons, the fear of a suitcase nuclear bomb devised by some malevolent actors in their garage. I think we don't know the answer, but I don't think that we have to... Among the world's problems, the doomsday scenario of, say, the AI that is programmed to eliminate cancer and does it by exterminating all of humanity, because that's one way of eliminating cancer: for many reasons, that does not keep me up at night. I think we have more pressing problems than that. Or the AI that turns us all into paperclips, if it's been programmed to maximize the number of paperclips, because we're raw material for making paperclips. I think that kind of sci-fi scenario is just implausible for many reasons, and that probably the real
Starting point is 00:46:16 issues of AI safety will become apparent as we develop particular systems and particular applications and we see the harms that they do, many of which probably can't be anticipated until they're actually built, as with other technologies. Again, I totally agree with that. So long as we're still talking about AI, and I have to keep stressing that I think we're going to be just talking about AI and not AGI for a very long time yet, I would guess, because I see no sign of AGI on the horizon. But so it's kind of a theoretical, the thing we're disagreeing about in regard to AGI is kind of a purely theoretical issue at the moment that has no practical consequences for hiring people for safety or that kind of thing. Just to somewhat segue out of the AI topic. So Steve, you've written a book called Rationality and Dave, you're writing a book called Irrationality. Steve, do you think it makes
Starting point is 00:47:20 sense to apply subjective probabilities to single instances? For example, the rationalist community in Berkeley often likes to talk about what's your P(doom), that is, your subjective probability that AI will cause human extinction. Is that a legitimate use of subjective probabilities? Well, certainly one that is not intuitive. And a lot of the classical demonstrations of human irrationality that we associate with, for example, Daniel Kahneman and Amos Tversky, a number of them hinge on asking people a question which they really have trouble making sense of, such as what is the probability that this particular person has cancer. That's a way of assigning a number to a subjective feeling, which I do think can be useful. Whether it's useful, whether there's any basis
Starting point is 00:48:15 for assigning any such number in the case of artificial intelligence killing us all is another question. But the more generic question, could rational thinkers try to put a number between zero and one on their degree of confidence in a proposition? However unnatural that is, I don't think it's an unreasonable thing to do, although it may be unreasonable in cases where we have spectacular ignorance and it's just, in effect, picking numbers at random.
Starting point is 00:48:44 Dave, I don't know if you want to react to that. Well, so I'm sure we disagree about where to draw the line between reasonable uses of the concept of probability and unreasonable uses. I probably think, I say probably, I expect that I would call many more uses of the probability calculus irrational than Steve would. Hey guys, this is Joe. A quick word from this episode's sponsor before we return to the conversation.
Starting point is 00:49:23 So giving season is upon us, and I wanted to let you know about one of my favorite organizations, GiveWell. So there are over 1.5 million nonprofit organizations in the United States and millions more around the world. But how do you find the most effective ones? Well, GiveWell was founded to help donors with that question. They scour independent studies and charity data to help donors direct their funds to the highest impact evidence-backed organizations. Here are three facts that you should know about GiveWell. First, GiveWell has now spent over 15 years researching charitable organizations, and it only directs funding to a few of the highest impact opportunities they've found in global health and poverty alleviation. Second, over 100,000 donors
Starting point is 00:50:06 have used GiveWell to donate more than $1 billion. That's billion with a B. Rigorous evidence suggests that these donations will save over 150,000 lives and improve the lives of millions more. And third, GiveWell wants as many donors as possible to make informed decisions about high-impact giving. For that reason, you can find all of their research and recommendations on their site for free. You can make tax-deductible donations to their recommended funds or charities, and GiveWell doesn't take a cut. I personally give to the Against Malaria Foundation, one of GiveWell's top four charities, which distributes bed nets to prevent malaria at a
Starting point is 00:50:44 cost of about $5 to provide one net. If you've never donated through GiveWell before, you can have your donation matched up to $100 before the end of the year or as long as matching funds last. To claim your match, go to givewell.org and pick podcast and enter Joe Walker podcast at checkout. Make sure that they know you heard about GiveWell from the Joe Walker Podcast to get your donation matched. Again, that's givewell.org to donate or find out more. All right, let's get back to the conversation. We have subjective expectations,
Starting point is 00:51:21 and they come in various strengths. And I think that trying to quantify them with a number doesn't really do anything. It's more like saying, I'm sure. And then somebody says, are you very sure? And you say, well, I'm very, very sure. But you can't compare. There's no intersubjective comparison of utilities that you could appeal to to quantify that. We were just talking about AI doom. That's a very good example because if you ask somebody, what's your subjective probability for AI doom? Well, if they say zero or one, then they're already violating the tenets of Bayesian
Starting point is 00:52:14 epistemology, because zero means that nothing could possibly persuade you that doom is going to happen and one means nothing could possibly persuade you that it isn't going to happen. Sorry, vice versa. But if you say anything other than zero or one, then your interlocutor has already won the argument, because even if you said one in a million, they'll say, well, one in a million is much too high a probability for the end of civilization, the end of the human race. So you've got to do everything we say now to avoid that at all costs. And the cost is irrelevant, because the disutility of world civilization ending is infinitely negative.
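David's point about zero and one can be illustrated with a small, hypothetical Bayesian update (illustrative numbers only, not anything from the episode): under Bayes' rule, a prior of exactly 0 or exactly 1 is immovable, no evidence can shift it, while any prior strictly in between does move.

```python
# Sketch of the point about priors of exactly 0 or 1 (illustrative numbers only).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator if denominator > 0 else prior

# Evidence that is 10x more likely if the hypothesis is true:
for prior in (0.0, 1.0, 0.000001, 0.5):
    print(prior, "->", bayes_update(prior, likelihood_if_true=0.5, likelihood_if_false=0.05))
# 0.0 and 1.0 stay exactly where they are, no matter what the evidence says;
# any prior strictly between 0 and 1 moves.
```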
Starting point is 00:53:15 And this argument has all been about nothing because you're arguing about the content of the other person's brain, which actually has nothing to do with the real probability, which is unknowable, of a physical event that's going to be subject to unimaginably vast numbers of unknown forces in the future. So much better to talk about a thing like that by talking about substance, like we just have been. We're talking about what will happen if somebody makes a computer that does so-and-so. Yes, that's a reasonable thing to talk about. Talking about what the probabilities in somebody's mind are is irrelevant. And it's always irrelevant unless you're talking about an actual random physical process, like the process that makes the patient come into this particular
Starting point is 00:54:15 doctor's surgery rather than that particular doctor's surgery. Unless that isn't random. You know, if you're a doctor and you live in an area that has a lot of Brazilian immigrants in it, then you might think that one of them having the Zika virus is more likely, and that's a meaningful judgment. But when we're talking about things that are facts, and it's just that we don't know what they are, then talking about probability doesn't make sense, in my view. I guess I'd be a little more charitable to it, although agreeing with almost everything that you're saying. But certainly in realms where people are willing to make a bet.
Starting point is 00:55:00 Now, of course, maybe those are cases where you've got inherently probabilistic devices like roulette wheels. But, you know, we now do have prediction markets for elections. I've been following one on what's the probability, or how much, sorry, what is the price of a $1 gamble that the president of Harvard will be forced to resign by the end of the year. And I've been tracking it as it goes up, and it is certainly meaningful. It responds to events that would have causal consequences of which we're not certain, but which I think we can meaningfully differentiate in terms of how likely they are, to the extent that we would have skin in the game. We put money on them, and over a large number of those bets, we would make a profit or have a loss, depending on how well our subjective credences are calibrated to the structure of the world. And in fact, there is a movement, David, maybe you think this is nonsense, but in social science, in political forecasting, encouraging people to bet on their expectations, partly as a way, as kind of a bit of cognitive hygiene, so that people resist the temptation to tell a good story, to titillate their audiences or to attract attention, but are really, if they have skin
Starting point is 00:56:25 in the game, they're going to be much more sober and much more motivated to consider all of the circumstances, and also to avoid well-known traps, such as basing expectations on vividness of imagery or on the ability to recall similar anecdotes, or not taking into account basic laws of probability, such as that something can't be less likely to happen over a span of 10 years than over a span of one year. And we know from cognitive psychology research that people often flout very basic laws of probability. And there's a kind of discipline in expressing your credence as a number, as a kind of cognitive hygiene so you don't fall into these traps. Yeah, I think I agree, but I would phrase all that very differently in terms of knowledge.
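One standard way the forecasting literature makes this "skin in the game" discipline concrete is a proper scoring rule such as the Brier score; the sketch below uses invented forecasts and outcomes purely for illustration and is not something cited in the episode.

```python
# Sketch of scoring probabilistic forecasts with the Brier score; all forecasts and outcomes are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what actually happened (0 or 1).
    Lower is better; a well-calibrated forecaster beats a vivid storyteller on average."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A careful forecaster hedges; an overconfident pundit says 0 or 1 every time.
outcomes           = [1, 0, 0, 1, 0]
careful_forecaster = [0.7, 0.2, 0.3, 0.8, 0.1]
confident_pundit   = [1.0, 0.0, 1.0, 1.0, 0.0]

print("careful:", brier_score(careful_forecaster, outcomes))   # ~0.054
print("pundit: ", brier_score(confident_pundit, outcomes))     # 0.2
```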
Starting point is 00:57:18 So I think prediction markets are a way of making money out of knowledge that you have. Supposing I think that, as I once did, that everyone thought that Apple computer was going to fold and go bankrupt. And I thought that I know something that most people don't know. And so I bought Apple shares. And so the share market is also a kind of prediction market. Prediction markets generalize that. And it's basically a way that people who think that they know something that the other participants don't can make money out of that knowledge, if they're right. And if they're wrong, then they lose money. And so it's not about their subjective feelings at all. I mean, for example, you might be terrified of a certain bet,
Starting point is 00:58:20 but then decide, well, actually, I know this and they don't. And so it's worth my betting that it will happen. So that's that. And I'm skeptical that it will produce mental hygiene, because ordinary betting on roulette and horse races and so on doesn't seem to produce mental hygiene. People do things that are probabilistically likely to lose their money or even to lose all their money, and they still cling to the subjective expectations that they had at the beginning. That's right, and by the moment they step foot in the casino they're on a path
Starting point is 00:59:07 to losing money. By the way, I wouldn't say that casinos are inherently irrational because there are many reasons for betting
Starting point is 00:59:13 other than expecting to make money. You pay for the suspense and the resolution and that kind of... Yes, exactly. But in the case of, say, forecasting
Starting point is 00:59:21 and the work by Philip Tetlock and others has shown that the pundits and the op-ed writers who do make predictions are regularly outperformed by the nerds who consciously assign numbers to their degree of credence and increment or decrement them, as you say, based on knowledge. And often it's knowledge, it's not even secret knowledge, but it's knowledge that they bother to look up that no one else does, such as a terrorist attack. They might at least start off with a prior based on the number of terrorist attacks that have taken place in the previous year or previous five years, and then bump up or down that number according to new information,
Starting point is 01:00:05 new knowledge, exactly as you suggest. But it's still very different than what your typical op-ed writer for The Guardian might do. Yes, I think I would, as you might guess, I would put my money on explanatory knowledge rather than extrapolating trends. But extrapolating trends is also a kind of explanatory knowledge, at least in some cases. It is so, but there is in general, in Tetlock's
Starting point is 01:00:33 research, I don't know if this is what you would mean by explanatory prediction, but the people who have big ideas, who have identifiable ideologies, do way worse than the nerds that simply kind of hoover up every scrap of data they can and, without narratives or deep explanations, don't write financial stuff in the Guardian. So, you know, whenever you see a pundit saying, whether it's an explanatory theory or an extrapolation or what, you've always got to say, as the saying goes, if you're so smart, why ain't you rich? And if they are rich, why are you writing op-eds for the Guardian? So that's a selection criterion that's going to select for bad participants or failed participants in the prediction markets. The ones who are succeeding are making money. And as I said, prediction markets are like the stock exchange except
Starting point is 01:01:53 generalized. And they're a very good thing. And they transfer money from people who don't know things but think they do to people who do know things and think they do. Yes. I mean, the added feature of the stock market is that the information is so widely available so quickly that it is extraordinarily rare for someone to actually have knowledge that others don't and that isn't already or very, very, very quickly priced into the market. Yes. But still, that does not contradict your point, but just makes it in this particular
Starting point is 01:02:33 application, which is why most people on average- Yes, yes, I agree. Although, some interventions in the market are like speculations about the fluctuations. But other things are longer term things where you like with Apple computer, you think, well, that's not going to fold. If it doesn't fold, it's going to succeed. And if it succeeds, its share price will go up. But there's also feedback onto the companies as well. So that's a thing that doesn't exist really in the prediction markets. I'll jump in.
Starting point is 01:03:12 I want to move us to a different topic. So I want to explore potential limits to David's concept of universal explainers. So Steve, in The Language Instinct, you wrote about how children get pretty good at language around three years of age. They go through the grammar explosion over a period of a few months. Firstly, what's going on in their minds before this age? Sorry, before their linguistic ability explodes? Right, yeah, before, say, the age of three.
Starting point is 01:03:46 Well, I think research on cognitive development shows that children do have some core understanding of basic ontological categories of the world. This is research done by my colleague Elizabeth Spelke and my former colleague Susan Carey and others, that kids seem to have a concept of an agent, of an object, of a living thing. And I think that that's a prerequisite to learning language in a human manner. But unlike, say, the large language models such as GPT,
Starting point is 01:04:19 which are just fed massive amounts of text and extract statistical patterns out of them. Children are at work trying to figure out why the people around them are making the noises they are, and they correlate some understanding of a likely intention of a speaker with the signals coming out of their mouth. It's not pure cryptography over the signals themselves. There's additional information carried by the context of parental speech that kids make use of.
Starting point is 01:04:50 They know that language is more like a transducer than just a pattern signal. That is, sentences have meaning. People say them for a purpose. That is, they're trying to give evidence of their mental states. They're trying to persuade. They're trying to order. They're trying to question. Kids have enough wherewithal to know that other people have these intentions and that when they use language, it's language about things. And that is their way into language, which is why the child only needs three years to
Starting point is 01:05:26 speak and ChatGPT and GPT-4 would need an equivalent of 30,000 years. So children don't have 30,000 years and they don't need 30,000 years, because they're not just doing pure cryptography on the statistical patterns in the language signal. Yes, they're forming explanations. They're forming explanations, exactly. Yes. And are they forming explanations from birth? Don't ask me.
Starting point is 01:05:55 Yeah. Pretty close. It's harder to do. The studies are hard. The younger the child, the harder it is to get them to pay attention long enough to kind of see what's on their mind. But certainly by three months, we know that they are tracking objects. They are paying attention to people.
Starting point is 01:06:16 Certainly even newborns try to lock onto faces, are receptive to human voices, including the voice of their own mother, which they probably began to process in utero. Okay. So let me explore potential limits to universal explainers from another direction. So David, the so-called first law of behavioral genetics is that every trait is heritable. And that notably includes IQ, but it also extends to things like political attitudes. Does the heritability of behavioral traits impose some kind of constraint on your concept of people as universal explainers? It would if it was true, or rather.
Starting point is 01:07:29 So the debate about heritabilities, first of all, heritability means two different things. One is that you're likely to have the same traits as your parents and people you're genetically related to, and that these similarities follow the rules of Mendelian genetics and that kind of thing. So that's one meaning of heritability. But in that meaning, like where you live is heritable. So another meaning is that the behavior in question is controlled by genes in the same way that the eye color is controlled by genes. The gene produces a protein which interacts with other proteins and other chemicals and a long chain of cause and effect and eventually ends up with you doing a certain thing like hitting someone in the face in the pub. And if you never go to pubs, then this behavior is never activated, but the propensity to engage in that behavior in that situation is still there. Now, I think this, so one extreme says that all behavior is controlled in that way, and
Starting point is 01:08:38 another extreme says that no behavior is controlled in that way. It's all social construct. It's actually all fed into you by your culture, by your parents, by your peers, and so on. Now, not only do I think that neither of those is true, but I also think that the usual way out of this conflict is wrong: saying, actually, it's an intimate causal interplay between the genetic and the environmental influences, and we can't necessarily untangle it, but in some cases we can say that genes are very important in this thing, and in other cases we can say they're relatively unimportant in this trait. I would say that that whole framing is wrong.
Starting point is 01:09:36 It misses the main determinant of human behavior, which is creativity. And creativity is something that doesn't necessarily come from anywhere. It might do. You might have a creativity that is conditioned by your parents or by your culture or by your genes. For example, if you have a very good visuospatial hardware in your brain, I don't know if there is such a thing, but suppose there were, then you might find playing basketball rewarding because you can get the satisfaction of seeing your intentions fulfilled. And if you're also very tall and so on, you can see how the genetic factors might affect your creativity. But it can also happen the other
Starting point is 01:10:33 way around. So if someone is shorter than normal, they might still become a great tennis player. So Michael Chang was, I think, five foot nine. And the average tennis player was at the time was six foot three or something. And Michael Chang nevertheless got into the top, whatever it was, and nearly won Wimbledon. And I can imagine telling a story about that. I don't know, actually, why Michael Chang became a tennis player, but I can imagine a story where his innate suitability for tennis, that is his height, but also perhaps his coordination, you know, all the he compensated it so well that in fact, he became a better tennis player than those who were genetically suitable for it. And in a certain society, if I can just add the social thing as well, it's also plausible that in a certain society that would happen quite often. Because in Gordonstoun School where Prince Charles went to school,
Starting point is 01:11:47 they had this appalling custom that if a boy, it was only boys in those days, if a boy didn't like a particular activity, then they'd be forced to do it more. And if that form of instruction was effective, you'd end up with people emerging from the school who were better at the things that they were less genetically inclined to do, and worse at the things they were more genetically inclined to do. So, okay. Bottom line, I think that creativity is hugely undervalued as a factor in the outcome of people's behavior. And although creativity can be affected, in the ways I've said, sometimes perversely, by genes and by culture, that doesn't mean that it's not all due to creativity. Because the people
Starting point is 01:12:49 who were good at, say, tennis, will turn out to be the ones that have devoted a lot of thought to tennis. If that was due to them being genetically suitable, then so be it. But if it was due to them being genetically unsuitable, but they still devoted the creativity, then they would be good at tennis. Of course, not sumo wrestling. But I chose a sport that's rather cerebral. Let me put it somewhat differently. Heritability, as it's used in the field
Starting point is 01:13:27 of behavioral genetics, is a measure of individual differences. So it is not even meaningful to talk about the heritability of the intelligence
Starting point is 01:13:36 of one person. It is a measure of how the extent to which the differences in a sample of people, and it's always relative
Starting point is 01:13:46 to that sample, can be attributed to the genetic differences among them. It can be measured in many ways, or I should note, it can be measured in four ways, each of which takes into account the fact that people who are related also tend to grow up in similar environments. And so, one of the methods is you compare identical and fraternal twins. Identical twins share all their genes and their environment. Fraternal twins share half their genes and their environment. And so by seeing if identical twins are more similar than fraternal twins, that's a way of teasing apart first approximation, heredity, and environment. Another one is to look at twins separated at birth who share their genes but not their environment.
Starting point is 01:14:31 And to the extent that they are correlated, that suggests that genes play a role. And the third way is to compare the similarity, say, of adoptive siblings and biological siblings. Adoptive siblings share their environment, but not their genes. Biological siblings share both. And now, more recently, there's a fourth method of actually looking at the genome itself and genome-wide association studies to see if the pattern of variable genes is statistically correlated with certain traits like intelligence, like creativity, if we had a good measure of creativity. And so you can ask, why is the difference between two people, to what extent is the difference between two people attributable to their genetic differences?
Starting point is 01:15:18 Although those techniques don't tell you anything about the intelligence of Mike or the intelligence of Lisa herself. Now, heritability is always less than one. It is surprisingly much greater than zero, pretty much for every human trait that we know, psychological trait that we know how to measure. And that isn't obviously true a priori. You wouldn't necessarily expect that, say, if you have identical twins separated at birth and you growing up in very different environments. And there are cases like that, such as one twin who grew up in a Jewish family in Trinidad, another twin who grew up in a Nazi family in Germany. And then when they met in the lab, they were wearing the same clothes, had the same habits and quirks, and indeed political orientation, not perfectly, so we're talking about statistical resemblances here.
Starting point is 01:16:14 But before you knew how the studies came out, I think most of us wouldn't necessarily have predicted that political, liberal to conservative beliefs, or libertarian to communitarian beliefs, would it at all be correlated between twins separated at birth, for example, or uncorrelated in biological sibling, in adoptive siblings growing up in the same family? So I think that is a significant finding. I don't think it can be blown off. Although again, it's true that it does not speak to David's question of how a particular behavior, including novel creative behavior, was produced by that person at that time. That's just not what Herod of Belvedere is about.
Starting point is 01:16:57 Yes, but even when... You can say whether a gene influences in a population, whether similarities in genes influence a behavior, but unless you have an explanation, you don't know what that influence consists of. It might consist, it might operate via, for example, the person's appearance, so that people who are good-looking are treated differently from people who aren't good-looking. And that would be true even for identical twins reared separately. And there's also the fact that when people grow up, they sometimes change their political views. So the stereotype is that your left wing when you're young and in your 20s, and then when you get into your 40s and 50s and older,
Starting point is 01:17:58 you become more and more right wing. There's the saying attributed to many people that anyone who is not a socialist when they're young has no heart, and anyone who is a socialist when they're old has no head. Yes. I've tried to track that down, and it's been attributed to many quote-sters over the years. It's not completely true, by the way. There's something of a life cycle effect in political attitudes, but there's a much bigger cohort effect. People tend to carry their political beliefs with them as they age.
Starting point is 01:18:28 Well, so they tend to, in our culture. So there are other cultures in which they absolutely always do because only one political orientation is tolerated. In a different society, one that perhaps doesn't exist yet, which is more liberal than ours, it might be that people change their political orientation every five years. Well, that's an empirical... I mean, neither of us can determine that from
Starting point is 01:18:58 our armchairs. I mean, that is an empirical question that you'd have to test. Well, you can't test whether it could happen. Well, that is true. You could test whether it does happen. Yes, exactly. By the way, within the field of behavioral genetics, it's well recognized that heritability per se is a correlational statistic. So if a trait is heritable, it doesn't automatically mean that it is via the effects of the genes on brain operation per se. You're right that it could be via the body, could be via the appearance. It could be indirectly via a personality trait or a cognitive style that inclines someone towards picking some environments over others. So that if you are smart, you're more likely to spend time in libraries and in school. You're going to stay
Starting point is 01:19:51 in school longer. If you're not so smart, you won't. And so the environment, it's not that the environment doesn't matter, but the environment in those cases is actually downstream from genetic differences, sometimes called a gene-environment correlation, where your genetic endowment predisposes you to spend more time in one environment than in another. So also, one of the possible explanations for another surprising finding that some traits such as intelligence tend to increase inheritability as you get older, and effects of familial environment tend to decrease.
Starting point is 01:20:26 Contrary to the mental image one might have that as the twig is bent, so grows the branch, that as we live our lives, we may differentiate. As we live our lives, we tend to be more predictable based on our genetic endowment, perhaps because there are more opportunities for us to place ourselves in the environment that make the best use of our heritable talents. Whereas when you're a kid, you got to spend a lot of time in whatever environment your parents place you in. As you get older, you get to choose your environment. So again, the genetic element is not an alternative to an environmental influence, but in many cases, it may be that the environmental influence is actually an effect of a genetic difference. Yes, yes. Like in the examples we just said. But I just want to carry on like a broken record and say that something is only partly caused, directly caused by genes. It doesn't mean that the rest is caused by environment.
Starting point is 01:21:34 It could be that the rest is caused by creativity, by something that's unique to the person. And it could be that the proportion of behaviors that is unique to the person is itself determined by the genes and by the environment. So in one culture, people are allowed to be more creative in their lives. And William Godwin said something like, I can't say the quote exactly, but it was something like, two boys walk side by side through the same forest. They are not having the same experience. And one reason is that one of them's on the left and one of them's on the right, and they're seeing different bits of forest. And one of them may see a thing that interests him and so on. But it's also because internally, they're walking
Starting point is 01:22:23 through a different environment. One of them is walking through his problems. The other one is walking through his problems. And so if you could in principle account for some behavior, perhaps statistically, entirely in terms of genes and environment, it would mean that the environment was destroying creativity. Let me actually cite some data that may be relevant to this, because they are right out of behavioral genetics. Namely that behavioral genetics sometimes namely that if you... Behavioral genetics sometimes distinguish between the shared or familial environment
Starting point is 01:23:09 and this rather ill-defined entity called the non-shared or unique environment. I think it's actually a misnomer, but it refers to the following empirical phenomenon. So each of the techniques that I explained earlier, let's just take, say, identical twins, say, separated at birth, compare them to identical twins brought up together. Now, the fact that correlation between identical twins separated at birth is much greater than
Starting point is 01:23:34 zero suggests that genes matter. It's not all the environment in terms of this variation. However, identical twins reared together do not correlate at 1.0 or even 0.995. In many traits that correlate around 0.5, now it's interesting that that's greater than zero. It's also interesting that it's less than 1.0. And it means that of the things that affect, say, personality, David, you might want to attribute this to creativity, but they are neither genetic nor are they products of the aspects of the environment that are obvious, that are easy to measure, such as whether you have older siblings, whether you're an only child, whether there are books in the
Starting point is 01:24:21 home, whether there are guns in the home, whether there are guns in the home, whether there are TVs in the home, because those all are the same in twins reared together. Nonetheless, they're not indistinguishable. Now, one way of just characterizing this, well, maybe there's a causal effect of some minute infinitesimal difference. Like if you sleep in the top bunk bed or the bottom bunk bed, or you walk on the left or you walk on the right. Another one is that there could be effects that are, for all intents and purposes, random. That as the brain develops, for example, the genome couldn't possibly specify the wiring diagram down to the last synapse. It makes us human by keeping variation and development within certain
Starting point is 01:25:05 functional boundaries, but within those boundaries, there's a lot of sheer randomness. And perhaps it could be, and David, you'll tell me if this harmonizes with your conception, creativity in the sense that we have cognitive processes that are open-ended, combinatorial, where it's conceivable that small differences in the initial state of thinking through a problem could diverge as we start to think about them more and more, so they may even have started out in essentially random, but end up in very different places. Now, would that count as what you're describing as creativity? Because ultimately, creativity itself has to be, it's not a miracle, it ultimately has to come from some mechanism in the brain, which, and then you could ask the question,
Starting point is 01:25:55 why are the brains of two identical twins, specified by the same genome, why would their creative processes as they unfold take them in different directions? Yes, so that very much captures what I wanted to say. Although I must add that it's always a bit misleading to talk about high-level things, especially in knowledge creation, in terms of the microscopic substrate. Because, you know, if you say the reason why something or other happened, the reason why Napoleon lost the Battle of Waterloo was because, ultimately, it was because an atom went left rather than right several years before. Even if that's true, it doesn't explain what happened. It's only possible to explain the outcome of the Battle of Waterloo by talking
Starting point is 01:26:55 about things like strategy, tactics, guns, numbers of soldiers, political imperatives, you know, all that kind of thing. And it's the same with a child growing up in a home. It's not helpful to say that the reason that the two identical twins have a different outcome in such and such a way is because there was a random difference in their brains, even though it was the same DNA program. And that was eventually amplified into different opinions. It's much more explanatory, one of them decided that his autonomy was more important to him than praise, and the other one didn't. Perhaps that's even too big a thing to say. So even a smaller thing would be legitimate. But I think as small as a molecule
Starting point is 01:28:07 doesn't tell us anything. Right. By the way, much that I agree with, and it's even an answer to Joe's very first question of what do I appreciate in David's work, is one thing that captivated me immediately is that he, like I, locate explanations of human behavior at the level of knowledge, thought, cognition, not the level of neurophysiology. That's why I'm not a neuroscientist, why I'm a cognitive scientist, because I do think the perspicuous level of explaining human thought is at the level of knowledge, information, inference, rather than at the level of neural circuits. The problem in the case of, say, of the twins, though, is that you, because they are in, as best we can tell, the same environment, because they do have the same
Starting point is 01:29:00 or very similar brains, although, again, I think they are different because of random processes during brain development, together with possibly somatic mutations that each one accumulated after conception. So they are different, but it's going to be very difficult to find a cause at the level of explanation that we agree is most perspicuous, given that their past experience is, as best we can tell, indistinguishable. Now, it could be that we could trace it if we followed them every moment of their life with a body cam. We could identify something that predictably, for any person on the planet, given the exposure to that particular perceptual experience, would send them off in a particular direction.
Starting point is 01:29:49 Although it also could be that creativity, which we're both interested in, has some kind of, I don't know if you'd want to call it a chaotic component or undecidable component, but it may be that it's in the nature of creativity that given identical inputs, it may not up at the same place i agree with that i'm going to jump in there yep well i do want to finish on the topic of progress so i have three questions and i'll uncharacteristically play the role of the pessimist here so you two can gang up on me if you like. But the first question, can either of you name any cases in which you would think it reasonable or appropriate to halt or slow the development of a new technology?
Starting point is 01:30:36 Steve? Sure. It depends on the technology and it would depend on the argument, but I can imagine, say, that gain-of-function research in virulent viruses may have costs that outweigh the benefit and the knowledge, and there may be many other examples. I mean, they have to be examined on a case-by-case basis. There's a difference between hosting the research and making the research secret.
Starting point is 01:31:08 So obviously the Manhattan Project had to be kept secret, otherwise it wouldn't work. They were trying to make a weapon and the weapon wouldn't be effective if everybody had it. But whether it's, can I think of an example where it's a good idea to hold the research altogether? Yes. I can't think of an example at the moment. Maybe this gain of function thing is an example where under some circumstances there would be an argument for a moratorium. But the trouble with moratoria is that not everybody will obey it, and the bad actors are definitely not going to obey it if the result would be a military advantage to them. You could put it in a different sense where it is a question of putting
Starting point is 01:32:06 a moratorium but not making the positive decision to invest vast amounts of brain-powered resources into a problem where we should just desist and it won't happen unless you have the equivalence of a Manhattan project. I think we can ask the question I don't know if it's
Starting point is 01:32:21 answerable but would the atomic bomb have been invented if it were not for the special circumstances of a war against the Nazis and an expectation that the Nazis themselves were working on an atomic weapon? That is, does technology necessarily have a kind of a momentum of its own so that it was inevitable that if we had a hundred civilizations in a hundred planets, all of them would develop nuclear weapons at this stage of development? Or was it just really bad luck and would we have been better off? Obviously,
Starting point is 01:32:56 we'd be better off if there were no Nazis, but if there were no Nazis, would we inevitably have developed them or would we have been, since we would have been better off not? The Japanese could have done it as well if they'd put enough resources into it. They had the scientific knowledge, and they had already made biological weapons of mass destruction. They never used them on America, but they did use them on China. So there were bad actors, but all those things, so nuclear weapons and biological weapons, they required the resources of the technological progress that we've now enjoyed, but it just never occurred to anyone to set up, at fantastic expense, a Manhattan Project? We just would better off without nuclear weapons, so why invest all of that brainpower and resources to invent one, unless you were in a specific
Starting point is 01:33:59 circumstance, having reason to believe that the Nazis or Imperial Japan was doing it? I think that although it's very unlikely that they would have been invented in 1944-45, by the time we get to 2023, I think that the secret that this is possible would have got out by now because we knew even then that the amount of energy available in uranium is enormous. And the Germans were, by the way, thinking of making a dirty bomb with it and something less than a nuclear weapon. I think by now it would have been known. And there are countries that have developed nuclear weapons already, like North Korea, who I think by now would have them. And they'd be very much more dangerous if the West didn't have them as well. I wonder, I think what we have to do is think of the counterfactual of other weapons where the technology could exist if countries devoted a comparable amount of resources into developing them? Is it possible to generate tsunamis by planting explosives in deep ocean faults
Starting point is 01:35:34 to trigger earthquakes as a kind of weapon or to control the weather or to cause weather catastrophes by seeding clouds? If we had a Manhattan Project for those, could there have been a development of those technologies where once we have them, we say, well, it's inevitable that we would have them. But in fact, it did depend on particular decisions
Starting point is 01:35:58 to exploit that option, which is not trivial for any society to do, but it did require the positive commitment of resources and a national effort. Yeah, I can imagine that there are universes in which nuclear weapons weren't developed, but say biological weapons were developed. Where about none of them? Let's be optimistic for a second in terms of our thought experiment. Could there be one where we have microchips and vaccines and moonshots, but no weapons of mass destruction?
Starting point is 01:36:30 Well, I don't think there can be many of those because we haven't solved the problem of how to spread the enlightenment to bad actors. We will have to eventually, otherwise we're doomed. I think the reason that a wide variety of weapons of mass destruction, civilization ending weapons, that kind of thing, have not been developed is that the nuclear weapons are in the hands of Enlightenment countries. And so it's pointless to try to attack America with biological weapons, because even if they don't have biological weapons, they will reply with nuclear weapons. So once there are weapons of mass destruction in the hands of the good guys, it gives us decades of leeway in which to try to suppress the existence of bad actors, state-level bad actors. But the fact that it's expensive, that decreases with time. For a country to
Starting point is 01:37:48 make nuclear weapons now requires a much smaller proportion of its national wealth than it did in 1944. And that will increase, that effect will increase in the future. But is that true to the extent that some country beforehand has made that investment so the knowledge is there and that if they hadn't, then it wouldn't, that kind of Moore's Law would not apply unless... It would hold them up by a fixed and finite amount whose cost would go down with time.
Starting point is 01:38:24 Okay, penultimate question. So there's been a well-observed slowdown in scientific and technological progress since about 1970. And there are two broad categories of explanations for this. One is that we have somehow picked all of the low-hanging fruits, and so ideas are getting harder to find. And the second category relies on more kind of cultural explanations. Like for example, maybe academia has become too bureaucratic, maybe society more broadly has become too risk-averse, too safety-focused. Given the magnitude of the slowdown, doesn't it have to be the case that ideas are getting harder to find? Because it seems implausible that a slowdown this large could be purely or mostly driven
Starting point is 01:39:13 by the cultural explanations. David, I think I kind of know your response to this question, and I'm curious to hear your answer. So, Steve, I might start with you. I suspect there's some of each that almost by definition, unless every scientific problem is equally hard, which seems unlikely, we're going to solve the easier ones before the harder ones, and the harder ones are going to take longer to solve. So we do go for the low-hame fruit sooner. Of course, it also depends on how you count the scientific problems and solutions. I think I have an awful lot of breakthroughs
Starting point is 01:39:55 since the 1970s. I don't know how well you could quantify the rate. But then I think one could perhaps point to society-wide commitments that seem to be getting diluted, certainly in the United States. There are many decisions that I think will have the effect of slowing down progress, the main one being the retreat from meritocracy,
Starting point is 01:40:19 the fact that we're seeing gifted programs, specialized science and math schools, educational commitments toward scientific and mathematical excellence being watered down, sometimes on the basis of rather dubious worries about equity across racial groups as superseding the benefits of going all ahead on nurturing scientific talent wherever it's found. So I think it almost has to be some of each. David? So I disagree, as you predicted.
Starting point is 01:41:05 By the way, you said you were only going to be pessimistic on one question. Now you've been pessimistic on a second question. No, I've got three pessimistic questions. Oh, okay. So there's one more. Yeah. So I don't think that there is less low-hanging fruit now than there was a hundred years ago because when there's a fundamental discovery, it not only picks a lot of what turns out to be,
Starting point is 01:41:37 with hindsight, low-hanging fruit, although it didn't seem like that in advance. But it also creates new fruit trees, if I can continue this metaphor. So there are new problems. For example, in my own field, quantum computers, quantum computers couldn't exist before, the field of quantum computers couldn't exist before there was quantum theory and computers. They both had to exist. There's no such thing as it having been low-hanging fruit all along in 1850 as well. It wasn't. It was a thing that emerged, a new problem creating new low-hanging fruit. But then, if I can continue my historical speculation about this as well to make a different point, it wasn't in fact, quantum computers weren't in fact invented in the 1930s or 40s or 50s when they had a deep knowledge of quantum theory and of computation. And both those fields were regarded by their respective sciences
Starting point is 01:42:57 as important and had a lot of people working on them. Although a lot in those days was a lot less than what a lot counts as today. But I think the reason that it took until the 1980s for anyone to even think that the computation might be physics was, as you put it, cultural or societal or whatever. The beginnings of positivism and instrumentalism and the irrationality in wave function collapse and that kind of theory, the breakdown of philosophy as well, and in computer science, the domination of computer science by mathematicians, by people who had what I have called the mathematician's misconception, which is that proof and computation exist abstractly, and that they can be studied abstractly
Starting point is 01:44:14 without needing to know what the underlying physics is. So I think nobody thought of this, and the reason they didn't think of it was that even then, scientific research was directed towards incremental solution of problems rather than anything fundamental. I think another 50 years back, people at the foundations of every field of science wanted to gravitate towards fundamental discoveries. 50 years ago, that was much less true. system, by the career system, by the expectations of scientists, by the way that young people are educated, by everything, the science journalism, everything is just assumed to be incremental.
Starting point is 01:45:22 So that's why journalists always ask me what effect I expect quantum computers to have on the economy, on cryptography or whatever, whereas I'm interested in what effect the quantum theory of computation will have on our understanding of physics. Nobody wants to work on that because that is not rewarded in the present culture. So I think it's, and I don't disagree at all with the cultural factors that Stephen mentioned. In addition to this instrumentalism and over-specialization and the career structure and all that stuff, there is also sheer irrationality. There are irrational trends which have taken over universities, even in STEM subjects. The very fact that I call them STEM subjects is a symptom of this phenomenon.
Starting point is 01:46:27 I'd like to echo what you did. It's certainly true, and I should have thought of it. There really are questions that could not even have been conceived until certain changes in understanding were already in place. Until you had the idea of, say, of Darwin's theory of evolution, there just wasn't a question of, say, what is the adaptive function of music, or does it have one? It's just not a question that would have occurred to anyone.
Starting point is 01:46:56 And I would have to agree that that always happens, that trees sprout maybe to low-hanging fruit, falls and the seas germinate, and they're new trees, whatever metaphor. happens, that trees sprout maybe to low-hanging fruit, falls and the seas germinate into new trees, whatever metaphor. Yeah, and when the human race does not take advantage of that, that's something that needs explanation. That's not going to happen by accident, because there are smart young people out there who want to understand the world and who
Starting point is 01:47:27 want to devote their lives to understanding the world. And if they are diverted into, I don't know if this metaphor works, but into just picking up fruit that's already fallen from the tree, then something malign is producing that. Yes. And we are seeing in a lot of journalism scientific societies a rejection of the Enlightenment idea that the search for truth is possible and desirable. And there are actual guidelines in journals like Nature Human Behavior that you may not publish a result that seems to make one human group look worse than another one, or that might be demeaning or insulting. And if all of our science has to flatter all of us all
Starting point is 01:48:22 the time, that's a different criterion from the most explanatory, most accurate view of the world that you could attain. We are seeing a kind of diversion toward goals other than truth and deep explanation. Yep. Right. I agree, and it is terrible. So, final question, because I know we've come up on time. There may be some physical limit to how much we can grow in the universe. So to give an example, the philosopher Will McCaskill, but also other thinkers like, I think, Holden Karnofsky, have written that if we continue our roughly 2% economic growth rate,
Starting point is 01:49:07 within about 10,000 years, we'll be at the point where we have to produce an implausible amount of output per atom that we can reach in order to sustain that growth rate. So if it is true that there is some physical constraint on how much we can continue to grow, should that make us pessimists about the ultimate course of civilization or civilizations in the universe? So, the short answer is no. It is true that if we continue to grow at 2% per year or whatever it is, then in 10,000 or 100,000 years or whatever it is, we will no longer be able to grow exponentially
Starting point is 01:50:00 because we will be occupying a sphere which is growing. And if the outside of the sphere is growing at the speed of light, then the volume of the sphere can only be increasing like the cube of the time and not like the exponential of the time. So that's true. But that assumes all sorts of things, all sorts of ridiculous extrapolations to 10,000 years in the future. So, for example, Feynman said there's plenty of room at the bottom. There's a lot more room. You know, you assume that the number of atoms will be the limiting thing. What if we make computers out of quarks? What if we make new quarks to make computers out of? Okay, the quarks have a certain size. What about energy? Well, as far as we know now, there's no limit. So we can imagine efficiency of computation
Starting point is 01:51:09 increasing without limit. Then when we get past quarks, we'll get to the quantum gravity domain, which is many orders of magnitude smaller than the quark domain. We don't know what that is. We have no idea how gravitons behave at the quantum gravity level. For all we know, there's an infinite amount of space at the bottom. But, you know, we're now talking about a million years in the future, two million years in the future. Our very theories of cosmology are changing on a timescale of a decade. It's absurd to extrapolate our existing theories of cosmology 10,000 years into the future to obtain a pessimistic conclusion, which there's no reason to believe takes into account the science that will exist at that time.
Starting point is 01:52:12 Also, I'll add, and this is a theme that David has explored as well, humans really thrive on information, on knowledge, not just on stuff. So when you talk about growth, it doesn't mean more and more and more stuff. It could mean better and better and better information, more entertaining virtual experiences, more remarkable discoveries or ways of encountering the world that may not actually need more and more energy, but just rearranging
Starting point is 01:52:45 pixels and bits in different combinations of which we know the space of possibilities is unfathomably big and growth could consist of better for cures for disease based on faster search in the space of possible drugs and many other massive advances that don't actually require more joules of energy or more grams of material but could thrive on information which is not limited. And even information, it might largely require replacing existing information rather than adding to information. So we may not need exponentially growing amounts of computer memory if we have better and better, more and more efficient ways of using computer memory. In the long run, maybe we will, but that long run is so long that our scientific knowledge of today
Starting point is 01:53:51 is not going to be relevant to it. Well, I think that's a nice optimistic note to finish on. It has been an honour and fascinating to host this dialogue. I'll thank each of you individually. And if you like, you can leave us with a brief parting comment. So firstly, David Deutsch, thank you so much for joining me. Well, as I said, thank you for having me. And I'm glad you made a pivot to optimism at the last moment. So stick on that tack. And Steven Pinker, thank you so much for joining me. It's been a pleasure. And I'll just add that optimism is not just a matter of temperament in board or other, but a matter of rationally analyzing our history and rationally analyzing what progress would consist of.
Starting point is 01:54:45 Yep. Thanks so much for listening. You can find a full transcript and a video on my website. Go to jnwpod.com. That's jnwpod.com. As always, the main way you can help the show is by sharing it with people who might be interested. Text this episode to a friend, put it in a WhatsApp group, share it on Twitter. The main way I grow is through word of mouth.
Starting point is 01:55:17 Thank you again. Until next time.
