Making Sense with Sam Harris - #323 — Science & Survival

Episode Date: June 22, 2023

Sam Harris speaks with Martin Rees about the importance of science and scientific institutions. They discuss the provisionality of science, the paradox of authority, genius, civilizational risks, pandemic preparedness, artificial intelligence, nuclear weapons, the far future, the Fermi problem, the prospect of a "Great Filter", the multiverse, string theory, exoplanets, large telescopes, improving scientific institutions, wealth inequality, atheism, the conflict between science and religion, moral realism, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Today I'm speaking with Martin Rees. Martin is a well-known astronomer and the former president of the Royal Society, a fellow and former master of Trinity College, Cambridge, and emeritus professor of cosmology and astrophysics at Cambridge.
Starting point is 00:01:04 He's also a member of the UK House of Lords, and he's the author of several books, most recently, If Science is to Save Us, which is the principal topic of today's conversation. We talk about the importance of science and scientific institutions, the paradoxical provisionality of science, and the strange relationship we have to scientific authority. We talk about genius as a scientific and sociological phenomenon, civilizational risk, pandemic preparedness, artificial intelligence, nuclear weapons, the far future, the Fermi problem, where is everybody out there in the cosmos? The prospect of a great filter explaining the apparent absence of everybody. The multiverse, string theory, exoplanets,
Starting point is 00:01:55 large telescopes, steps toward improving scientific institutions, wealth inequality, atheism, the conflict between science and religion, and this provokes a bit of a debate between us. Martin was not a fan of what the new atheists were up to, nor is he a fan of my version of moral realism. So we talk about rationality and ethics. Unfortunately, we had a few technical difficulties and ran out of studio time, so the debate didn't go on for as long as it might have, but we got about 30 minutes there where we disagreed about religion and ethics a good bit, and I enjoyed it. And now I bring you Martin Rees. I am here with Martin Rees. Martin, thanks for joining me.
Starting point is 00:02:46 Thank you for having me. So you have a new book, If Science is to Save Us, which brings together many of your concerns about existential risk and the importance of science, the promise of it, along with our failures to fully actualize that promise. And so I want to talk about this. I want to talk about existential risk, which you've written about before, and also just the inability of our politics and our institutions to properly grapple with it. But before we jump into those topics, perhaps you can summarize your intellectual background and your life in science. How would you summarize the kinds of topics you've focused on? Yes. Well, I've been very lucky in that I've worked most of my career in astrophysics. And I'm lucky in that when I started, it was an exciting time when we had
Starting point is 00:03:39 the first evidence for the Big Bang, the first evidence for black holes, etc. And I was lucky to be able to write some of the first papers on those topics. And I always advise students starting now to pick a subject where new things are happening, so that you could be the first person to do new things, rather than just filling in the gaps that the old guys left. And so I was lucky there. And I've been even more lucky in that the subject has remained fruitful. And so I would describe my work as being phenomenology mainly, trying to make sense of all the phenomena discovered through observations on the ground
Starting point is 00:04:17 and in space. So that's been my main work. But when I got to the age of 60, I felt I ought to diversify a bit because in my subject in particular, it was taken over rather by computational modeling. And I knew I would never be adept at doing that. So I felt I ought to do something else. And I therefore took on some other duties outside my academic field, more in politics. I became head of the biggest college in Cambridge. I became president of the Royal Society, which is our National Academy of Sciences. And I even became a member of the House of Lords. So I had a wide experience in my 60s of doing this sort of thing. And that really is the background to why I wrote a book which has this rather broad coverage.
Starting point is 00:05:06 Nice, nice. Well, it's a wonderful career, and it's fantastic to have someone who has seen so much scientific progress as well as its failure, both the successes and failures of it to permeate the culture and affect policy. It's great that you are where you are and spending as much time as you are currently in furthering the public understanding of science, because your most recent books have definitely done that. Before we jump into questions of existential risk and the other topics I outlined, I have a first question and concern that's more foundational with respect to how we do science, how we understand its progress, how we communicate
Starting point is 00:05:54 that progress to non-scientists. And it's around the issue of the provisionality of science and really the perpetual provisionality of it. There are no final, final answers, really. And this goes to the philosophy of science and the Popperian observation that we never really finally prove something true. We simply prove false theories false, and we just hew to the best current explanation. But this does throw up a kind of paradox, because what we have in science, in the culture of science, and in just the epistemology of it, is a fundamental distrust of authority, right? We don't slavishly respect authority in science. And yet, the reality is that, you know, to a first approximation, scientific authority matters. You know, no one has time to run all of the experiments going back to the, you know, the origins of any specific science
Starting point is 00:06:57 themselves. We're constantly relying on colleagues to have done things correctly, to not be perpetuating fraud, to not be lying to us. And yet, the truth is, even a Nobel laureate is only as good as his last sentence. If his last sentence didn't make any sense, well, then a graduate student or anyone else can say, that doesn't make any sense, and everyone's on the same ground, epistemologically speaking. So how do you think about how we treat authority and the provisionality of science, both in science and in the communication of it? Well, you're quite right, of course, that science is a progressive enterprise. It's a social and collective enterprise, and we can never be sure we've got the final truth.
Starting point is 00:07:44 But I think we've got to not be too skeptical. We've got to accept that some things are almost incontestable, like Newton's laws of motion, for instance, and also that in many areas of importance socially, it is prudent to listen to the experts rather than to a random person, even though the experts are fallible. And I think people talk about the idea of revolutions overthrowing things. Thomas Kuhn is a famous philosopher who did this. And I think there were one or two revolutions. Quantum theory was one.
Starting point is 00:08:20 But, for instance, it's not true in any sense that Einstein overthrew Newton. Newton is still fine. It's good enough to program all spacecraft going in our solar system. But what Einstein did was got a theory that gave a deeper understanding and had a wider applicability. But Newton's laws within a certain range are still okay. So one can't say that Newton was falsified. We can say that it was a step forward. And if you think of physics again, then our hope would be that there may be some theory unifying all the laws of nature, the four basic forces, and that will incorporate Einstein's theory
Starting point is 00:09:04 as a special case. So it's really a progressive incorporation and broadening of our understanding. How do you think about the quasi-myth of the lone genius in science and what that has done to the perception of science? I say quasi-myth because it's not truly a myth. I mean, you just mentioned Newton, and when you think about the progress he made in about 18 months locked in a garret, avoiding the plague, he seemed to have done about a century at least of normal scientific work. the idea that we should be shining the light of admiration on specific scientists for their breakthroughs and very often ignoring the fact that someone else would have made that breakthrough about 15 minutes later if the first person hadn't. Yes. Well, of course, that is true. And the difference between science and the arts is that if you're an artist, then anything you create is distinctive.
Starting point is 00:10:08 It's your work. It may not last. Whereas in the case of science, if you make a contribution, then it will last probably if you're lucky. But it'll be just one brick in the edifice. So it loses individuality in that in almost all cases, it would have been done by someone else if you hadn't done it. So that's why there is this difference. And it's also why science is a social activity and why those who cut themselves off may be
Starting point is 00:10:39 able to do some kind of work in, say, pure mathematics by themselves. But science involves following developments across a fairly broad field. And in fact, in my book, I discuss this contrast in telling us why, in the case of many artists and composers, their last works are thought their greatest. And that's because once they were influenced when young by whatever the tastes were then, it's just internal development. They don't need to absorb anything else. Whereas no scientist could go on for 40 years just thinking by themselves without having
Starting point is 00:11:16 to absorb new techniques all the time. And it's because scientists and everyone gets less good at absorbing new ideas as they get older that there are very few scientists of whom we would say that their last works are their greatest. Interesting. And that's why I decided to do something else when I was 60. That's why you looked in the mirror at 60 and realized you were not going to start programming. You've met a lot of great scientists over the course of many decades. Have you ever met someone who you would unhesitatingly call a genius? I mean, someone who's just seemed in their scientific abilities or their intellectual abilities generally just to be a
Starting point is 00:12:01 standard deviation beyond all the other smart people you've had the pleasure of knowing? Yes, I think I've met some, but of course, I have a chapter in my book saying that Nobel Prizes may do more harm than good. And that's because the people who make the great discoveries aren't the same people necessarily as those who have the deepest intellects. Many of the great discoveries are made serendipitously. I think in the case of astronomy, the discovery of neutron stars and of the radiation from the Big Bang, those were both discovered by accident, by people who were not of any special intellectual eminence.
Starting point is 00:12:41 But nonetheless, I think we would accept that there are some people who do have special intellectual qualities. Of the people who I've known in my field, I would put Steven Weinberg in that class as someone who obviously had very broad intellectual interests and the ability to do a great deal of work at greater variety and at greater speed than most other people. So there are people clearly in every field who have special talents, but they are not necessarily the people who make the great discoveries, which may be partly accidental or opportunistic. And also, of course, they're not always the people who you want to listen to in a general context. And that's why
Starting point is 00:13:25 it's a mistake if Nobel Prize winners are asked to pontificate on any subject, because they may not be an expert on it. Yeah, yeah. Yeah, Weinberg was wonderful. He died a few years ago, but he was really an impressive person and a beautiful writer too. Yes. Did you know Feynman or any of these other bright lights of physics? Well, I knew Feynman slightly, but I knew some of these other people who were exceptional in their abilities and, of course, did keep going and didn't do just one thing. And I also knew Francis Crick, for instance, who clearly was a rather special intellect, and mathematicians like Andrew Wiles, who, incidentally, did shut himself away for seven years to do his work, but that was exceptional. Yeah, talk about a solitary effort. That was incredible. Okay, well, let's talk about the fate of our species, which I think relies less on the lone genius and much more on our failure or hopefully success in solving a variety of coordination problems and getting our priorities straight and actually using what we know in a way that is cooperative and global. We face many problems that are global in character and seem to
Starting point is 00:14:47 cry out for global solutions, and yet we have a global politics, we even have a domestic politics in every country that is tied to short-term considerations of a sort that really, even if the existential concerns are perfectly clear, we seem unable to take them seriously because there is no political incentive to do that. What are your, if you were going to list your concerns that go by the name of existential risk, maybe we should be a little more capacious than existential. I mean, you know, just enormous risk. I mean, you know, there can still be a few of us left standing to suffer the consequences of our stupidity. What are you worried about? Well, I think I do worry about global setbacks. And the way I like to put it is, in a cosmic context, the Earth's been around for
Starting point is 00:15:45 45 million centuries. But this century is the first when one species, namely our species, can destroy its future or set back its future in a serious way, because we are empowered by technology. And we are having a heavier footprint collectively on the world than was ever the case before. And I think there are two kinds of things we worry about. One kind is the consequences of our heavier impact on nature, and this is climate change, loss of biodiversity, and issues like that, which are long-term concerns. And the other is the fact that our technology
Starting point is 00:16:27 is capable of destroying a large fraction of humanity. Well, that's been true ever since the invention of the H-bomb about 70 years ago. But what worries me even more is that new technologies, bio and cyber, etc., can have a similar effect. We know that a pandemic like COVID-19 can spread globally because we are interconnected in a way we weren't in the past. But what is even more scary is that it's possible now to engineer viruses, which would be even more virulent or more transmissible than the natural ones. And this is my number one nightmare, actually, that this may happen.
Starting point is 00:17:10 And it's my number one nightmare because it's very hard to see how we can actually rule out this possibility. In the case of nuclear weapons, we know it needs large special purpose facilities to build them. And so the kind of monitoring and inspection which we have through the International Atomic Energy Agency can be fairly effective. But even if we try hard to regulate what's done in biological laboratories, even the stage four ones, which are supposed to be the most secure ones, enforcing those regulations globally is almost as hopeless as enforcing the drug laws globally or the tax laws globally, because the delinquents can be just a few individuals or a small company.
Starting point is 00:17:55 And this is a big worry I have, which is that I think if we want to make the world safe against that sort of concern, we've got to be aware of a growing tension between three things we'd like to preserve, namely privacy, security, and freedom. And I think that privacy is going to have to go if we want to ensure that someone is not clandestinely
Starting point is 00:18:18 plotting something that could kill us all. So, there's one class of threats. I want to talk about others, but can you say more on how you imagine the infringement of privacy being implemented here? What would actually help mitigate this risk? Well, obviously, we've given up a lot of our privacy with CCTV cameras and all that sort of thing. And lots of what we have on the internet is probably accessible to surveillance groups. And I think we probably have to accept something like that to a greater extent than, certainly in the US,
Starting point is 00:18:58 will be acceptable now. But I think we've got to accept that these risks are very, very high. And we may have to modify our behavior in that way. Yeah, well, I think there's one infringement of privacy that I don't think anyone would care about, which is for us to be monitoring the spread of pathogens increasingly closely, right? Actually just sampling the air and water and waste and sampling everything we can get our hands on so as to detect something novel and dangerous as early as possible, given that our ability to vaccinate against pathogens seems to have gotten much faster, if not uniformly better. Yes.
Starting point is 00:19:41 Well, of course, the hope is that the technology of vaccine development will accelerate and that will counteract some of these concerns. But I do think that we are going to have to worry very much about the spread of not just natural pandemics that might have a much higher fatality rate than COVID-19, but also these engineered pandemics, which could be even worse. And I think we've got to have some sort of surveillance in order to minimize that. And of course, the other way in which small groups are empowered is through cyber attacks. In fact, I quote in my book from a US Defense Department document from 2012, where they point out that a state-level cyber attack could knock out the electricity grid on the eastern coast of the United States.
Starting point is 00:20:37 And if that happened, they say, I quote, it would merit a nuclear response. It would be catastrophic, obviously, if the electricity grid shut down even for a few days. And what worries me now is that it may not need a state-level actor to do that sort of thing, because there's an arms race, as it were, between the empowerment of the cyber attackers and the empowerment of the cybersecurity people. One doesn't know which side is going to gain.
Starting point is 00:21:05 Yeah, we can add AI to this picture, which I know you've been concerned about. I think the group you helped found, the Center for Existential Risk, was one of the sponsors of that initial conference in Puerto Rico in 2015 that I was happy to go to that first brought everyone into the same room to talk about the threat or lack thereof of general AGI. And you know, we've obviously seen a ton of progress in recent months on narrow AI of the sort that could be presumably useful to anyone who wanted to make a mess with cyber attacks.
Starting point is 00:21:46 Indeed, yes. Yeah. There is an asymmetry here which is intuitive. I don't know if it holds across all classes of risk, but it's easy to assume, and it seems like it must generally always be accurate to assume, that it's easier to break things than to fix them, or easier to make a mess than it is to clean it up. I mean, there's probably something relating to entropy here that we could generalize, but how do you view these asymmetric risks? Because as you point out with nuclear risk, the one fortuitous thing about the technology required to make big bombs is that
Starting point is 00:22:27 there are certain steps in the process that are hard for a single person or even a small number of people to accomplish on their own. I mean, they're just rare materials, they're hard to acquire, etc. And it's more of an engineering challenge than one person can reliably take on, but not so with DNA synthesis, if we fully democratize all those tools and you can just order nucleotides in the mail, and not so at all with cyber and now AI, which is a bit of a surprise. I mean, most of us who are worried about the development of truly powerful AI were assuming that the most powerful versions of it would be inaccessible to almost everyone for the longest time. And you'd have a bunch of researchers
Starting point is 00:23:22 making the decision as to whether or not a system was safe. But now it's seeming that our most powerful AI is being developed already in the wild with everyone, literally millions and millions of people given access to it on a moment-by-moment basis. Yes, that's right. That is scary. And I think we do need to have some sort of regulation, rather like in the case of drugs. We encourage the R&D, but intensive testing is expected before something is released on the market, and we haven't had that in the case of ChatGPT and things of that kind. And I think there needs to be some discussion, some international
Starting point is 00:24:06 agreement about how one does somehow regulate these things so that the worst bugs can be erased before they are released to a large public. This is of course especially difficult in the case of AI because the field is dominated to a large extent by a few multinational conglomerates. And of course, they can, as we know, evade paying proper taxation, and they can evade regulations by moving their country of residence around and all that. And for that reason, it's going to be very hard to enforce regulations globally on those companies. But we've got to try. And indeed, in the last few months,
Starting point is 00:24:55 there have been discussions about how this can be done. It's not just academies, but bodies like the G20 and the UN and other bodies must try to think of some way in which we can regulate these. But of course, we can't really regulate them completely because 100 million people have used this software within a month. So it's going to spread very, very widely. And I think the only point I would make to perhaps be an antidote to the most scary stuff, I think the idea of a machine taking over general superintelligence is still far in the future.
Starting point is 00:25:37 I mean, I'm with those people who think that for a long time we've got to worry far more about human stupidity than artificial intelligence. And I think that's the case. But on the other hand, we do have to worry about bugs and breakdowns in these programs. And that's a problem if you become too dependent on them. If we become dependent globally on something which runs GPS or the internet or the electricity grid network over large areas, then I worry more about the vulnerability if something breaks down and it's hard to repair than I do about an intentional attack.
Starting point is 00:26:27 Yeah. The scary thing is it's easy to think about the harm that bad actors with various technologies can commit, but so much of our risk is the result of what can happen by accident or just inadvertently, just based on human stupidity or just the failure of antiquated systems to function properly. I mean, when you think about the risk of nuclear war, yes, it's scary that there are people like Vladimir Putin, of whom we can reasonably worry whether he may use nuclear weapons to prosecute his own very narrow aims. But the bigger risk, at least in my view, is that we have a system with truly antiquated technology, and it's just easy to see how we could stumble into a full-scale nuclear war with Russia by accident, by just misinformation.
Starting point is 00:27:18 No, indeed. The addition of AI to this picture is terrifying. Yes, I think it's very scary indeed. And I think at least this hype in the last few months has raised these issues on the agenda. And that's a very good thing because one point about getting political action
Starting point is 00:27:39 or getting these things on the political agenda is that politicians have to realize that the public care and everyone now is scared about these threats. And so it will at least motivate the public, sorry, motivate politicians to do what they can to achieve some sort of regulation or ensure the greater safety of these complex systems. And this is, I think, something which the public doesn't recognize really, that politicians, they have scientific advisors,
Starting point is 00:28:11 but those advisors have rather little traction, except when there's an emergency. After COVID-19, they did, but otherwise they don't. And incidentally, to slightly shift gears, that's one of the problems, getting serious action to deal with climate change and similar environmental catastrophes, because they're slow to develop and long range. And therefore, politicians don't have the incentive to deal with them urgently, because they will happen on a timescale longer than the electoral cycle. In some cases, longer than the normal cycle of business investment. But nonetheless, if we want to ensure that we don't get some catastrophic changes in the second half of the
Starting point is 00:28:54 century, they do have to be prioritized. And if that's to happen, then the public has to be aware because the politicians, if voters care, will take action. And that's why in my book, I point out that we scientists are on the whole not very charismatic or influential in general. So we depend very much on individuals who do have a much larger following. And in my book, I quote four people, a disparate quartet, who have had this effect in the climate context. The first is Pope Francis, whose encyclical in 2015 got him a standing ovation at the UN, energized his billion followers,
Starting point is 00:29:41 and made it easy to get the consensus at the Paris Climate Conference in 2015. So he's number one. Number two is our secular Pope, David Attenborough, who certainly in many parts of the world has made people aware of environmental damage, ocean pollution, and climate change. The third I would put is Bill Gates, who has a large following and talks a great deal of sense about technological opportunities and what's realistic and what isn't. So I think he's a positive influence. And fourth, we should think of Greta Thunberg, who has energized the younger generation. Between them, they have in the last five years raised these issues on the agenda, so that governments are starting to act on how to cut carbon emissions and even business has changed its rhetoric,
Starting point is 00:30:34 even if not changing its actions very much. Well, it is a difficult tangle to resolve with this challenge of public messaging and leveraging the attention of the wider world against the short-term incentives that everyone feels very directly. I mean, the thing that is going to move someone through their day from the moment they get out of bed in the morning tends to be what they're truly incentivized to do in the near term. And even if you were going to live by the light of the most rank selfishness, everyone seems to hyperbolically discount their own interests over the course of time. So that, which is to say, it's even hard to care about one's own far future
Starting point is 00:31:26 or the future of one's children to say nothing of the abstract future of humanity and the long-term prospects of the species. So it's amazing to me, even as someone who deals with these issues and fancies himself a clear-eyed ethical voice on many of these topics, I'm amazed at how little time I spend really thinking about the world that my children will inhabit when they're my age and trying to prioritize my resources so as to ensure that that is the best possible world it can be. I mean, you know, so much of what I'm doing is, you know, loosely coupled to that outcome, but it's not felt as a moral imperative in the way that responding to near-term challenges is. So, maybe you can say something about the ethical importance of the future and how we should respond to these kinds of long-tail risks that in any given month, in any given year, are not...
Starting point is 00:32:32 It's hard to argue that they're priorities because each month tends to look like the last. And yet if we improve the trajectory of our progress by 1% a year, you know, 50 years from now will be totally different than if we degrade it by 1% a year. Yes, that's right. That is the problem. And of course, most people do really care about the life chances of their children and grandchildren who may be alive at the end of the century. And I think most of us would agree, despite all the uncertainties in climate modeling,
Starting point is 00:33:11 that there is a serious risk of a catastrophic change by the end of a century, if not by 2050. And this is something which we need to try and plan to avoid now. And it is a big ask, of course, and that's why I think you've got to appeal to people's concern about future generations. But of course, if we ask about how far ahead one should look, how much so-called long-termism one should go for, one then has the legitimate concern that if you don't know what the future is like, and don't know what the preferences and tastes are going to be of people 50 years from now,
Starting point is 00:33:47 then, of course, we can't expect to make sacrifices because they may be inappropriate for what actually turns out. So I think, in the case of climate, we can fairly well predict what will happen if we go on as we are now. But in other contexts, things are changing so fast that we can't make these predictions. And so the idea that we should make sacrifice for people a thousand years from now doesn't make much sense. And in fact, in my book, I present an interesting paradox.
Starting point is 00:34:16 I think about those who built cathedrals in the 12th century, amazing artifacts that were built over a century. And people invested in them and knew they would not be finished in their lifetime. And they planned ahead, even though they thought the world would end in a thousand years, and their spatial horizons were limited to Europe. On the other hand, today, when our time horizons are billions of years and our spatial horizons vast too, we don't plan ahead even 50 years. That may seem a paradox, but there is a reason for it. The reason is that back in the Middle Ages, although the overall horizon was constricted,
Starting point is 00:34:58 they didn't think things would change very much. They thought the life chances of their children and grandchildren would be the same, so they were confident that their grandchildren would appreciate the finished cathedral. Whereas I think, apart from, I would guess, on climate change, perhaps, and biodiversity,
Starting point is 00:35:16 where we don't want to leave a depleted world for our descendants, we can't really predict what people's preferences would be, what would be the key technologies, and therefore it's perhaps inappropriate to plan in too much detail for them. So when things are changing unpredictably, then of course you have a good reason for discounting in the future.
Starting point is 00:35:37 But we mustn't discount it too much, especially in cases when we can be fairly confident of the risks of the status quo. Well, I would add some of the risks we've already mentioned. I mean, we know that living year after year with these invisible dice rolling with respect to the threat of accidental nuclear war, that's just a game we shouldn't be playing, right? So if we can dial back that risk in any given year, that would be a very good thing. And so it is with the spread of pandemics, engineered or natural. But on the question of the far future, you're saying, I think, someplace, that if you go out far enough,
Starting point is 00:36:31 our descendants will not only not be recognizably human, but they will just be unimaginably different from what we are now. What do you actually expect and what sort of time horizon would you give that? I mean, if I could drop you back on Earth 10,000 years from now, what would you expect with respect to our descendants, provided obviously that we don't destroy the possibility of survival in this century? Well, I'd expect significant differences, but let me put this in the cosmic context. We know it's taken four billion years or so for the biosphere of which we are a part today to evolve from the simple beginnings in the primordial slime in the young Earth. And some people tend to feel that we are the culmination of evolution, the top of the tree. But no astronomer can believe that, because we know that the sun is less than
Starting point is 00:37:19 halfway through its life. It's been shining for four and a half billion years, but it's got five or six billion more before it flares up and engulfs the inner planets. And of course, the universe has far longer still, maybe going on forever. And I like to quote Woody Allen: eternity is very long, especially towards the end. We are maybe not even at a halfway stage in the emergence of progressively greater complexity. And I think this century is going to be crucial in that context too, because it may be the stage when, indeed, genetic modification can redesign humans, and maybe cyborgs who are partly electronic will develop.
Starting point is 00:38:12 And that future evolution will be much faster than Darwinian natural selection. It'll be what I like to call secular intelligent design. It'll be us, or machines aiding us, designing a better next generation. So the future changes in intelligence are going to be faster than the slow Darwinian ones, which have led to the emergence of humans over a few hundred thousand years. So it'd be much faster. And so it's completely unimaginable what there will be in billions of years, because there can be rapid changes on this timescale, which is fast compared to the Darwinian timescale. If I could be slightly more specific about my scenario and discuss a
Starting point is 00:38:49 recent article I wrote with Mario Livio and some other things I've written: I think that the first developments of post-humans may happen on Mars, and let me explain this. I wrote another book last year
Starting point is 00:39:05 with Don Goldsmith called The End of Astronauts. And we made the point that as robots get better, the need for sending humans into space is getting weaker all the time. And so I think many of us feel that NASA or other public agencies
Starting point is 00:39:23 shouldn't spend taxpayers' money on human spaceflight, especially something as expensive as trying to send people to Mars, which is hugely expensive if you want to make it almost risk-free. Fuel people and feed them for six months on the journey and give them stuff for the return journey, etc. That's very, very dangerous, and the public probably won't accept the cost or the risk. So, my story is that we should leave human space
Starting point is 00:39:54 flight to adventurers prepared to accept high risks, funded by the billionaires, Musk and Bezos, people like that, because there are people who would be prepared to go to Mars on a one-way trip. In fact, Musk himself has said that he'd like
Starting point is 00:40:10 to die on Mars, but not on impact. And he's now, I think, 51 or 52 years old, so 40 years from now, good luck to him. And there are other people like that who will go, and they will go on a mission which is very risky and therefore far cheaper than anything that NASA would do.
Starting point is 00:40:28 Right. Because NASA's risk-averse, and it's not taxpayers' money anyway. So my scenario is that there may well be a small colony of people living on Mars by the end of the century. Probably adventurers rather like Captain Scott and Amundsen and people like that. And they'll be trying to live in this very hostile environment. And I think this will happen.
Starting point is 00:40:52 But incidentally, I don't agree with Musk that that'll be followed by mass emigration of humans, because living on Mars is much worse than living at the bottom of the ocean or at the South Pole. And dealing with climate change on Earth is a doddle compared to terraforming Mars. So there's no Planet B for risk-averse people. But the reason I've digressed into this
Starting point is 00:41:14 topic is that if you think of these crazy pioneers on Mars, they'd be ill-adapted, but they'd be away from the regulators, and so they will use all the techniques of cyborg and genetic modification to design their progeny
Starting point is 00:41:31 to be better suited to that environment. And they will become a different species within a few hundred years. And the key question then is, will they still be flesh and blood? Or could it be that the human brain is about the limit of what could be done by flesh and blood, and therefore they will
Starting point is 00:41:49 become electronic? And if they become electronic, then of course, they won't need an atmosphere, they may prefer zero-g, and they'll be near immortal, so then they will go off into interstellar space. And so, the far future would be one in which our descendants, our remote descendants,
Starting point is 00:42:07 mediated by these crazy adventurers on Mars, will start spreading through the Milky Way. And that raises the other question: are we the first? Or are there some others? And of course, this leads to SETI and all that. And the relevance to SETI is that if we ask what will be the evidence for anything intelligent, it will be, in my opinion, far more likely to be some electronic artifacts
Starting point is 00:42:36 than a flesh and blood civilization like ours because if you think of the track that our civilization has taken it's lasting a few thousand years at most. Then these electronic progeny will last for billions of years. And so if we had another planet, it's unlikely to be synchronized within a few thousand years in its evolution with ours. So if it's got a head start, then it'll have gone past the flesh and blood civilization stage and would have left electronic progeny.
Starting point is 00:43:06 So the most likely evidence we would find of intelligence would be electronic entities produced by some civilization which had evolved rather like I think may happen here in our solar system, but with a head start. That's a long answer to say that that's a future evolution. What do you make of the fact, and this is the Fermi problem question, what do you make of the fact that we don't
Starting point is 00:43:32 see evidence of any of that technology out there when we look up in all our ways of looking up? I'm glad you asked that because I think this also eases that problem too because Darwinian evolution favors intelligence maybe, but also aggression.
Starting point is 00:43:50 But these electronic entities may evolve to greater intelligence, deeper and deeper thoughts, but there's no reason why they should be aggressive. So they could be out there just thinking deep thoughts. The idea that they'd all be expansionist and come to eat us, as it were, doesn't really make sense. So I think they could be out there and not as conspicuous as a flesh and blood civilization, but they could still be out there. But given the mismatch in timing of the birth of intelligence and technology on any planet that you just referenced. I mean, the fact that, you know, in our case, you know, all of the gains we've made that could possibly show up and announce our presence to the rest of the cosmos have been made in a couple of hundred
Starting point is 00:44:37 years. And we're now envisioning a situation where if life is common, if intelligent life is common in the galaxy, you know, there are planets that could be 20 million years ahead of us or more. So if you shift, if you acknowledge the likely shifts in time in that way, wouldn't you expect to see, and leaving antagonism aside, just the curiosity to explore, wouldn't you expect to see the galaxy teeming with some signs of technological life elsewhere, if in fact it exists? Well, we don't know what their motives would be, and we've no idea what their technology would be. It could be so different that we wouldn't be able to recognize it. But the point I would make is that even if life is already common in our galaxy or had originated in many places,
Starting point is 00:45:32 then in the Drake equation, this term, the lifetime of the civilization, that's what it is. If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense Podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.
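A reference note, not part of the recording: the Drake equation that Rees invokes just before the episode cuts off is conventionally written as

N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where N is the number of detectable civilizations in the galaxy, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per planetary system, f_l the fraction of those on which life arises, f_i the fraction that develop intelligence, f_c the fraction that produce detectable technology, and L the average lifetime of the detectable phase. The term Rees is pointing to is L: his argument above is that electronic progeny would stretch that lifetime from a few thousand years to billions.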
