Making Sense with Sam Harris - #8 — Ask Me Anything 1

Episode Date: April 24, 2015

Sam Harris talks about atheism, artificial intelligence, rape, public speaking, meditation, consciousness, free will, intellectual honesty, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming a subscriber.

For today's episode, I've decided to do an Ask Me Anything podcast. So I solicited questions on Twitter and got some hundreds of them, and I will do my best to answer as many as I can over the next hour or so. And these will, by definition, not be on a theme.

How does the struggle of atheists for acceptance compare with that of women, blacks, gays, etc.? How long until true equality arrives? Well, I'm not sure
I would want to draw any strict analogy between the civil rights struggles of blacks and gays and women and that of atheists, because while atheism is, as a political identity, more or less a non-starter in American politics at the moment, which is to say that you just cannot have a political career, or certainly no reasonable expectation of one, while being out of the closet as an atheist, nevertheless, atheists are disproportionately well-educated and well-off financially and powerful. Far more than 5% of the people you meet in journalism or academia or Silicon Valley are atheists. This is just my anecdotal impression, and I don't know of any scientific polling that has been done on this question, apart from among scientists, where the vast majority are non-believers, and the proportion of non-believers only increases among the most successful and influential scientists. But I'm reasonably confident that when you are in the company of wealthy, connected, powerful people, internet billionaires and movie stars and the people who are running major financial and academic institutions, you are disproportionately in the presence of people who are atheists. So while I'm as eager as anyone to see atheism get its due, or rather to see reason and common sense get their due in our political discourse, I don't think it's fair to say that atheists have the same kind of civil rights problem that blacks and gays and women traditionally have had in our society. Now, in the Muslim world, things reverse entirely, because, of course, to be exposed as an atheist is in many places to
live under a death sentence. And that's a problem that the civilized world really has to address.

What is your view on laws that prevent people from not hiring on the basis of religion? Well, here I'm sure I'm going to stumble into another controversy. I tend to take a libertarian view of questions of this kind. So I think people should be free to embarrass themselves publicly, to destroy their reputations, to be boycotted. So if you want to open a restaurant that only serves redheaded people, I think you should be free to do that. If you only want to serve people over six feet tall, you should be free to do that. And by definition, if you only want to serve Muslims or you only want to serve whites or if you only want to serve Jews, if you want a club that excludes everyone but yourself, I think you should be free to do all these things. And people should be free to write about you, picket in front of your store or clubhouse or restaurant. But I think law is too blunt an instrument, and this is not to disregard all of the gains we've made for civil rights based on the laws. But at this point, I think we should probably handle these things through conversation and reputation management rather than legislate who businesses have to hire or serve. I think if the social attitudes of a business are egregious and truly out of step with those of the community, well, then they will suffer a penalty. And it's only because 50 years ago, the attitudes of the community were so unenlightened that we needed rather heavy-handed laws to ram through a sane and compassionate social agenda. And some might argue that we're still in that situation. I think less so by the hour. And at a certain point, I think law is the wrong mechanism to enforce positive social attitudes. And of course,
my enemies will summarize this as Sam Harris thinks that it should be legal to discriminate against blacks and gays and women.

Can you say something about artificial intelligence, AI, and your concerns about it? Yeah, well, this is a very interesting topic. The question of how to build artificial intelligence that isn't going to destroy us is something that I've only begun to pay attention to, and it is a rather deep and consequential problem. I went to a conference in Puerto Rico focused on this issue, organized by the Future of Life Institute, and I was brought there by a friend, Elon Musk, who no doubt many of you have heard of. And Elon had recently said publicly that he thought AI was the greatest threat to human
survival, perhaps greater than nuclear weapons. And many people took that as an incredibly hyperbolic statement. Now, knowing Elon and knowing how close to the details he's apt to be, I took it as a very interesting diagnosis of a problem. But I wasn't quite sure what I thought about it because I hadn't really spent much time focusing on the progress we've been making in AI and its implications. So I went to this conference in San Juan, held by and for the people who are closest to doing this work. This was not open to the public. I think I was one of maybe two or three interlopers there who hadn't been invited, but sort of got himself invited. And what was fascinating about that was that this was a collection of people ranging from those who were very worried, like Elon and others who felt that we have to find some way to pull the brakes, even though that seems somewhat hopeless, to the people who were doing the work most energetically and most wanted to convince others not to worry about having to pull the brakes. And what was interesting there is that what I heard outside this conference and what you hear, let's say, on edge.org or in general discussions about the prospects of making real breakthroughs in artificial intelligence, is a time frame of 50 to 100 years before anything terribly scary
or terribly interesting is going to happen. In this conference, that was almost never the case. Everyone who was still trying to ensure that they were doing this as safely as possible was still conceding that a time frame of five or ten years admitted of rather alarming progress. And so when I came back from that conference, the Edge question for 2015 just happened to be on the topic of AI, so I wrote a short piece distilling what my view now was. Perhaps I'll just read that. It won't take too long, and hopefully it won't bore you.

Can we avoid a digital apocalypse? It seems increasingly likely that we will one day build machines that possess superhuman intelligence. We need only continue to produce better computers,
which we will unless we destroy ourselves or meet our end some other way. We already know that it's possible for mere matter to acquire "general intelligence," the ability to learn new concepts and employ them in unfamiliar contexts, because the 1,200 cc's of salty porridge inside our heads has managed it. There's no reason to believe that a suitably advanced digital computer couldn't do the same. It's often said that the near-term goal is to build a machine that possesses "human-level intelligence." But unless we specifically emulate a human brain, with all its limitations, this is a false goal. The computer on which I'm writing these words already possesses superhuman powers of memory and calculation. It also has potential access to most of the world's information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence, known as AGI, will exceed human performance on every task for which it is considered a source of intelligence in the first place. Whether such a machine would necessarily be conscious is an open question.
But conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now a subject of much colorful speculation. So just to make things perfectly clear here, all you have to grant to get your fears up and running is that we will continue to make progress in hardware and software design unless we destroy ourselves some other way, and that there's nothing magical about the wetware we have running inside our heads, and that an intelligent machine could be built of other material.
Once you grant those two things, which I think everyone who has thought about the problem will grant, I can't imagine a scientist not granting that, one, we're going to make progress in computer design unless something terrible happens, and two, that there's nothing magical about biological material where intelligence is concerned. Once you've granted those two propositions, you now will be hard-pressed to find some handhold with which to resist your slide into real concern about where this is all going.

So back to the text. One way of glimpsing the coming risk is to imagine what might happen if we accomplished our aims and built a superhuman AGI that behaved exactly as intended. Such a
machine would quickly free us from drudgery and even from the inconvenience of doing most intellectual work. What would follow under our current political order? There's no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance. Once we built the perfect labor-saving device, the cost of manufacturing new devices would approach the cost of raw materials. Absent a willingness to immediately put this new capital at the service of all humanity, a few of us would enjoy unimaginable wealth and the rest would be free to starve. Even in the presence of a truly benign AGI, we could find ourselves slipping back to a
state of nature, policed by drones. And what would the Russians or the Chinese do if they learned that some company in Silicon Valley was about to develop a superintelligent AGI? This machine would, by definition, be capable of waging war, terrestrial and cyber, with unprecedented power. How would our adversaries behave on the brink of such a winner-take-all scenario? Mere rumors of an AGI might cause our species to go berserk. It is sobering to admit that chaos seems a probable outcome, even in the best-case scenario, in which the AGI remains perfectly obedient. But of course, we cannot assume the best-case scenario. In fact, "the control problem," the solution to which would guarantee obedience in any advanced AGI, appears quite difficult to solve. Imagine, for instance,
that we build a computer that is no more intelligent than the average team of researchers at Stanford or MIT, but because it functions on a digital timescale, it runs a million times faster than the minds that built it. Set it humming for a week and it would perform 20,000 years of human-level intellectual work. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do? The fact that we seem to be hastening towards some sort of digital apocalypse poses several
intellectual and ethical challenges. For instance, in order to have any hope that a superintelligent AGI would have values commensurate with our own, we would have to instill those values in it, or otherwise get it to emulate us. But whose values should count? Should everyone get a vote in creating the utility function of our new colossus? If nothing else, the invention of an AGI would force us to resolve some very old and boring arguments in moral philosophy.
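As an aside, the time-scale arithmetic in the passage above is easy to check. The sketch below is just that check; the million-fold speedup and the one-week run are the hypothetical numbers from the text, not estimates of any real system.

```python
# Check the claim that a machine running a million times faster than the
# human minds that built it would do roughly 20,000 years of human-level
# work in one week. Both inputs are the hypothetical from the passage above.
speedup = 1_000_000            # assumed speed advantage over human thought
real_weeks = 1                 # wall-clock time the machine is left running

subjective_weeks = real_weeks * speedup    # weeks of human-level work done
subjective_years = subjective_weeks / 52   # ~52 weeks per year

print(f"{subjective_years:,.0f} years of human-level work")
```

That works out to a little over 19,000 years, which is where the essay's rounded figure of 20,000 comes from.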
And perhaps I don't need to spell this out any further, but it's interesting that once you imagine having to build values into a superintelligent AGI, you then realize that you need to get straight about what you think is good. I think the advent of this technology would cut through moral relativism like a laser. Who is going to want to engineer an intolerance of free speech into a machine that makes tens of thousands of years of human-level intellectual progress every time it cycles? I don't think so. Even designing self-driving cars presents potential ethical problems that we need to get straight about. Any self-driving car needs some algorithm by which to rank-order bad outcomes. So if you want a car that will avoid a child who dashes into the road in front of it, perhaps by driving up on the sidewalk, you also want a car that will avoid the people on the
sidewalk, or preferentially hit a mailbox instead of a baby carriage, right? So you need some intelligent sorting of outcomes here. Well, these are moral decisions. Do you want a car that is unbiased with respect to the age and size of people or the color of their skin? Would you like a car that was more likely to run over white people than people of color? That may seem like a peculiar question, but if you run trolley-problem tests on liberals (and this is the one psychological experiment that I'm aware of where liberals reliably come out looking worse than conservatives), if you test them on whether or not they would be willing to sacrifice one life to save five, or one life to save a hundred, and you give subtle clues as to the color of the people involved, if you say that LeBron belongs to the Harlem Boys Choir and there's some scenario under which he can be sacrificed to save Chip and his friends who study music at Juilliard, they simply won't take a consequentialist approach to the problem. They will not sacrifice a black life to save any number of white lives, whereas if you reverse the variables, they will sacrifice a white life to save black lives rather reliably. Now, conservatives, strangely, are unbiased in this paradigm, which is to say colorblind. Well, do we like bias here? Do you want a self-driving car that preferentially avoids people of color? You have to decide. We either build it one way or the other. So this is an interesting phenomenon where technology is going to force us to admit to
ourselves that we know right from wrong in a way that many people imagine isn't possible.

Okay, back to the text. However, a true AGI would probably acquire new values, or at least develop novel and perhaps dangerous near-term goals. What steps might a superintelligence take to ensure its continued survival or access to computational resources? Whether the behavior of such a machine would remain compatible with human flourishing might be the most important question our species ever asks. The problem, however, is that only a few of us seem to be in a position to think this question through. Indeed, the moment of truth might arrive amid circumstances that are disconcertingly informal and inauspicious.
Picture ten young men in a room, several of them with undiagnosed Asperger's, drinking Red Bull, and wondering whether to flip a switch. Should any single company or research group be able to decide the fate of humanity? The question nearly answers itself. And yet it is beginning to seem likely that some small number of smart people will one day roll these dice, and the temptation will be understandable. We confront problems, Alzheimer's disease, climate change, economic instability, for which superhuman intelligence could offer a solution. In fact, the only thing nearly as scary as building an AGI is the prospect of not building one.
Nevertheless, those who are closest to doing this work have the greatest responsibility to anticipate its dangers. Yes, other fields pose extraordinary risks. But the difference between AGI and something like synthetic biology is that in the latter, the most dangerous innovations, such as germline mutation, are not the most tempting, commercially or ethically. With AGI, the most powerful methods, such as recursive self-improvement, are precisely those that entail the most risk. We seem to be in the process of building a god. Now would be a good time to wonder whether it will or even can be a good one.
I guess I should probably explain this final notion of recursive self-improvement. The idea is that once you build an AGI that is superhuman, the way that it will truly take off is if it is given, or develops, an ability to improve its own code. Just imagine something, again, that could make literally tens of thousands of years of human-level intellectual progress and would then stand in relation to us, intellectually, the way we stand in relation to chickens and sea urchins and snails. Now, this may sound like a crazy thing to worry about. It isn't. Again, the only assumptions are that we will continue to make progress and that there's nothing magical about biological substrate where intelligence is concerned. And again, I'm agnostic as to whether or not such a machine would by definition be conscious. So let's assume it's not conscious. So what? You're still talking about something that will have the functional power of a god, whether or not the lights are on. So perhaps you got more than you wanted from me on that topic.

I like you, but as an atheist, I find statism to be a dangerous form of religion, and I won't paint a billion people as barbarians. Okay, well, there are two axes to grind there. This whole business about statism I find profoundly uninteresting. This is a separate conversation about the problems of U.S. foreign policy, the problems of bureaucracy, the problems of the tyranny of the majority or the tyranny of empowered minorities, oligarchy. These are all topics that can be spoken about. To compare a powerful state per se with the problem of religion
is just to make a hash of everything that's important to talk about here. And the idea that we could do without a powerful state at this point is just preposterous. So if you're an anarchist, you're either 50 or 100 years before your time, notwithstanding what I just said about artificial intelligence, or you're an imbecile. We need the police. We need the fire department. We need people to pave our roads. We can't privatize all of this stuff. And privatizing it would beget its own problems. So whenever I hear someone say, you worship the religion of the state, I know I'm in the presence of someone who just isn't ready for a conversation about religion and isn't ready to honestly talk about the degree
to which we rely, and are wise to rely, on the powers of a well-functioning government. Now, insofar as our government doesn't function well, well, then we have to change it. We have to resist its overreaching into our lives. But behind this concern about statism is always some confusion about the problem of religion. And again, this person ends his almost-question with, I won't paint a billion people as barbarians. Well, neither will I. And again, when I criticize Islam, I'm criticizing the doctrine of Islam, and insofar as people adhere to it to the letter, then I get worried. But there'll be much more on this topic when I publish my book with Maajid Nawaz. I originally said that was happening in
June. That's unfortunately been pushed back to October, because it is still hard to publish a physical book, apparently. But you will have your fill of my thoughts about how to reform Islam when that comes out.

What do you think of Cenk Uygur's The Young Turks attack on you and Ayaan recently? Well, I guess I've ceased to think about it. I pushed back against it briefly, saying on Twitter that, obviously, my three hours with Cenk appear to have been a waste of time, at least for him. I think many people got some benefit from listening to us go round and round and get wrapped around the same axle for three hours. It actually wasn't a waste of time for him, because I heard from a former employee there that that was literally the most profitable interview they've ever put on their show. I don't know what he made off of that interview, and I don't begrudge him making money off his show, obviously, but I feel that Cenk now systematically acts in bad faith on this topic. He has made no effort to accurately represent my views. Again, it's child's play to pick a single sentence from something that I've said or written and to hew to a misinterpretation of that sentence and attack me. The thing I finally realized here, and this is not just a problem with Cenk but with all the usual suspects and all of their followers on Twitter, is this: I've just reluctantly begun to accept the fact that when someone hates you, they take so much pleasure from hating you that it's impossible to correct a misunderstanding. That would force your
opponent to relinquish some of the pleasure he's taking in hating you. This is an attitude that I think we're all familiar with to some degree. Once you're convinced that somebody is a total asshole, where you've lost any sense that you should give them the benefit of the doubt, and then you see one more transgression from them, another thing that confirms whatever attitude in them you hate, whether they're homophobic or they're racist or they don't believe in climate change or whatever it is. Once that view of that person has calcified in you and you see yet one further iteration of this thing, well, then you're not inclined to second-guess it. You're not inclined to try to read between the lines. And in fact, if someone shows you that the transgression isn't what it seemed, well, then you can be slow to admit that. This is not totally foreign to me. I notice this in myself. It's something that I do my best to shed. I think it's an extremely unflattering quality of mind. This is not where I want to be caught standing. But my opponents seem to be always standing here, and that makes conversation impossible.

Okay. How did you become such a good public speaker? I have a speech class this fall, and I'm sick about it. Well, I certainly wouldn't claim that I am such a good public speaker. I think at best I'm an adequate one.
And as I wrote on my blog a couple of years ago in an article entitled The Silent Crowd, I really did have a problem with this. I was really terrified to speak publicly early in life and overcame it, and overcame it rather quickly, just by doing it. Meditation was helpful, but meditation is insufficient for this kind of thing. You really have to do the thing you're afraid of. You can't just get yourself into some position of confidence beforehand and hope to then do it without any anxiety. No, you have to be willing to feel the anxiety. And what is anxiety? Anxiety is just a sensation of energy in the body. It has no content, really. It has no philosophical content. It need not have any psychological content. It's like indigestion.
You wouldn't read a pattern of painful sensation in your abdomen after a bad meal and imagine that it says something negative about you as a person. That is a negative experience peripheral to your identity, but something about anxiety suggests that it lands more at the core of who we are: you're a fearful person. But you need not have this relationship to anxiety. Anxiety is a hormonal cascade that you can just become willing to feel, and even interested in. And it need not be the impediment to doing the thing that you are anxious about doing. Not at all.
And so I go into this in more detail on my blog, but this is just something to get over. It's worth pushing past this and not caring whether you appear anxious while doing it. Just do your thing, and you will eventually realize that you can do it happily. But, you know, some people are natural speakers, natural performers. This is what they are comfortable doing, they love to do it, they're loose, they have access to the full bandwidth of their personality in that space. And, you know, I am not that way. Even being comfortable doing it, I'm not that way. It doesn't come naturally, and I'm happy I've fooled at least you. If I'm a good public speaker, it's because I have something interesting to say. If you pay close attention, you'll see that I just kind of drone on in a monotone, and my lack of style is to some degree a necessity, because I want to approach public speaking very much as a conversation. I get uncomfortable
whenever my pattern of speech departs too much from what it would be in a conversation with one person at a dinner table. Now, if you're standing in front of a thousand people, it's going to depart somewhat. It's just the nature of the situation. But I try to be as conversational as possible. And when I'm not, and when someone else isn't, it begins to strike me as dishonest. Yet, I will grant you that the performance aspect of public speaking allows for what many people appreciate as the best examples of oratory. So you just listen to, you know, Martin Luther King Jr. He is so far from a natural speech pattern. It is pure performance. Just imagine being seated at a table at a dinner party across from someone
who was speaking to you the way MLK spoke in his speeches. You would know that you were in the presence of a madman. It would be intolerable, right? It would be terrifying. So that distance between what is normal in conversation and what is dramaturgical in a public speech, I don't want to traverse it too far. I'm not comfortable doing it, and I actually tend to find it suspect as a member of the audience.

What is really entailed in Dzogchen meditation? Is it the loss of "I," that is, the self, or does it go beyond that? Well, traditionally speaking, it goes beyond that in certain ways, but I think the core point is what's called non-dual awareness, to lose the sense of subject-object awareness in the present moment, and to just rest as open, centerless consciousness, and
just fully relax into whatever is arising, without hope and fear, without praise and blame, without grasping at the pleasant or pushing away the unpleasant. So it's a kind of mindfulness, but it's a mindfulness of there being nothing at all to grasp at as self. So, yes, selflessness is the core insight, though they don't tend to talk about selflessness. They talk about non-duality.

Any suggestions or advice if I want to do two years of silent meditation on retreat? Yeah, well, just don't do it by yourself. You really need guidance if you're going to go into a retreat of any significant length. So find a meditation center where they're doing a practice that you really want to do
and find a teacher you really admire and who you trust, and then follow their instructions.

A couple more questions about meditation. Why do we do it sitting up? If having a straight back is valuable, why not do it lying down? Well, you can do it lying down. It's just harder. We're so deeply conditioned to fall asleep lying down
that most people find that meditation is just a precursor to a nap in that case, but it can be a very nice nap. And if you're injured or if you're just tired of sitting, lying down is certainly a reasonable thing to attempt. Most people find that it is harder to stay awake, and people often have a problem with sleepiness even while sitting up, so that's the reason.

I haven't read any of your books, but want to soon. Does your view that there's no free will give you sympathy for your enemies? Yes, it does. I've talked about this a little bit. It is an antidote to hatred. I have a long list of people who I really would hate if I thought they could behave differently than they do.
Now, occasionally I'm taken in by the illusion that they could and should be behaving differently, but when I have my wits about me, I realize that I am dealing with people who are going to do what they're going to do, and my efforts to talk sense into them are going to be as ineffectual as they will be, and there's really no place to stand where this was going to be other than it is. And so it really is an antidote to hating some of the dangerously deluded and impossibly smug people I have the misfortune of colliding with on a regular basis.

Can the form of human consciousness be distinguished from its contents, or are the two identical? That's an interesting question. Insofar as I understand it, there are a couple of different ways I can interpret what you've said there, but I think human consciousness clearly has a form both conscious and unconscious. When you're talking about the contents of consciousness, you're talking about what is actually appearing before the light of consciousness, what is available to attention in each moment, what can be noticed. But there's much that can't be noticed which is structuring what can. So the contents are dependent upon unconscious processes, which are noticeably human in that the contents they deliver are human. So, for instance, an example I often cite is our ability to understand and produce language, the ability to follow grammatical rules and to notice when they're broken. All of these processes are unconscious, and yet this is not something that dogs do. It's not something that chimps do. We're
the only ones we know of that do it, and all of this gets tuned in a very particular way in each person's case. For instance, I'm totally insensitive to the grammatical rules of Japanese. When Japanese is spoken in my presence, I don't hear much of anything linguistic. So the difference between being an effortless parser of meaning and syntax in English and being little better than a chimpanzee in the presence of Japanese, that difference is, again, unconscious, yet determining the contents of consciousness. So there are both unconscious and conscious ways in which consciousness, in our case, is demonstrably human. And I don't really think you can talk about the humanness of consciousness beyond that. Because for me, consciousness is simply the fact that it's like something to have an experience
Starting point is 00:33:23 of the world. The fact that there's a qualitative character to anything, that's consciousness. And if our computers ever acquire that, well, then our computers will be conscious. What's your opinion of the rise of the new nationalist right in Europe and the issue of Islam there? There's a very unhappy linkage there. The nationalist right has an agenda beyond resisting the immigration of Muslims, but clearly we have a kind of fascism playing both sides of the board here, and that's a very unhappy situation and a recipe for disaster at a certain point. I think the problem of Islam in Europe is of deep concern now, and especially so probably in France, although it's bad in
Starting point is 00:34:07 many countries. You have a level of radicalization and a disinclination to assimilate on the part of far too many people. And it's a problem unlike the situation in the United States for reasons that are purely a matter of historical accident. But I think it's a cause of great concern. And it is, as I said in that article on fascism, it is a double concern that liberals are sleepwalking on this issue. on this issue, and that to express a concern about Islam in Europe gets you branded as a right-winger or a nationalist or a xenophobe, because these are the only people who have been articulating the problem up to now, with a few notable exceptions like Ayaan Hirsi Ali and
Starting point is 00:34:58 Douglas Murray in the UK and Majid Nawaz, who I've mentioned a lot recently. So it's not all fascists who are talking about the problem of Islamism and jihadism in Europe, but for the most part, liberals have been totally out to lunch on this topic. And one wonders what it will take to get them to come around. Lots of questions here. Apologies for not getting to the tiniest fraction of them. There appear to be now hundreds. What charity organization do you think is doing the best work? There are two charities unrelated to anything that I'm involved in that I, by default, give money to. Doctors Without Borders and St. Jude's Children's Hospital. Both do amazing work and work for which there really is no substitute. So for instance, when people use any of the affiliate links
Starting point is 00:35:53 on my website or you see in a blog post where I link to a book, let's say I'm interviewing an author and I link to his book. If you buy his book or anything else on Amazon through that affiliate link, well, then 50% of that royalty goes to charity, and rather often it's Doctors Without Borders or St. Jude's. I just think when you're helping people in refugee camps in Africa or close to the site of a famine or natural disaster or civil war, we're doing pioneering research on pediatric cancer and never turning any child away at your hospital for want of funds. It's hard to see a better allocation of money than either of those two projects. I reject religion entirely, but I'm curious how you with complete certainty know there is no God.
Starting point is 00:36:45 What proof do you have? Well, this has the burden of proof reversed. It's not that I have proof that there is no God. I can't prove that there's no Apollo or Zeus or Isis or Shiva. These are all gods who might exist, but of course there's no good evidence that they do. And there are many things that suggest that these are all the products of literature. When you're looking on the mythology shelf in a bookstore, you are essentially perusing the graveyard of dead gods. If you'd like to continue listening to this conversation, you'll need to subscribe
Starting point is 00:37:26 at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes, and AMAs, and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
Starting point is 00:37:41 and relies entirely on listener support. And you can subscribe now at SamHarris.org.
