Making Sense with Sam Harris - #427 — AI Friends & Enemies

Episode Date: July 25, 2025

Sam Harris speaks with Paul Bloom about AI and current events. They discuss the state of LLMs, the risks and benefits of AI companionship, what it means to attribute consciousness to AI, relationships with AI as a form of psychosis, Trump’s attacks on science and academia, what Trump may be trying to hide in the Epstein files, how damaging Trump’s connections to Epstein might be, MAGA’s obsession with conspiracy theories, questions surrounding Epstein’s death, Paul’s research on revenge, Sam’s falling out with Elon Musk, where things went wrong for Elon, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. I am here with Paul Bloom.
Starting point is 00:00:38 Paul, thanks for joining me. Sam, it's good to talk to you, as always. Yeah, great to see you. We were just talking off mic; it's been years. It's crazy how that happens. But it has been. I've been following you, though.
Starting point is 00:00:51 You've been doing well. Yeah, nice. Yeah, well, I've been following you. I've read you in the New Yorker recently, writing about AI. I think you've written at least two articles there since we actually wrote a joint article for the New York Times seven years ago. Can you believe that? Oh, yeah. About Westworld and the philosophical import of watching it and realizing that only a psychopath could go to such a theme park and rape Dolores and kill children, et cetera. And I think we predicted no such theme park will ever exist, because it'll just be a bug light for psychopaths, and normal people will come back, and if they do anything like that, they'll scandalize their friends and loved ones and be treated like psychopaths, appropriately. We'll see. We may be proven wrong. Who knows? Who knows in this crazy time? But yeah, that was a fun article to write. And I think
Starting point is 00:01:45 we wrestled with the dilemmas of dealing with, you know, entities that at least appear to be conscious, and the moral struggles that that leads to. Yeah. Well, so I think we'll start with AI, but remind people what you're doing and what kinds of problems you focus on. Though we haven't spoken for several years, I think you still probably hold the title of most repeat podcast guest at this point. I haven't counted the episodes, but you've been on a bunch, though it's been a while. So remind people what you focus on as a psychologist. Yeah, I'm a psychology professor. I have positions at Yale, but I'm located at the University of Toronto, where I study largely moral psychology. But I'm interested in development, in issues of consciousness, issues of
Starting point is 00:02:32 learning, notions of fairness and kindness, compassion, empathy. I've actually been thinking, it's funny to be talking to you, because my newest project, which I've just been getting excited by, has to do with the origins of morality, and the spark was a podcast with you talking to Tom Holland, author of Dominion. Oh, nice. How so? What was the actual spark? Yeah, I found it a great conversation, and he made the case that a lot of the sort of morality that maybe you and I would agree with, you know, an idea of respecting universal rights, and a morality that isn't based on the powerful but instead maybe in some way respects the weak.
Starting point is 00:03:12 It's not the product of rational thought, not the product of secular reasoning, but instead the product of Christianity. And he makes this argument in Dominion and in several other places. And I've been thinking about it. I mean, it's a serious point. He's not the only one to make it. It deserves consideration, but I think it's mistaken. Yeah. And so I think my next project, and I've been thinking about it,
Starting point is 00:03:34 and I thank you for the podcast for getting me going on this, is to make the alternative case, to argue that a lot of our morality is inborn and innate. I'm a developmental psychologist; that's part of what I do. And a lot of morality is a product of reasoned and rational thought. I don't deny that culture plays a role, including Christianity. But I don't think that's the big story. Nice. Well, I look forward to you making that argument, because that's one of these shibboleths of organized religion that I think needs to be retired. And so you're just the guy to do it. So let me know when you produce that book and we'll talk about it. I mean, I've got to say, I heard you guys talk, and you engaged properly on an idea. And Holland, whom I've never met, seems really smart and quite a scholar. And he gets, I think, credence from the
Starting point is 00:04:24 fact that he's in some way arguing against his own side. He himself isn't a devout Christian; he's secular. Yeah. But that doesn't make him right. So I'm very interested in engaging with these ideas. So how have you been thinking about AI of late? What has surprised you in the seven years or so since we first started talking about it? A mixture of awe and horror. I'm not a doomer. I'm not as much of a doomer as some people. I don't know. I don't know when I last checked with you. Sam, what would you say your P(doom) is? I've never actually quantified it for myself, but I think it's non-negligible. I mean, it's very hard for me to imagine it actually occurring in the most spectacularly bad way of a very fast takeoff, you know, an intelligence explosion that ruins everything almost immediately.
Starting point is 00:05:16 But, you know, fast or slow, I think it's well worth worrying about, because I don't think the probability is tiny. I mean, you know, I would put it in double digits. I don't know what those double digits are, but I wouldn't put it in single digits, given the logic of the situation. I think we're kind of in the same place. Yeah. I mean, people always talk about, you know, you have a benevolent, superintelligent AI, and you tell it to make paper clips, and it destroys the world and turns us all into paper clips. But there's another vision of malevolent AI. I've been thinking about the rise of what's called MechaHitler. Yeah, MechaHitler. You know, it's a simpler thing. Some deranged billionaire creates an AI that fashions itself on Hitler. The Trump Defense Department purchases it and connects it to all of its weapons systems. And hijinks ensue. How could you come up with such an outlandish scenario that could never happen? There's just no way.
Starting point is 00:06:12 It's a bizarre fantasy. And by the way, it also makes porn, so, just to get to the trifecta. So anyway, like a lot of people, I worry about that. At the same time, I find AI an increasingly indispensable part of my intellectual life. Oh, interesting. You know, I have questions, and not just specific ones. I use it as a substitute for Google. I say, you know, where's a good Indonesian restaurant in my neighborhood?
Starting point is 00:06:38 And, you know, how do you convert these euros into dollars? But I also have a question like, I don't know, I got into an argument with somebody about revenge. So, what is the cross-cultural evidence for revenge being a universal, and who would argue against that? And three seconds later, boom: a bunch of papers, books, thoughtful discussion, thoughtful argument. Mixed in are hallucinations. Occasionally, I find it cites a paper written by me that I've never written. But it's astonishing. I've been credited with things. Yeah. Yeah. It's told me I've interviewed people I've never interviewed. And it's amazingly apologetic when you correct it. But, I'm so sorry, yes, you are totally right, let me now give you citations
Starting point is 00:07:18 of papers that actually exist. Yeah. And in your workday, when you write, when you prepare for interviews, how much do you use it? Well, I've just started experimenting with it in a way that's, um, probably not the usual use case. So we have fed everything I've written and everything I've said on the podcast into ChatGPT. Right. Right, yeah. So we have, well, actually, we have two things. We've created a chatbot that is me that's model-agnostic, so it can be run on ChatGPT or Claude or whatever the best model is.
Starting point is 00:08:00 You can swap in whatever model seems best. So this is, you know, a layer above, at the system prompt level. And it has, again, access to everything. It's something like 12 million words of me, right? So it's a lot. You know, that would be a lot of books. We've just begun playing with this thing. But it's, um, it's impressive, because it's also using a professional voice clone. So it sounds like me. And it's every bit as monotonous as me
Starting point is 00:08:21 in its delivery. I mean, I'm almost tailor-made to be cloned, because I already speak like a robot. Must be agonizing to listen to. It is, but it hallucinates, and it's capable of being weird, so I don't know that we're ever going to unleash this thing on the world. But it's interesting,
Starting point is 00:08:37 because even having access to every episode of my podcast, it will still hallucinate an interview that never happened. You know, it will still tell me that I interviewed somebody that I never interviewed. And so that part's still strange. But presumably, you know, this is as bad as it'll ever be, and the general models will get better and better, one imagines.
Starting point is 00:08:59 We're looking at each other on video, and I imagine it doesn't do video yet. But if we were talking on the phone, or without video, would I be able to tell quickly that I was talking to an AI and not to you? Only because it would be able to produce far more coherent and comprehensive responses to any question. I mean, because it's hooked up to whatever the best LLM is at the moment, if you said, give me 17 points as to, you know, the cause of the Great Depression, it would give you exactly 17 points detailing the cause of the Great Depression. And I could not do that. So it would fail the Turing test, as all these LLMs do, by passing it so spectacularly and instantaneously. Actually, that's a surprise. I want to ask you about that. That's a surprise for me, how not a thing the Turing test turned out to be. It's like the Turing test was the staple of, you know, the cognitive science literature and just our imaginings in advance of credible AI. We just thought, okay, there's going to be this moment where we're just not going to be able to tell whether it's a human or whether it's a bot. And that's going to be somehow philosophically important and culturally interesting. But, you know, overnight we were given LLMs that fail the Turing test because of how well they pass it. Yeah. And this is no longer a thing. It's like there was
Starting point is 00:10:24 never a Turing test moment that I caught. All of a sudden, you and I, we have, I have a superintelligent being to talk to on my iPhone. And I'll be honest, and I think other psychologists should be honest about this: if you had asked me a month before this thing came out how far away we were from such a machine, I would have said 10, 20, 30 years, maybe 100 years. Right. And now we have it. Now we have a machine we can talk to, and it sounds like a person. Except, like you say, it's just a lot smarter.
Starting point is 00:10:52 Yeah. And it is mind-boggling. I mean, it's an interesting psychological fact how quickly we get used to it. It's as if, you know, aliens landed, you know, next to the Washington Monument and now they walk among us. Oh, yeah. Well, that's the way things go. Yeah.
Starting point is 00:11:06 Oh, you develop teleporters. Then we got teleporters. and we just take it for granted now. Yeah. Well, so now what are your thoughts about the implications, you know, psychological and otherwise, of, of AI companionship? I mean, so at the time we're recording this, has been recently in the news,
Starting point is 00:11:25 stories of AI-induced psychosis. I mean, people get led down the primrose path of their delusion by this amazingly sycophantic AI that just encourages them in their Messiah complex or whatever flavor it is. And I think literally an hour before we came on here, I saw an article about chat TPT encouraging people to pursue various satanic rituals and telling them how to do a proper blood offering that entailed slitting their wrists. And as one does in the presence of a superintelligence, I know you just wrote a piece in the
Starting point is 00:12:02 New Yorker that people should read on this. But give me your sense of what we're on the cusp of here. I have a mild form of that delusion in that I think every one of my substack drafts is brilliant. I'm told it's just, you know, Paul, you have outdone yourself. This is sensitive, humane, as always with you. And no matter what I tell it to, I say, you don't have to suck up to me so much. A little bit, but not so much. It just, and now I kind of believe that I'm much smarter than I used to be.
Starting point is 00:12:28 I have somebody very smart telling me. What I, my article is kind of nuanced in that I argue two things. One thing is, there's a lot of lonely people in the world, and a lot of people suffer from loneliness, and particularly old people, depending on how you count it, you know, under some surveys, about half of people over 65, say they're lonely. And then you get to people in, like, a nursing home, maybe they have dementia, maybe they have some sort of problem that makes them really difficult to talk with. And maybe they don't have doting grandchildren surrounding them every hour of the day. Maybe they don't have any family at all. And maybe they're not millionaires, so they can't afford to pay some Hormo to listen to. to them. And if chat GPT or Claude, one of his AI companions could make their lives happier,
Starting point is 00:13:12 make them feel loved, one that's respected. That's nothing but good. I think, you know, I think in some ways it's like powerful painkillers are powerful opiates, which is, I'm not sure. I don't think 15-year-old should get them, but somebody who's 90 in a lot of pain, sure, lay it on. And I feel the same way with this. So that's the, that's the pro side. The con side is, I am worried, and you're touching on it. It was this illusion talk. I'm worried about the long-term effects of these syncopathic sucking up AIs where every joke you make is hilarious.
Starting point is 00:13:48 Every story you tell is interesting. You know, I mean, the way I put it is, if I ever ask, am I the asshole? The answer is, you know, affirm, no, not you, they're an asshole. And I think, you know, I'm an evolutionary theorist through and through. And loneliness is awful, but loneliness is a valuable signal. It's a signal that you're messing up. It's a signal that says, you've got to get out of your house. You've got to talk to people.
Starting point is 00:14:11 You've got to open up to apps. You've got to say yes to the brunch invitations. And if you're lonely when you interact with people, you feel not understood, not respect, not love. You've got to up your game. It's a signal. And like a lot of signals like pain, sometimes is a signal that where people are in a situation where it's not going to do many good. But often for the rest of us, it's a signal that makes us better. Yeah. I think I'd be happier if I could shut off, as a teenager, I'd be happier if I could shut off the switch of loneliness and embarrassment, shame, and all of those things. But they're useful. And so the second part of the article argues that continuous exposure to these AI companions could have a negative effect because, well, for one thing, you're not going to want to talk to people who are far less positive than AI's. And for another, when you do talk to them, you have not been socially entrained to do so properly.
Starting point is 00:15:02 Yeah, it's interesting. So I'll be leaving the dementia case aside me, I totally agree with you that that is a very strongly paternalistic moment where anything that helps is fine. It doesn't matter that it's imaginary or that it's that it's encouraging of delusion. I mean, this, we're talking about somebody with dementia. But so just imagine in the normal, healthy case of people who just get enraptured by increasingly compelling relationships with AI. I mean, you can imagine. So right now we're, we're, we're We've got chatbots that are still fairly wonky. They hallucinate. They're obviously sycophantic. But I just imagine it gets to the place where, I mean, forget about Westworld and perfectly humanoid robots, but very shortly, I mean, it might already exist in some quarters already. We're going to have video avatars, you know, like a Zoom call with an AI that is going to be out of the uncanny valley, I would imagine immediately.
Starting point is 00:15:59 I mean, I've seen some of this, the video products. which, like sci-fi movie trailers, which are, they don't, they don't look perfectly photorealistic, but they're, they're getting close. And you can imagine six months from now, it's just going to look like a gorgeous actor or actress talking to you. That's going to be, imagine that becomes your assistant who knows everything about you. He or she has read all your email and kept your schedule and is advising you and helping you write your books, et cetera. And not making errors and. seeming increasingly indistinguishable from just a super-intelligent locus of conscious life, right? I mean, it might even seem, it might even claim to be conscious if we build it that way. And let's just stipulate that, at least for this part of the conversation, that it won't be conscious, right? That this is all an illusion, right? It's just a, it's no more conscious than your iPad is currently. And yet it becomes such a powerful illusion that, people just, most people, I mean, the people, I guess philosophers of mind might still be clinging
Starting point is 00:17:09 to their agnosticism or skepticism by their fingernails, but most people will just get lulled into the presumption of a relationship with this thing. And the truth is, it could become the most important relationship many people have. Again, it's so useful, so knowledgeable, is always present, right? They might spend six hours a day talking to their assistant. And, what does it mean if they spend years like that, basically just gazing into a fun house mirror of fake cognition and fake relationship? We've seen, I mean, I'm a believer that sometimes the best philosophy, our movies do excellent philosophy, and the movie, Her came out, I think, in 2013, is an example of
Starting point is 00:17:57 this guy, you know, a lonely guy, normal, lonely guy, but gets connected in AI. assistant named Samantha, played by Scarlett Johansson, and falls in love of her. And she does all, the first thing she says to him. And what a, what a meat-cute is, I see you have like 3,000 emails that haven't been answered. You want me to answer them all for you. Yeah. Yeah. I fell in love of her there. But, you know, but the thing is, you're watching a movie and you're listening to her, talk to him, and you fall in love of her, too. I think we've evolved in a world where when somebody talks to you and acts normal and seems to have emotions, you assume there's a consciousness behind it.
Starting point is 00:18:37 You know, evolution has not prepared us for these, you know, extraordinary fakes, these extraordinary, you know, golems that elude all of the behavior associated with conscience and don't have it. So we will think of it as conscious. There will be some, you know, philosophers who insist that they're not conscious, but, But even they will, you know, sneak back from their classes and then in the middle of the night, you know, turn on their phones and start saying, you know, I'm lonely. Let's talk. Yeah. And then the effects of it, well, one effect is real people can't give you that, you know, married, very happily married, but so no way forgets about things that I told her. And sometimes she doesn't want to hear my long, boring story. She wants to tell her story instead. And sometimes it's three in the morning. And I could shake her away because I have this really interesting idea I want to share,
Starting point is 00:19:30 but maybe that's not for the best. She'll be grumpy at me. And because the thing is, she's a person. And so she has her own set of priorities and interests. So too, of all my friends, they're just people. And they have other priorities in her life besides me. Now, in some way, this is, I think, what makes when you reflect upon it, the AI companion have less value.
Starting point is 00:19:53 You know, here you and I are. And what that means is that you decided to take. your time to talk to me and I decide take my time to talk to you. And that's the value. When I switch on my lamp, I don't feel, oh my gosh, this is great. It decided to light up for me. It didn't have a choice at all. Dei has no choice at all. So I think in some part of our mind, we realize there's a lot less value here. But I do think in the end, the scenario you paint has got to become very compelling and real people are going to fall short. And it's not clear what to do with that. Now, there's something I think you've just, I think you've just come up with a
Starting point is 00:20:29 fairly brilliant product update to some future AI companion, which is a kind of Pavlovian reinforcement schedule of attention where it's like the, the AI could say, listen, I, you know, you're, I want you to think a little bit more about this topic and then get back to me because you're really not up to talking about it right now. You know, come back tomorrow, right? And I wonder that. Yeah, that would be an interesting. experience to have with your AI that you have subscribed to. I've wondered at. Like, you asked the AI a question and says, is that really like a good question?
Starting point is 00:21:03 Does it seem like a question you couldn't just figure out just by thinking for a minute? I know everything. That's really what you want to ask me? Don't you have something deeper? You were talking to super intelligent, you know, God and you want to know how to end a limerick. Right. Really? Yeah.
Starting point is 00:21:19 I would wonder if these things how people would react if these things came with dials. Obviously not, maybe a big physical dial. You went on a big physical dial. And the dial is pushback. So when it's set at zero,
Starting point is 00:21:34 it's just everything you say is wonderful. And just, and but I think we do want some pushback. Now, I think in some way, we really want less pushback than we say we do. And it's this way of real people too. So everybody says, yeah, Oh, I like when people argue with me. I like when people call me out of my bullshit.
Starting point is 00:21:53 But what we really mean is we want people to push back a little bit and then say, ah, you convinced me. Right. You know, you really showed me. You know, I thought you were full of it. But now upon reflection, you've really out-argued me. We want them to fight and then acquiesce. But you turn the dial even further.
Starting point is 00:22:11 We'll let AI and say, you know, we've been talking about us for a long time. I feel you are not very smart. You're just not getting it. I'm going to take a little break. And you mull over your stupidity, called for human. The recalcitrant style. That's what we could build in.
Starting point is 00:22:27 All right, we're going to... I feel it's going to be the worst business idea. We're going to get rich, Paul. This is an old business with AI that calls you on your bullshit. That's really the business they do this century. But so what are we to think about this prospect of spending more and more time in dialogue under the sway of a pervasive illusion of relationship wherein there is actually no relationship
Starting point is 00:22:52 because there's nothing that it's like to be Shatch E.P.T.6 is talking to us perfectly if you'd like to continue listening to this conversation, you'll need to subscribe at samharis.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast. The Making Sense podcast is ad-free and relies entirely on listener support,
Starting point is 00:23:13 and you can subscribe now at Sam Harris. harris.org.
