Making Sense with Sam Harris - #466 — What Is Technology Doing to Us?

Episode Date: March 24, 2026

Sam Harris speaks with Nicholas Christakis about technology, society, and human nature. They discuss the harms of modern communication technology, polarization and anomie, how AI agents can improve human cooperation, the social implications of humanoid robots, Christakis's experience at the center of the woke moral panic at Yale, the Trump administration's assault on American universities and science, the collapse of public trust in institutions, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:06 Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast,
Starting point is 00:00:25 and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. I am here with Nicholas Christakis. Nicholas, thanks for joining me again. Sam, it's so good to see you again. Yeah, great to see you.
Starting point is 00:00:42 Yeah, we don't see each other in person enough, or even on the internet enough, but I always love talking to you. So let's just jump right into it. I'll remind people you are the director of the Human Nature Lab at Yale. You are both an MD and a sociologist and have studied many interesting topics related to, I guess, how human beings and now technology affect one another. And we have too much to talk about. I think I want to start with the question of,
Starting point is 00:01:11 I guess I just want your post-mortem on the present. This last decade, what has technology, specifically information technology, done to us? Yeah, so I think we are going to see the other side of our present dilemma. I think it is going to take half a generation to really be on the other side of it, because I think we've dug ourselves into quite a hole.
Starting point is 00:01:32 I share the opinion, I suspect with you, and certainly with people like Jon Haidt and others, that the kind of technology that we've invented, or the turns that our technology, our communication technology, has taken in the last 10 years, have so far been quite harmful to us, whatever other benefits they've had. I think they've contributed to this polarization,
Starting point is 00:01:53 they've contributed to anomie, they've contributed to some of the mental health crises we've had. I think they've also led to a surveillance state, not just abroad, but shockingly in our own country, where these technologies are being used in ways that I would regard as, you know, quasi-totalitarian, or at least as posing the threat of that. I had a friend long ago, I still have him, he's still a friend of mine. And years ago, he told me he didn't use credit cards and, you know, he refused to get a
Starting point is 00:02:17 cell phone and he wanted, you know, he was trying to be off the grid because he didn't want to be surveyed. And I thought he was like a Luddite nut. Yet now, you know, you know, worry that like my every move is being tracked by someone. So if to the extent that you are arguing, and I think you are, that some of what ails us at present is due to some of these communication technologies and the ways they've been grafted onto very fundamental human desires and exploit those desires. To the extent that we grow as a society to cope with those threats,
Starting point is 00:02:51 I think we will look back at this period as just that, one in which we yield it to and were adversely affected by and ultimately, let's say, overcame some of these threats. not dissimilar, you and I remember, when you couldn't swim in the Boston Harbor, you know, the Charles was polluted, the air was polluted, and we sort of cleaned everything up in some sense. So maybe we'll clean everything up in that way, but it'll take some time. So what is your personal engagement with social media these days? How do you use it if you use it? Well, I got very disgusted with Twitter. And I didn't abandon my account because I didn't want anyone to squat on it.
Starting point is 00:03:32 And the reason I went to Twitter was that I used it as a source of information. Like, it was access to experts in a way that was really, really helpful to me. And I found that I was acquiring a lot of knowledge. I curated a list of people with diverse expertise and beliefs and followed them, and I really enjoyed it. And then I felt like it wasn't appropriate for me to just take from the commons; I had to give to the commons. So I tried to generate content that would, you know, reflect my expertise or my ideas and be useful to others. But in the last few years, I found it to be just incredibly toxic. And the feed, even when I just tried to follow only
Starting point is 00:04:15 my own people, became full of garbage: a lot of trolling, a lot of mostly far-right conspiracy theories, also some left craziness, of course, too. I just couldn't use it anymore. So basically I stopped using Twitter, and I moved to Bluesky a couple of years ago where, I mean, the politics are another issue, but in terms of the science, you know, I follow about 600 accounts, mostly scientists, and I get good scientific content. And I have, you know, reasonable interactions. I have a tenth of the followers I used to have. That's fine. Facebook I don't really use. LinkedIn I don't really use. I just started a YouTube channel aimed at advancing the public understanding of science, called For the Love of Science, but I don't really know how to
Starting point is 00:05:00 use YouTube. So we're just doing videos, you know, once a week. So I'm really just basically on Bluesky for science. That's all I'm doing nowadays. Well, I want to get back to the reputation of science and to your efforts on YouTube in a moment. But just to take, again, social media and what it's doing to us, and the toxicity and conspiracism and trolling that you are familiar with and that everyone listening to this will be familiar with: do you have any sense of what the remedy is? I mean, you know, my personal remedy was to just delete my Twitter account and to now,
Starting point is 00:05:35 you know, only in extremis, look at a Twitter feed, just because there's some breaking news that is best captured, you know, there. But even that, Sam, do you remember that guy who was an expert on military tires? Do you remember that at all? No. It was, I can't remember, I think it was when the Ukraine war started. And there was some guy who was an expert in the maintenance of military vehicles. And he sent a long thread out about, like, how the trucks hadn't been moved around properly, how the tires hadn't been rotated, how all the tires were exploding. I had no idea there was such a person. And I read his whole thread, and I was like, oh my God, it's so interesting.
Starting point is 00:06:14 All of that content, that expertise, as far as I can tell, is gone from Twitter. Has it been vitiated by AI slop, or how is it gone? Well, first of all, whatever the algorithm is, I don't get that content. The AI slop is a serious problem. And my family teases me; I'm known to be particularly gullible.
Starting point is 00:06:35 Right. And actually, my re-narration of this is that I'm not stupid and naive and gullible. I'm trusting. You know, I'm trusting that. You're a good person, in other words. Exactly. Exactly.
Starting point is 00:06:48 That's my story and I'm sticking with it. But the thing is, somehow these algorithms figured out that I like to look at, like, baby elephants. And initially I got real, I think, you know, like BBC photos of baby elephants. And then I think the algorithm started feeding me slop, like, you know, a hippopotamus or a crocodile attacks a baby elephant. Yeah, yeah. Yeah, saved by a rhinoceros. Yeah, exactly. The mommy elephant comes and stomps on the attacker.
Starting point is 00:07:16 The whole thing is totally, it's all fiction. And initially I was, like, really taken in by this stuff. So there's a ton of AI slop. That's a problem. And, I mean, it's just useless, honestly, to me at least. So, I mean, I have nothing particularly good to say about the environment on Twitter right now. It's a multiplicity, you know, a profusion of problems from my perspective. Plus, I wasn't so happy when I came to understand that all of our personal
Starting point is 00:07:46 DMing and stuff on Twitter basically belongs to X and could be used to train AI algorithms and so on. So none of that is appealing to me. Well, I think as we're speaking there's a lawsuit, I think the first of its kind, against social media companies in California. You mentioned Jon Haidt; he's been obviously instrumental in bringing awareness to this issue, especially the harm done to teenagers by social media. What is the path forward? Do you think it's a successful series of lawsuits, a revocation of Section 230, just a virtuous cycle of social contagion, where we all begin to change our minds at once and influence the norms around using social media? Or is it just that AI slop itself will provide some cure, because for every video you see, your first question from here until the end of the world is, you know, is this even real? And we will begin to no longer care what's being presented in these non-gatekept channels.
Starting point is 00:08:49 So I have a few things to say about that. First of all, it's known, as you and everyone listening know, that anonymity contributes to a lot of the problems. And, you know, this is why torturers used to wear masks, and why people would be disinhibited when they went to masked balls, for example, you know, these fancy masked balls we imagine from hundreds of years ago that the aristocracy had. It's disinhibiting to hide your face. And this is also why people in mobs behave awfully: they have a kind of practical anonymity, and that's why you get riots. It's a sort of well-known process. So I think that humans, of course, behave worse when they're anonymous or pseudonymous. And now I have a hard time arguing with this. My problem is that I think that in any entity where you can't be anonymous, behavior is going to be better. On the
Starting point is 00:09:35 other hand, I don't necessarily want to abolish anonymity either because I think that's a tool for totalitarianism. So I think there will be social media companies which require or where people who use them, which afford people the opportunity to be non-anonymous and which people then privilege non-anonymous accounts, which I think will help. So I think tools to afford people the option and also to exploit non- anonymity will help. So like the old blue checkmark on Twitter was a good idea. Yeah. Another thing, you said 230, like I struggle with this as well because on the one hand,
Starting point is 00:10:13 I do think that Section 230 was actually crucial for the emergence of the internet. I do think that there is an argument to be made that these social media companies are just carriers and shouldn't be responsible for their content. On the other hand, I also think, you know, washing their hands of the content entirely doesn't make much sense either. It allows them to sort of wink, wink, and just ignore horrible abuses taking place on their platforms. So I actually don't have an answer to that struggle either. But what I do think is going to happen, just as you said, is that people will learn, and maybe this will be accelerated by AI and AI slop. And I think,
Starting point is 00:10:51 ironically, we may have a kind of return to a privileging of reputable sources. Like, you know, we've migrated so far away from the evening news with Dan Rather kind of thing to everyone being an expert, and, you know, there's all this kind of good stuff, but also crap, online. Ironically, people may be willing to pay a bit more for reliability. You may not believe it unless you read it in The Economist, you know; then you'll believe it. You're not going to believe whatever else you see online. So it may re-privilege, you know, sort of credible, real voices. Yeah, yeah.
Starting point is 00:11:30 I know you've done some research of late on AI and how it changes not just human behavior with respect to technology or information sources, but behavior toward one another, right? It alters the mechanics of human cooperation on some level. Well, you know, take that strand if you want, but, I mean, just generally speaking,
Starting point is 00:11:52 what are your thoughts about AI and where all of this is headed for us. So I want to tell a brief toy story or toy model or toy example of the question you just put. But before I tell that, I want to go on a slight digression. Yeah. And because I struggle a lot, as I suspect you do with, you know, what is happening with these incredibly powerful tools that are being so rapidly developed in our society. And there's this scene in the movie Fiddler on the roof where the protagonist, who's a milkman in the town of Anatevka, you know, around the time of the Russian revolution just before, actually, is a very poor man, goes to the town center, and there's a big argument that's going on there. And someone makes something, and Reptebia, he's the character,
Starting point is 00:12:35 says, you're right. And someone makes the opposite point. And he says, you're right too. And then someone says, Reb Tevye, they can't both be right. And he says, you're also right. And this is how I feel when I listen to debates by experts on AI. I listen to some computer scientists and some tech billionaires who talk about the amazing promise of AI and how there will be some bumps, but mostly it's going to be this extraordinary future, and that to oppose it is to be a Luddite. And I think, you're right. And then I listen to other incredibly expert computer scientists and tech billionaires who say the exact opposite.
Starting point is 00:13:09 Who said, you know, I think I was at an event with Sam Allman a couple of years ago, or a year ago, actually. And he said that he thought there was like a 2% human extinction risk from Aon. Yeah, I think actually I think it's higher and coming from. from him, I think his estimate was higher, but, or maybe, maybe, maybe he's recalibrated it in the interest of public relations, but I think he was more like 20% at one point. Yeah, but I mean, that's crazy to just not only. Yeah, yeah.
Starting point is 00:13:34 No, 2% is terrifying, but 20% is psychotic. So you listen to those guys and you're like, well, they're also right. Well, they can't both be right. And, you know, that's also true. So I have sort of stopped trying to form in my own, because I'm not so expert in this area. But I am expert in another area, which is related to this, which is this issue of how AI is going to change human behavior. And here, just to preface, one set of ideas,
Starting point is 00:14:01 the kind of toy model that I like to throw out there to sort of help people fix ideas is imagine the manufacturer of an Alexa digital assistant. The manufacturer of a digital assistant is very concerned with a human machine interaction. You would never buy an Alexa. If every time you had to speak to it, You said, you had to say, excuse me, Aletza, I'm very sorry to interrupt you.
Starting point is 00:14:23 If you don't mind, would you please tell me the weather tomorrow? Right? That would be an absurd level of politeness. You'd never buy a machine like that. You expect to be able to say, Alexa, weather, and it obediently responds. And that's fine, until you bring the machine into your home, and your children, in speaking to that machine, learn to be rude. And then they go to the playground and they are rude to other children. So what we've been studying in my lab is human-human interactions in the presence of machines.
Starting point is 00:14:54 And specifically, what we've been focusing on is little perturbations in the AI systems, in the machine systems, that modify how the humans interact with each other. And in fact, what we're working on is not so much super-smart AI to replace human cognition, but dumb AI to supplement human interaction. And because the humans are smart, you can think of the AI as a kind of catalyst, like platinum in an organic chemistry reaction, that just facilitates the interactions of humans and helps optimize them. And we've done a broad set of experiments that have shown this is possible: that you can improve human collective and individual performance through the thoughtful injection of AI agents into social systems. Have you done any research, or is there any research, on the first point you made, though,
Starting point is 00:15:44 that kind of a, you know, coarse and instrumental use of AI has bleed through into human relations. And so kids are actually less socially appropriate if they're been barking orders at their bots all day. We haven't looked at that specifically. Like, that's just an example. I think that work has been done. And I think I think that work comports with my sort of hypothetical example. Well, what would you imagine in the case of humanoid robots? I mean, this is something that, honestly, I haven't spent that much. much time visualizing, but whenever I have spoken about it, I think we can stipulate that we will eventually get out of the uncanny valley and have robots that, that look, you know, if not perfectly human, you know, in some sense, better than human, right? They'll be perfect
Starting point is 00:16:31 humanoid in some sense. You know, when we want our AI shaped like that, we'll make it shaped like that. I spoke to Paul Bloom about this some years ago, in response to the series Westworld. We looked at that, and we thought one piece of philosophy accomplished by that series is that it revealed that a place like Westworld probably couldn't exist, because you'd really have to be a psychopath to go on vacation and rape, you know, perfect facsimiles of human women and girls, and then come home and tell your friends what a good time you had, you know, raping and killing robots that were indistinguishable from humans. And so, unless, you know, maybe you could set up
Starting point is 00:17:14 a theme park that would act like a bug light for psychopaths in that way. But, I mean, just normal people would not want to have a perfectly, seemingly veridical experience of being a moral monster. And you'd imagine some real contamination, both of how they felt about themselves and of how other people saw them, if we did that. So just imagine we get to the place where we're talking to humanoid robots and making demands upon them. I would imagine that our social graces will come creeping back in. I mean, honestly, even just typing instructions into an LLM, I find myself being inappropriately polite, right? I mean, I'll use the word please, and I think that probably costs Sam Altman some number of dollars every time I do it. How's that going
Starting point is 00:18:01 to change us? Well, believe it or not, first of all, I'm not 100% sure I know the answer, but I'll speculate along with you. Believe it or not, this also is an old topic. And it actually came up prior to the, well, certainly prior to the modern instantiation of Westworld, after the old movie. There's a book, I know it's over 20 years old now, called something like Love and Sex with Robots. People were speculating about what it would mean in some futuristic world in which we had the capacity to have intimate relations with machines. And there were two schools of thought on this. If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get
Starting point is 00:18:41 access to all full-length episodes of the Making Sense podcast. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at samharris.org.
