Making Sense with Sam Harris - #319 — The Digital Multiverse

Episode Date: May 15, 2023

Sam Harris speaks with David Auerbach about the problematic structure of online networks. They discuss the tradeoffs between liberty and cooperation, the impossibility of fighting misinformation, bottom-up vs top-down influences, recent developments in AI, deepfakes, the instability of skepticism, the future of social media, the weaknesses of LLMs, breaking up digital bubbles, online identity and privacy, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Transcript
Starting point is 00:00:00 Thank you. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
Starting point is 00:00:49 Today I'm speaking with David Auerbach. David is a writer, technologist, and software engineer. He previously worked at Google and Microsoft after graduating from Yale University. His writing has appeared in the Times Literary Supplement, MIT Technology Review, The Nation, N Plus One, Tablet, and elsewhere. He teaches the history of computation at the New Center for Research and Practice. And his most recent book is Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities. And that is the topic of today's conversation. We talk about the growth and the problems of online networks, the trade-offs
Starting point is 00:01:31 between liberty and cooperation, the apparent impossibility of getting rid of misinformation, bottom-up versus top-down influences, recent developments in AI, deep fakes, the instability of skepticism when faced with so much misinformation, the future of social media, the weaknesses of large language models, breaking up digital bubbles, online identity and privacy, and other topics. And now I bring you David Arbach. I am here with David Arbach. David, thanks for joining me. Thanks for having me, Sam. So you've written an all-too-timely book. That book is Mega Nets, How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner
Starting point is 00:02:25 Realities. And I devoured this book this week, and it really speaks to our current circumstance in a comprehensive way. So I want to track through the case you make for diagnosing our problem and offering some possible solutions. But before we jump in, perhaps you can just summarize your background because you have an interesting intellectual history that straddles tech and the humanities in a nice way. So tell people what you've been up to over these many years. Yeah, I've sort of been all over the place. I mean, from a young age, I really loved computers, but also literature. So I tried to sort of keep a foot in both, but the direction of the times sort of pointed me towards software engineering. And so I did end up working as a software engineer at Microsoft around the turn of the century, and then Google
Starting point is 00:03:21 sort of in their meteoric rise days. And I spent a little over 10 years doing software engineering before deciding that it was time to, I don't know, step out and search for another perspective because I'd been looking into literature and philosophy at that time, during that time. And I wanted to see if I could do something that would conjoin those two sides of myself. And so I set out on writing and bringing what I hope is a unique vantage to my opinions on technology, but also society more generally and how technology is affecting it. society more generally and how technology is affecting it. I wrote a tech column for Slate for some years, and I was a policy wonk in D.C., which it's a great experience to have.
Starting point is 00:04:13 I think that in our hyper-specialized world, it actually is really good to have hands-on experience in wildly different domains. And there's nothing like attending graduate classes at the same time as working at Google to make you understand what unquestioned assumptions each culture has. Yeah. Yeah. That's interesting. Well, so let's jump in. Let's jump in, I guess, starting with the title of your book. What is a meganet and how do they commandeer our lives on a daily basis? And actually, I'll add a third question to that is why a new word? Because I think that every neologism needs a justification. So the official definition is that a meganet is a
Starting point is 00:04:59 persisting, evolving, and opaque data network that really does determine how we see the world. And it consists both of the algorithm and AI-driven servers that connect up online life, as well as for these people to for people and the algorithms to engage in sort of a feedback loop of accelerating content production distribution and so it leads to these three properties that i identify which are um velocity volume and virality in other words the size the, and the feedback that it generates, that it keeps compounding on itself. And what I say in the book, Meganets, is that these systems have gotten too fast and too big to be controlled in any sort of fine-grained way, that if we ask a CEO or a corporation to keep track of every bit of
Starting point is 00:06:07 content that's published and squash out the stuff that we don't like for whatever definition of we don't like you want, that's a non-starter at this point. It's just too fast. It also leads to inevitable viral blow-ups and crises that happen when a certain meme or whatever takes off. And by the time you're trying to stamp it out, the horse is already bolted from the barn. And to that last question of why a new word, my experience as a software engineer was that we really underestimated the human component. We saw the systems getting
Starting point is 00:06:45 bigger, but I really feel no one foresaw just how much assigning a little bit of control to every single user so that they were influencing the weights and the algorithms, that their data was going into the system and having a little nudge on the servers and the algorithms. That influence collectively was actually a major, major force that couldn't be shaped through algorithmic, technological, or top-down means. And so I coined the term to reflect that it requires both the human and the machine component and that we ignore either of them at our peril because it's the combination of the two that's led us to where we are. That machines by themselves could not do,
Starting point is 00:07:32 could not create the world that we exist in today. It's because we're hooked up to them constantly in this feedback loop of reacting and shaping and spreading and forwarding that you are seeing these out-of-control behaviors take place that make these systems feel much more organic and ecological, like the weather, more than what we think of traditionally as technological networks. Well, what specific systems are we talking about? I think many people listening to you so far will think that what you're talking about must be limited to social networks, social media companies, I guess, including things like YouTube. What are some examples of mega nets?
Starting point is 00:08:15 Those are the ones where I think we feel it and we observe it most directly because that's what we interact with on a daily basis. But these systems are actually present at many levels in life. There are things that are somewhat adjacent to social networks, like online gaming, which has been said to be the core of what's going to become the metaverse, if the metaverse is still a thing. But the gamification of reality and online and offline life is proceeding apace. So I think that we should look at that. But also things like cryptocurrency networks, where things get out of control very, very quickly, in some cases by design, but also for reasons that may not immediately be clear even to the people who are using cryptocurrency networks. Beyond that, even to the people who are using cryptocurrency networks. Beyond that, we also see governments getting into this business. In the West, at least, the integration of government services
Starting point is 00:09:13 and identification systems has been a bit slow to happen. But in India, citizen identity has already been centralized around a single identifier called Aadhaar. And if you look at how it is connecting up the various systems and forms of identity, you know, it's not as though in India you don't have a separate driver's license number and a separate social security number. Everything is tied around the Aadhaar number. And that also produces these sorts of feedback effects because more and more systems get pulled in around that identifier and start reacting to one another. And AI is an interesting case because AI certainly qualifies as a meganet or at least a component of meganet. And one of the things I argue is that a lot of what we see in AI that
Starting point is 00:09:57 disturbs us so much is less AI technology per se and more a consequence of these meganet systems that we've already set up and that we can see some of the things that trouble us about AI already happening in the more out of control, but less AI influenced systems like, you know, recommendation engines or cryptocurrency networks, for example. So those are some of the things, but I think you could also, you could extend it, you could extend it to more. I think that in the economic realm is probably where we're going to feel the strongest. We see it the most
Starting point is 00:10:30 in the sociological arena online, but this sort of phenomenon happens in my opinion whenever you get enough people hooked up to a network in such a way that you get these feedback effects. And that is in no way restricted to social networks. I mean, if you want an instance that
Starting point is 00:10:49 combines them, look at the GameStop stock, or the stonk as it was called, where a bunch of Redditors managed to send GameStop soaring in the absence of any change of its fundamentals. And all the institutional investors and the SEC were very annoyed by this, but they couldn't find that it was actually illegal because there wasn't any actual collusion going on. What was happening is that it was blowing up like a meme. And that's the sort of thing where I say it's not necessarily going to stay on social networks because it can spread to the rest of our world. I think people will have an intuitive sense of what you mean by virality or velocity, but can you spell out what you mean by feedback in this case?
Starting point is 00:11:37 So, yeah, virality is my V word for feedback. feedback. And by feedback, I simply mean that before you have had time to look at the result of a system, the system has already incorporated the last iteration of its state into its next state. In other words, it's like you're never walking into the same river twice, to quote the old Heraclitus. You're never walking into the same algorithm or data stream twice. We tend to think of algorithms as fixed things that we can, you know, we can tweak or twist a gear on, but actually our interactions constantly shape those algorithms and change their weights. You're not, if you do a search on Twitter or Facebook or Google, you're not guaranteed to get the same thing a minute later than you got a minute ago. You might, but the very act of you searching has already become a new piece of input into the weights of those algorithms. And that's what I mean by feedback, that you have these effects
Starting point is 00:12:36 that cause viral portions to amplify and get out of control before anyone has had a chance. It's not as though someone is commandeering this from the top down. Some people try to commandeer it, but I actually think it's much harder to do than people think. And that conspiratorial thinking is kind of a comfort because you think, okay, well, all this chaos and misery I'm experiencing wherever is because Facebook or Microsoft wants me to be miserable. But in actuality, having been on the inside, I don't think any of us were thrilled or expected that our algorithms would come to be so dependent on the actual interactions of users. Okay. So I think we're probably going to focus on the social media component because I think that is, and we'll talk about AI as well, but there's what you describe as you kind of lay out the nature of the problem and offer some remedies. landscape of trade-offs. And many people are becoming more and more sensitive to some of these trade-offs. And they're, in some cases, picking one extreme, more or less, to the
Starting point is 00:13:52 exclusion of every other consideration. So I think in the information and misinformation space, many of us now perceive that there's some trade-off between basic sanity and liberty, right? The freedom to just say anything at any scale, with any velocity, with any consequences, is in tension with our ability to know what's real in any given moment and to cooperate effectively and to maintain the normal healthy bonds of an intact society, to have a workable politics, it seems to require that we deal with misinformation and disinformation in some way. And yet, the so-called free speech absolutists tend to view any attempt to deal with the basic problem of a shattered epistemology
Starting point is 00:14:48 as an Orwellian overreach and abridgment of our civil liberties. And what's interesting is many of the people who are most adamant that any attempt to deal with misinformation or disinformation is just code for an infringement of free speech in the U.S. context, you know, the infringement of the First Amendment. Many of these same people have a very different view of the right to assembly, which is also enshrined in the First Amendment. So some of the same people, I won't name them here, but they will hear themselves referred to, have been very focused on, in particular, the dysfunction in a city like San Francisco
Starting point is 00:15:35 with all the homelessness and the mental illness being played out on the sidewalks and the open-air drug markets. They've been very concerned that we admit that it is an unacceptable negative externality to have people defecating on our sidewalks. You can't tolerate this awful status quo under the aegis of, well, this is just freedom of assembly. Everyone has a right to congregate on the sidewalk. Are you going to abridge that right? You know, everyone has a right to congregate on the sidewalk.
Starting point is 00:16:04 Are you going to abridge that right? What are you, Stalin? But these same people will not address the quite similar concerns about a digital sewer that we're now all living in and having to swim through and the digital anarchy that results when we can't have a conversation that converges on basic facts about anything, whether it's a pandemic or whether an election was run appropriately, etc. So let's start with this trade-off or perceived trade-off between understanding our world and being able to speak to one another about consequential issues and the freedom to say anything at any scale. It's interesting because I think a lot of it is affected by the issue of volume, that we live in this world now of informational abundance, and that's very different. We used to live in a world of informational scarcity where there was actually selection pressure and there had to
Starting point is 00:17:06 ultimately be only a couple of views that won the day. I don't think that's really true anymore. I think that, and I think you see this, that those efforts to stamp out misinformation that some people have tremendous problems with, they aren't all that effective. that some people have tremendous problems with, they aren't all that effective. That you see these factions persist no matter what you do to them. And people complain bitterly, but the weird thing is that
Starting point is 00:17:32 they don't seem to have been stamped out all that much except in extremely virulent and perhaps blatantly illegal cases. For all Facebook gets criticized for censoring stuff or not censoring stuff i can find pretty vile stuff at with very easily on it so i think that what we're actually seeing right now is not even much in the way of censorship so much as just hiding it from view and that the rapprochement in the like in the tension you describe is going to come from people just pretending that stuff doesn't exist the bad stuff doesn't exist which honestly is
Starting point is 00:18:12 the traditional way we've all we've always done it uh that our problem seems to be less with um with with these points of view existing than of us being reminded of them and having them shoved in our face. Well, one point you make at various points in the book is that the companies, I mean, let's say Facebook or YouTube or Twitter as examples, have much less control than their users imagine, right? So that Mark Zuckerberg can't actually stamp out misinformation, even if he wanted to, and even if that was, even if he could accurately target misinformation as misinformation and not commit his own errors of propagating misinformation in the process,
Starting point is 00:18:58 even an omniscient Zuckerberg can't actually affect the censorship change he might want to affect. There are only coarse-grained mechanisms available. In the run-up to the 2020 election, they did ban all political advertising. That can be done. But to ban only misinformation, well, that's a relative... Well, it's a, A, you have to get people to agree on what misinformation is. That's tricky enough already. B, you need to somehow algorithmically determine whether something is misinformation or not. And that's what I'm saying you're never going to do to enough of a degree that you're going to stamp it out. Because, yeah, you know, it becomes like fighting censorship.
Starting point is 00:19:44 China has effectively been trying to do this for for decades and with only mixed success they really do have an army of government censors online and still stuff gets through non-stop so and you know we don't even want to i hope a lot of people would would at least agree we don't want to get to China's level, even while I think we can also say that pure anarchy and pure hyper-libertarianism creates an environment that almost nobody wants to exist in. Yeah, well, I think for me, and I could be mistaken about this, but the distinction that has seemed relevant up until this point has been encapsulated by somebody's phrase, I forget who coined this, but to say that freedom of speech is not freedom of reach, right? Which is to say that there's a distinction between the political freedom to say whatever you want, whenever you want, which is enshrined in the First
Starting point is 00:20:43 Amendment with some specific exceptions, and which, you know, I'm want, which is enshrined in the First Amendment with some specific exceptions, and which, you know, I'm totally happy to defend. I don't think people should be thrown in prison for saying things we don't like, and even in most cases saying things that are untrue. But being able to freely speak and write and publish in ordinary channels is not the same thing as being free to have your speech algorithmically boosted because we have built a machine that preferentially amplifies misinformation and disinformation and outrage. And, you know, I mean, this comes back to the original sin or what many people consider to be the original sin of the Internet, which is the ad-based attention gaming business model. If you break that link between the freedom to say anything and the machinery of amplification, that has been the bright line that many of us have been trying to focus on.
Starting point is 00:21:43 But do you agree with that? Or is it more complicated than that? I think it is more complicated than that. I mean, I think you're totally right. But I also think that the machinery of amplification is changing in ways that we've only begun to grasp. That, you know, after the 20th century of top-down general, you know, a 20th century of top-down general broadcast media where the overall shape of the narrative, even if there were disagreements within it, was set by a small number of elite players. We're now seeing that that's no longer the case and you can actually have a bottom-up generation of a narrative because you've seen narratives that, while they may benefit one political party or another, are definitely not created by that political party because they
Starting point is 00:22:30 carry certain liabilities with them. I don't know if I should name them or not, but I think you can know what I'm talking about here. And it's because of those feedback loops that you no longer need some sort of shepherd or demagogue to start generating to start generating an entire narrative landscape that then reinforces itself because you've got you know you've got these mega nets that are bringing people like-minded people together and just uh having them say yeah you're right and what about this and building up a corpus or a lore or whatever independent of of you know what we think of as traditional social societal elder influences and so you know what is amplification you know having been associated having had served my time in i guess vaguely traditional old media, new media elite circles, their power is dissipating.
Starting point is 00:23:28 They definitely have less power than they used to. And I think that no matter where you are in the spectrum, you tend to think that other people have more power to amplify than you do, because I think everybody is seeing their power decrease or no one feels that they have enough. So that if you say, see the New York Times dissing your point of view, you take that as a societal disapproval, even though the New York Times is really no longer the paper of record in the way that it was 50 years ago. So I think there's difficulty even assessing what amplification is and who's getting amplified, that we don't generate, you know, we couldn't generate, Harry Belafonte just died. He was, before Michael Jackson, he was everybody owned Calypso, his Calypso record. I don't think we have the mass media machinery to generate that sort of unity anymore because there's no selection pressure.
Starting point is 00:24:28 It's not that one particular product or narrative has to win. Everybody can win. Yeah, it's an interesting point because it's pre-internet. If you were going to start something like QAnon or some other cult of conspiracy, it had to have been much harder to do. It's not that it was impossible, but I mean, you'd have to physically congregate with people. You'd have to have a QAnon conference, and then you'd be meeting people in the flesh and seeing all of the visual and palpable evidence of their crackpottedness, presumably. Right. So you said it. Yeah. So yeah, it was QAnon that I was talking about. And exactly,
Starting point is 00:25:11 it's like the QAnon has certainly brought some benefit to the Republican Party. But do I in any way think that the traditional Republican elite decided that they wanted QAnon to be a thing? No, I think they would have shaped it very differently had they had the option, because it carries with it some severe liabilities that they have to deal with. Well, so maybe we should talk, well, let's bring in the AI piece before we talk about remedies here. Like almost everyone, or probably everyone who wrote a book that talked about AI and published it any time earlier than last week, I would imagine some of what you say about large language models and deep learning might feel a little dated. Is there anything you would want to modify now, given what's happened with GPT-4? I mean, you mentioned GPT-3 in the book, so you're sort of up to the
Starting point is 00:26:06 minute there. But I think you were very skeptical of the ability of these large language models to process speech effectively. And I mean, are they going to be more powerful than you expected? Or what are your thoughts about AI at the moment? Honestly, I stand by what I said 100%. I think that they have the same feelings. They are the equivalent of the old horse, Clever Hans, that was very good at being cued by people and responding in convincing ways, but couldn't actually do math. My opinion is that these large language models are incredible, and they are incredible at producing content, which I do say in the book. What they are bad at is actually behaving with genuine understanding
Starting point is 00:26:57 because they don't have it. So I actually think that I've been, I think it holds up pretty well. I will defend it at length, actually. Yeah. And some of the weirdness that we're seeing, also the fact that these AIs are clearly behaving in ways that weren't intended by their creators. When that New York Times author got freaked out by the Microsoft Sydney chatbot that wanted to release nuclear codes. Yeah. Well, if you look back at it, you can see that he was cuing it.
Starting point is 00:27:26 Yeah. I had someone say, oh, I'd feel so much better if it wrote about world peace. And I said, I can get it to talk about world peace, talk about sunshine, lollipops and rainbows.
Starting point is 00:27:33 And why was it so uncanny? It's because it's been seeded. It's been trained on all the collective, conscious and unconscious writings of humanity so that when we said, oh, what would a horrible, what would you do if you were horrible AI? It parroted back the exact most common nightmares that humans have. And then
Starting point is 00:27:54 as we write a bunch of stories about it, that feeds back into the next iteration of these chat bots and it feeds them back to us. So no, I think that the LLMs are pretty much about where I expect them to be, and I do not see them getting past that to a point of achieving what could be called reasoning or true cognition anytime soon, even language models will be pernicious or benign or beneficial in the near term? I mean, are you optimistic or pessimistic about the near-term effects? I mean, let's leave AGI and singularities and other concerns aside. Just give me your sense of the next six to 12 months with respect to the kinds of problems we've been talking about with Meganets. What will AI do to help or hurt the situation? I actually think that in the next
Starting point is 00:29:01 year or so, things will not change that much because it's going to take some time to start deploying these AIs in increasing numbers of contexts. So in the very short term, I think it'll continue to be a novelty and people will tear their hair out, but it's going to take a couple more years before you start seeing it deployed to generate content, to help people generate content, you know, to work in collaboration with humans, which is, I think, where you will see a big difference. That if you have a human assisting an AI, this human provides the actual reason and the AI provides the frosting, as it were, that you're going to see. And moreover,
Starting point is 00:29:41 you're going to see increasing cases where even if this thing doesn't actually, even if these things don't actually think, people will believe that they think. That's where, that's where you're going to see the biggest changes is on the human side. And again, that gets back to my theme that the human aspect of this is just as important as the machine aspect of it. That in some ways ways creating a machine that convinces people that it's thinking is, if not as much of an achievement, is certainly as big a deal as if you created a machine that actually does think. And that goes back to, you know, Eliza, which in the 60s was tricking people into thinking that it was an actual therapist that cared about their
Starting point is 00:30:21 feelings. Well, this is the supercharged version of it because it's much better than Eliza, but it's not new for these Turing tests to supposedly be passed, especially if you really want it to be passed. There was that company, I think that was marketing virtual girlfriends and boyfriends as chatbots and people got really upset when they shut off the romantic language. I don't know. Did you write about this? I forget the company's name. I did hear about this. Yeah. I forgot the company. Yeah. That may again, may take another 12 months, but you're going to start seeing this. The human desire for company, for pets, it's like Tamagotchis, that that's going to manifest itself. And the more that we can embody them in one way or another, the better it will be. So even though you won't be able to have a conversation with them that feels convincingly human, at least not if you're looking at it skeptically, you can still have something that behaves on the order of, say, a pet. And if it's human enough, maybe you can feel romantically towards it.
Starting point is 00:31:24 But what's the distinction? You'll also see massive downward pressure on content creation. You're already seeing content farm being generated, but it's going to get much easier to generate AstroTurf or whatever in huge amounts. And at the point where you can start generating news articles based on press releases, there will be what's already been a downward pressure on content generation will get even lower. And that'll spread to video as AI generation of video and sound gets better as well. Aren't you expecting the spamification of everything where at a certain point, most of the content on the internet will be AI generated, whether it's text or video or audio. And then we'll have this persistent problem with the not
Starting point is 00:32:11 knowing what is in fact real. I mean, when you won't, you won't know whether an image is real, you won't know whether a video is real. You'll be, you'll be reading news articles that you're pretty sure were written by entirely by AI. Oh, absolutely. I mean, to some extent, that's already true on Twitter. It's not a lot of tweets you can't quite tell, and you're just going to see that phenomenon grow and grow. In 10 years' time, it's not going to be easy to determine whether a video on site is real or manufactured. And that gets back to what I think you said about judging reality. And I think that what's going to happen is people are going to have different versions of reality because with so much abundance of information out there, you can find stuff to support your
Starting point is 00:32:55 version of reality. If you want a reality in which QAnon is true, it's going to become easier and easier to just shore it up. So what do you imagine the effect will be? I imagine many of us will just declare something like epistemological bankruptcy with respect to the Internet and want to read old books more of the time. How do you imagine we deal with an absolute tsunami of fake and a half-fake or otherwise unreliable information? Well, you know, a lot of people believe that there were WMDs in Iraq for quite a while. So people can hold on to their beliefs quite rigorously, especially if they're in a community of people who agree on them. If it's just you in isolation, I totally can agree declaring them an intellectual bankruptcy,
Starting point is 00:33:53 but skepticism is hard to maintain. It takes a lot of effort. And I say this as someone who's predisposed towards it, that the comfort of being around people who think the way that you do. And, you know, when I was, honestly, I probably saw a lot of this in academia because academia is, because it's a shrinking environment. Academics are very much in competition with each other. And so the sort of enforcement of a certain purity and hothouse removal from the world has gotten larger and larger, But that doesn't make people, as long as there's an incentive for them to keep believing what they're believing, they'll do it. And as long as you're getting social approval for believing those things,
Starting point is 00:34:35 I figure you probably will stay online. What I do think will happen is that these, I call them narrative bunkers. It's beyond filter bubbles because it's not just you only see, it's that you're actually in a community of people who are actually reinforcing certain assumptions about the world. You can have disagreements about it, but the assumptions are the same way, in the same way that if you
Starting point is 00:34:57 watch, say, I don't know, Fox News for a week, even if you disagree with everything you see, you will start to take their narrative frame into account. And that's what's going to happen. You're going to see this divergence and factionalization of narrative frames. And increasingly, you won't even be able to understand what people in other narrative frames are saying. I feel like this already happens to some extent, that you see people in sort of the bay area tech scene compared to people in say the new york media scene or you know people who complain about san francisco
Starting point is 00:35:29 becoming a living hellhole on earth take your pick that all all these people are working with such assumptions about things they've never seen and perhaps this was always the case to a point but it's only growing stronger i was in seattle a few weeks ago and i was talking to a point, but it's only growing stronger. I was in Seattle a few weeks ago, and I was talking to a couple of people about the, do you remember when there were like the Seattle protests and they formed that autonomous zone? And just from reading reports online, it was like, according to some people, it was a dystopian wasteland, according to others. If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.
Starting point is 00:36:17 The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.
