Making Sense with Sam Harris - Making Sense of Social Media and the Information Landscape | Episode 8 of The Essential Sam Harris

Episode Date: May 5, 2023

In this episode, we examine a series of Sam's conversations centered around social media's impact on the information landscape. We begin with Sam's second conversation with Tristan Harris, which was conducted shortly after the release of Tristan's documentary, The Social Dilemma. The documentary lays out Tristan's thesis on how social media is causing the deterioration of both individual and societal welfare. Author and technologist Jaron Lanier follows, echoing Tristan's concerns and shifting the conversation to social media's unique business model, addressing how perverse incentives reliably produce such detrimental outcomes. We then hear from Jack Dorsey, the former CEO of Twitter. Sam and Dorsey's conversation took place when Dorsey was still working at Twitter, and Sam still had an account. However, the questions they pose—relating to issues of content moderation and corporate transparency—are even more relevant today. Next, psychologist Jonathan Haidt presents the alarming findings from his research on the psychological effects of social media, detailing how teenage girls are bearing the brunt of a mental health crisis. Shifting to a more political lens, Sam and Cass Sunstein discuss Sunstein's book, #Republic, and Sunstein addresses one of Sam's most pressing fears of the last seven years: how social media is warping our opinions on politics. We then narrow in on this issue, with Zeynep Tufekci explaining the real-life consequences of social media's influence on protest movements. Finally, Sam and technology analyst Nina Schick dive into one of the most urgent concerns of the AI boom: deepfakes and how they might be weaponized to further pollute and degrade our information landscape.

About the Series

Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Welcome to The Essential Sam Harris. This is Making Sense of Social Media and the Information Landscape. The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam Harris into specific areas of interest. This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments,
Starting point is 00:01:16 the various explorations and approaches to the topic, the relevant agreements and disagreements, and the pushbacks and evolving thoughts which his guests have advanced. The purpose of these compilations is not to provide a complete picture of any issue, but to entice you to go deeper into these subjects. Along the way, we'll point you to the full episodes with each featured guest, and at the conclusion, we'll offer some reading, listening, and watching suggestions, which range from fun and light to densely academic. One note to keep in mind for this series. Sam has long argued for a unity of knowledge where the barriers between fields of study are viewed as largely unhelpful artifacts of unnecessarily partitioned thought.
Starting point is 00:01:58 The pursuit of wisdom and reason in one area of study naturally bleeds into, and greatly affects, others. You'll hear plenty of crossover into other topics as these dives into the archives unfold. And your thinking about a particular topic may shift as you realize its contingent relationships with others. In this topic, you'll hear the natural overlap with theories of moral and political philosophy, belief and unbelief, free will, and artificial intelligence. So, get ready. Let's make sense of social media and the information landscape. One very important update before we jump into this compilation. Since the initial writing and recording of this episode, Sam quit Twitter entirely. He recorded a solo episode entitled, Why I Left Twitter, which explains his reasoning and thought process for that decision. We, of course, recommend listening to that,
Starting point is 00:02:59 in addition to the included conversations here. The knowledge that Sam eventually walked away from social media platforms places an interesting lens over his conversations on the subject from the previous decade. Though, as you'll hear, this compilation extends the considered issues of social media well beyond personal engagement and into the many ways in which the surveillance economy generally has warped our politics, social relations, and moral psychologies. Why I Left Twitter is episode 304. And now, back to the episode. Social media is one of those topics that everyone seems to have strong opinions about. That fact in itself, the idea that our feelings on just about everything seem to have gotten stronger,
Starting point is 00:03:52 inflamed by the advent of social media, is something we'll fold into the discussion during this compilation. Like just about all of us, Sam has gone through, and continues to go through, a strained relationship with social media. Apparently, even the most practiced meditators can be hijacked by algorithms that target our propensity for outrage, adulation, annoyance, and disgust. The beast of social media is strong. But of course, social media also has positive potential and its own success stories. There are unignorable societal benefits that must be evaluated and considered. This compilation contains plenty of critique and perspective regarding the darker sides of social media and the economic model which has provided its scaffolding, but the criticism should not
Starting point is 00:04:43 completely crowd out or invalidate the defenders and believers in its positive possibilities. We're also going to be situating the social media question in a broader context of the business model which enables it, something that's been called surveillance capitalism by its critics and personalized advertising by its more supportive advocates. We'll also be zooming in on some of the specific technologies upon which all of this is built. In the introduction to an episode with Jaron Lanier, which is included in this compilation, Sam identified three main lines of inquiry for this topic. The first is economics and the question of incentives in the face of automation and
Starting point is 00:05:26 artificial intelligence. The second is politics and the question of how we can cooperate and cohere on ideas in a space where truth is being hollowed out. And the third is psychology and how our attention and well-being are being assaulted by the power of the surveillance economy and social media. We'll be wandering through that same roadmap through these clips. There won't be a ton of deep philosophical lessons and thought experiments to walk you through in this episode. Much of what we'll be tackling is plain for most people to see and experience for themselves. We're going to hear Sam's conversations with defectors from the ranks of the architects of information ecosystems in Silicon Valley, like Tristan Harris and Jaron Lanier.
Starting point is 00:06:10 We're going to hear some of Sam's conversation and pseudo-interrogation of Jack Dorsey himself, the co-founder of Twitter, who was also its CEO at the time of their conversation. their conversation. And we're going to hear from authors like Jonathan Haidt and Cass Sunstein, who have studied and continue to investigate the impacts of social media on individuals and the health of democracy. We're also going to broaden our lens and listen in on a conversation with Zeynep Tufekci, an author who focuses on global movements and geopolitics, and consider how social media fuels, diverts, or otherwise confuses political efforts. And finally, we're going to tiptoe into the emerging deepfake technology, which threatens to pour even more fuel on the fire of the collapsing integrity of global information. So let's start with Sam talking to Tristan Harris. So let's start with Sam talking to Tristan Harris.
Starting point is 00:07:06 No relation, by the way. Tristan has been called the closest thing Silicon Valley has to a conscience. Tristan had just appeared in a documentary entitled The Social Dilemma when he spoke to Sam. So much of their conversation references the film, which is certainly recommended viewing for this topic. Tristan has been laser-focused on the problems of social media after spending years working as a designer for Google and seeing firsthand the potent attention-harnessing techniques that lurk behind the apps on your phone. If you listen to our compilation about artificial intelligence, you'll be familiar with a concern
Starting point is 00:07:43 about our strengths and competencies being squashed by technology. Here, you'll hear Tristan flip that concern around with a sharp observation. Tristan has appeared on Making Sense twice. This is from the more recent conversation from episode 218, Welcome to the Cult Factory. Welcome to the Cult Factory. going to unspool. Well, it's funny because the film actually opens with that prompt, the blank stares of many technology insiders, including myself, because I think it's so hard to define exactly what this problem is. There's clearly a problem of incentives, but beneath that, there's a problem of what those incentives are doing and where the exact harms show up. And the way that we frame it in the film and in a big presentation we gave at the SF Jazz Center back in April 2019 to a bunch of the top technologists and people in the film and in a big presentation we gave at the SF Jazz Center back in April 2019 to a bunch of the top technologists and people in the industry was to say that while we've all
Starting point is 00:08:50 been looking out for the moment when AI would overwhelm human strengths and when we would get the singularity, when would AI take our jobs, when would it be smarter than humans, we missed this much, much earlier point when technology didn't overwhelm human strengths, but it undermined human weaknesses. And you can actually frame the cacophony of grievances and scandals and problems that we've seen in the tech industry from distraction to addiction, to polarization, to bullying, to harassment, to the breakdown of truth, all in terms of progressively hacking more and more of human vulnerabilities and weaknesses. So if we take it from the top, you know, our brain's short-term memory system have seven plus or minus two things that we can hold. When technology starts to overwhelm
Starting point is 00:09:34 our short-term and working memory, we feel that as a problem called distraction. Oh my gosh, I can't remember what I was doing. I came here to open an email. I came here to go to Facebook to look something up, but now I got sucked down into something else. That's a problem of overwhelming the human limit and weakness of just our working memory. When it overwhelms our dopamine systems and our reward systems, we feel that as a problem called addiction. When it taps into and exploits our reliance on stopping cues that at some point I will stop talking and that's a cue for you to keep going. When technology doesn't stop talking and it just gives you the independent bottomless bowl, we feel that as a problem called addiction or addictive use. When technology exploits our
Starting point is 00:10:12 social approval and giving us more and more social approval, we feel that as a problem called teen depression because suddenly children are dosed with social approval every few minutes and are hungry for more likes and comparing themselves in terms of the currency of likes. And when technology hacks the limits of our heuristics for determining what is true, for example, that that Twitter profile who just commented on your tweet five seconds ago, that photo looked pretty real. They've got a bio that seems pretty real. They've got 10,000 followers.
Starting point is 00:10:39 We only have a few cues that we can use to discern what is real. And bots and deepfakes, and I'm sure we'll get into GPT-3, actually overwhelm that human weakness. So we don't even know what's true. So I think the main thing that we really want people to get is through a series of misaligned incentives, which we'll further get into, technology has overwhelmed and undermined human weaknesses. And many of the problems that we're seeing as separate are actually the same. Just one more thing on this analogy. It's kind of like, you know, collectively, this digital fallout of addiction, teen depression, suicides, polarization, breakdown of truth. We think of this as a collective digital fallout or a kind of climate change of culture that much like the, you know, oil extractive economy that we have been living in an extractive race for attention.
Starting point is 00:11:23 There's only so much. When it starts running out, we have to start fracking your attention by splitting your attention into multiple streams. I want you watching an iPad and a phone and the television at the same time because that lets me triple the size of the attention economy. But that extractive race for attention creates this global climate change of culture. And much like climate change, it happens slowly. It happens gradually. It happens chronically. It's not this sudden immediate threat. It's this slow erosion of the social fabric. And that collectively, we called in that presentation human downgrading, but you can call it whatever you want. The point is that if you think back to the climate change movement, before there was climate change as a cohesive understanding of emissions and linking to
Starting point is 00:12:03 climate change, we had some people working on polar bears, some people working on the coral reefs. We had some people working on species loss in the Amazon. And it wasn't until we had an encompassing view of how all these problems get worse that we start to get changed. So we're really hoping that this film can act as a kind of catalyst for a global response to this really destructive thing that's happened to society. Okay, so let me play devil's advocate for a moment using some of the elements you've already put into play, because you and I are going to impressively agree throughout this conversation
Starting point is 00:12:36 on the nature of the problem. But I'm channeling a skeptic here, and it's actually not that hard for me to empathize with a skeptic because, as you point out, it really takes a fair amount of work to pry the scales from people's eyes on this point. And the nature of the problem, though it really is everywhere to be seen, it's surprisingly elusive, right? So if you reference something like a spike in teen depression and self-harm and suicide, there's no one who's going to pretend not to care about that. And then it really is just the question of what's the causality here and is it really a matter of exposure to social media that is driving it. And I don't think people are especially skeptical of that. And that's a discrete problem that I think most people would easily understand and be concerned about. But the more general problem for all of us is harder to keep in view. So when you talk about things, again, these are things you've already conceded in a way. So attention has been a finite resource always, and everyone has always been competing for it. So if you're going to publish a book, you are part of this race for
Starting point is 00:13:53 people's attention. If you were going to release something on the radio or television, it was always a matter of trying to grab people's attention. And as you say, we're trying to do it right now with this podcast. So when considered through that lens, it's hard to see what is fundamentally new here, right? So yes, this is zero-sum. And then the question is, is it good content or not? I think people want to say, right? This is just a matter of interfacing in some way with human desire and human curiosity. And you're either doing that successfully or not. And what's so bad about really succeeding, you know, just fundamentally succeeding in a way that, yeah, I mean, you can call it addiction, but really it's just what people find captivating. It's what people want to do. They want to grant
Starting point is 00:14:42 their attention to the next video that is absolutely enthralling. But how is that different from leafing through the pages of a hard copy of Vanity Fair in the year 1987 and feeling that you really want to read the next article rather than work or do whatever else you thought you were going to do with your afternoon. So there's that. And then there's this sense that the fact that advertising is involved and really the foundation of everything we're going to talk about, what's so bad about that? So really, it's a story of ads just getting better. I don't have to see ads for Tampax anymore. I go online and I see ads for things that I probably want or nearly want because I abandoned them in my Zappos shopping cart. So what's wrong with that? And I think most people are stuck in that place. We have to do a lot of work to bring them into the place of the conversation where the emergency becomes salient. And so let's start there. Gosh, there's so much good stuff to unpack here. So on the attention economy, obviously,
Starting point is 00:15:55 we've always had it. We've had television competing for attention, radio, and we've had evolutions of the attention economy before. Competition between books, competition between newspapers, competition between television to more engaging television to more channels of television. So in many ways, this isn't new. But I think what we really need to look at is what was mediating, where that attention went to. Mediating is a big word. Smartphones, we check our smartphones, you know, a hundred times or something like that per day. They are intimately woven into the fabric of our daily lives. And ever more so because of we pre-establish addiction or just this addictive checking that we have that
Starting point is 00:16:29 any moment of anxiety, we turn to our phone to look at it. So it's intimately woven into where the attention starting place will come from. It's also taken over our fundamental infrastructure for our basic verbs. Like if I want to talk to you or talk to someone else, my phone has become the primary vehicle for just about for many, many verbs in my life, whether it's ordering food or speaking to someone or figuring out where to go on a map. We are increasingly reliant on this central node of our smartphone to be a router
Starting point is 00:17:00 for where all of our attention goes. So that's the first part of this intimately woven nature and the fact that it's our social, it's part of the social infrastructure by which we rely on. We can't avoid it. And part of what makes technology today inhumane is that we're reliant on infrastructure that's not safe or contaminated for many reasons that we'll get into later. A second reason that's different is the degree of asymmetry between, let's say, that newspaper editor or journalist who is writing that enticing article to get you to turn to the next page versus the level of asymmetry of when
Starting point is 00:17:30 you watch a YouTube video and you think, yeah, this time I'm just going to watch one video and then I got to go back to work. And you wake up from a trance, you know, two hours later and you say, man, what happened to me? I should have had more self-control. What that misses is there's literally the Google's billions of dollars of supercomputing infrastructure on the other side of that slab of glass in your hand, pointed at your brain, doing predictive analytics on what would be the perfect next video to keep you here. And the same is true on Facebook. You think, okay, I've sort of been scrolling through this thing for a while, but I'm just going to swipe up one more time,
Starting point is 00:18:02 and then I'm done. Each time you swipe up with your finger, you're activating a Twitter or a Facebook or a TikTok supercomputer that's doing predictive analytics, which has billions of data points on exactly the thing that'll keep you here. And I think it's important to expand this metaphor in a way that you've talked about, I think in your show before, about just the power, increasing power and computational power of AI. When you think about a supercomputer pointed at your brain, trying to figure out what's the perfect next thing to show you, that's on one side of the screen. On the other side of the screen is my prefrontal cortex, which has evolved millions of years ago and doing the best job it can to do
Starting point is 00:18:37 goal articulation, goal retention and memory and sort of staying on task, self-discipline, etc. So who's going to win in that battle? Well, a good metaphor for this is, let's say you or I were to play Garry Kasparov at chess. Like, why would you or I lose? It's because, you know, there I am on the chessboard and I'm thinking, okay, if I do this, he'll do this. But if I do this, he'll do this. And I'm playing out a few moves ahead on the chessboard.
Starting point is 00:19:00 But when Garry looks at that same chessboard, he's playing out a million more moves ahead than I can, right? And that's why Gary's going to win and beat you and I every single time. But when Gary, the human, is playing chess against the best supercomputer in the world, no matter how many million moves ahead that Gary can see, the supercomputer can see billions of moves ahead. And when he beats Gary, who is the best human chess player of all time, he's beaten like the human brain at chess, because that was kind of the best one that we had. And so when you look at the degree of asymmetry that we now have, when you're sitting there innocuously saying,
Starting point is 00:19:34 okay, I'm just going to watch one video and then I'm out, we have to recognize that we have an exponential degree of asymmetry, and they know us and our weaknesses better than we know ourselves. of asymmetry, and they know us and our weaknesses better than we know ourselves. That part of the conversation sets the stage for us well, but we recommend a full listen to that episode, as Sam continued to skillfully play devil's advocate throughout, and allowed Tristan to flesh out the nuanced and complex considerations. But Tristan remains steadfast in his effort to sound the alarm about the power of algorithms to target our weaknesses, and so that's where we're going to stay in this trek through social media. You heard Sam, while channeling a skeptical view, point to the economic model that serves as the oxygen that keeps the social media monsters
Starting point is 00:20:21 breathing. Advertising. Here is an open question for the health of democracy and individual psychology. Is there a point when advertising can become too effective? And has social media pushed us over that threshold? Advertising is certainly nothing new, of course, and the profit motive has always encouraged persuasion and attention-grabbing wherever possible. But turn back the clock a few hundred years and imagine a handcrafted, colorfully painted wooden sign hanging above a rival blacksmith shop in a town square, and compare its influence to a perfectly timed, personalized, targeted advertisement that was crafted and custom-molded to your taste in music, attraction, color preference, current mood, political persuasion, and just about everything else.
Starting point is 00:21:15 The latter does seem to suggest a deep shift in the power to persuade effectively. If there is something like an objective measure of the effectiveness of persuasion that immorally encroaches on a notion of personal autonomy, it's fair to wonder if we've blown right past it. There's an old adage in marketing that goes like this. I know I'm wasting half of my marketing budget.
Starting point is 00:21:39 I just don't know which half. That built-in uncertainty might be eroding in the face of data collecting machines which promise more and more of a sure thing to advertisers. To explore this area a bit more, we're going to hear from Jaron Lanier. Lanier is a computer scientist and Silicon Valley pioneer who launched virtual reality companies in the mid-80s. He was part of an early wave of bright-eyed, in the mid-80s. He was part of an early wave of bright-eyed, idealistic technologists, and he's among those who have since begun to question what they may have been missing. When he spoke with Sam, he had just written a book which was not shy about its suggestion.
Starting point is 00:22:22 It was called, Ten Arguments for Deleting Your Social Media Accounts Right Now. For this compilation, we're going to be tapping this interview for Lanier's thoughts on the economic models that have run amok on the internet, and listen in on some of his nascent suggestions on how different models might improve the situation. We'll start with Sam and Lanier revisiting the early days of Silicon Valley and the seemingly uncontroversial notion that information should be free. This is from episode 136, Digital Humanism. Many of the worst decisions we've made here, and this is something you point out in your books, in creating this technology, are not, on their face, bad decisions.
Starting point is 00:23:00 I mean, they're certainly not sinister decisions. And one of the first decisions we've made is around this notion that information should be free. And that just seems like a very generous and idealistic way to start. It just seems quite noble. So perhaps we can start here with the digital economy. What could possibly be wrong with information being free? Right. Well, this idea that information should be free was held in the most profound and intense way. It was something that was believed so intensely during a period starting in the 80s. And in some ways, it still holds for a lot of people. And to defy that was very, very difficult. It was painful
Starting point is 00:23:52 for my friends who couldn't believe that I was defying it. It was painful for me. I did lose friends over it. And on its face, it sounds very generous and fair and proper and freeing, but there are problems with it that are so deep as to, I think, threaten the survival of our species. It's actually a very, very, very serious mistake. So the mistakes happen on a couple of levels here. I would say the first one has to do with this idea that information is totally weightless and intrinsically something that's free in an infinite supply. And that's not true because information only exists to the degree that people can perceive it and process it and understand it. It ultimately only has a meaning when it grounds out as human experience. The slogan I used to have back in the 80s when we were first debating these things is that information is alienated experience, meaning information is similar to stored energy that can be released. You put energy into a battery, then you can release it, or you lift up a weight, and then you let go of the weight, and it goes back down, and you've released the energy that was stored.
Starting point is 00:25:06 And in the same way, information ultimately only has meaning as experience at some point in the future. And the problem with experience, or maybe the benefit of experience, is that it's only a finite potential. You can't experience everything. And so therefore, if you make the mistake of assuming that information is free, you'll have more information than you can experience. And what you do is you make yourself vulnerable to what we could call a denial of service
Starting point is 00:25:34 attack in other contexts. So a denial of service attack means that malicious people send so many requests to a website that it's effectively knocked out off the web. You can't reach it anymore. And every website that you use reliably actually has to go through this elaborate structure of other resources created by companies like Akamai that defend it from denial of service attacks, which are just infinitely easy to do. But in the same way, when you have services like Twitter or Facebook where anybody can post anything without any cost to themselves and there's no postage on email and everything
Starting point is 00:26:12 can just be totally filled up with spam and malicious bots and crap to the point where reality and everything good about the world gets squeezed out and you end up amplifying the worst impulses of people. There's no such thing as a free lunch. There's no such thing as free information. There's no such thing as infinite attention. There has to be some way that seriousness comes into play if you want to have any sense of reality or quality or truth or decency. And unfortunately, we haven't created a world in which that's so. But then there's a flip side to it, which is equally important, which is we've created this world in which we're talking about technology often as something that's, if not opposed to humanity, opposed to most of humanity. So there's a lot of talk, and a lot of this comes from really good technologists.
Starting point is 00:27:06 So it's not from malicious outsiders who are trying to screw us up. It's our own fault where we'll say, well, a lot of the jobs will go away because of artificial intelligence and our robots. And that might either be some extreme case where super intelligent AI takes over the world and disposes of humanity,
Starting point is 00:27:23 or it might just be that only the most elite, smart, techie people are still needed and everybody else becomes this burden on the state and they have to go on some kind of basic income. And it's just a depressing, it's like everybody's going to become this useless burden. And so even if that means, oh, we'll all get basic income, we won't have to work for a living, there's also something fundamentally undignified, like you won't be needed. And any situation like that, it's just bound to be a political disaster and an economic disaster on many levels we can go into if it isn't obvious. But the thing to see is that this economic hole that we seem to be driving
Starting point is 00:28:01 ourselves into is one in the same as the information wants to be free. Because the thing is, ultimately, all these AIs and robots and all this stuff, they run on information that at the end of the day has to come from people. And each instance is a little different, but for a lot of them, there's input from a lot of people. And I can give you some examples. So if we say that information is free,
Starting point is 00:28:21 then we're saying in the information age, everybody's worthless because what they can contribute is information. The example I like to use as just an entry point to this idea is the people who translate between languages. So they've seen their careers be decimated, their tenth of what they were, in the same way that recording musicians and in the same way that recording musicians and investigative journalists and many other classes of people who have an information product, they've all been kind of reduced under this weird regime we've created. But the thing is, in order to run the so-called AI translators that places like Bing and Google offer, we have to scrape tens of millions of examples from real
Starting point is 00:29:06 life people translating things every single day in order to keep up with slang and public events. Language is alive. The world is alive. You can't just stuff a language translator once. You have to keep on refilling it. And so we're totally reliant on the very people that we're putting out of work. So it's fundamentally like a form of theft through dishonesty. Okay, so we've hit the ground running here. I want to back up for a second and try to perform an exorcism on some bad intuitions here, because I think people come into this, we've trained ourselves to expect much of our digital content to be free and free forever. And it now seems just the normal state of the world. And of course, podcasts and blogs and journalism and ultimately music should be free.
Starting point is 00:29:55 Or if it's not free, it should be subsidized by ads. And I think there's this sense that TV and radio were free, so there's this precedent. And advertising has its excesses, but I think people feel, what's wrong with ads? Some ads are kind of cool looking and amusing and stylish, and we've lived with them forever. And then there's these other elements like having a personalized news feed. What's wrong with that? Why can't Facebook just give me what I want? So let's just bring the concept of or the role of ads back in here. So most people have decided that in the face of this, the way to monetize work and inspire
Starting point is 00:30:40 good work is to build an ad economy. And this answers the need to have information be free to all of the young people who want to get it that way. And now we who used to be young still want to get it that way. And this is something that many of us are fighting against who've been paying attention to the consequence of relying on ads. And, you know, I've decided that I can't credibly read ads on this podcast. I know that you're more sanguine about the state of podcasting than most forms of media at the moment. And I should say is that for, you know, many podcasters, because I've taken a position against ads on my own podcast, many people come to me wanting to do the same. And the truth is, I don't
Starting point is 00:31:26 actually even know what to tell other podcasters at this point, because I think I'm an outlier in this space where it works for me. I found an audience who, and some percentage of the audience will support this work. But it seems to me by no means straightforward to say that any podcaster who wants to will find an audience to support their work. And I think given the current expectations, I think anyone who does decide to forego ads will be paying an economic price for doing that with whatever audience at whatever scale, given the expectation that podcasts should be free. So it's kind of hard to advise people, even when I'm successfully implementing an ad-free model here.
Starting point is 00:32:12 Well, I need to correct you about something. My objection is not to advertising, but to continuous behavior modification by algorithm, which is really a very different thing. So... Well, it overlaps in one case in that, well, I'm worried as a podcaster about the behavior modification or the perceived behavior modification that can happen to me as just a broker of information. I don't, you know, it's like a credibility concern. Given what I'm trying to do here, I don't feel that I can personally shill for any products. But I think other podcasters can. I
Starting point is 00:32:54 think it's completely convergent with the brand of other podcasters to say, listen, here's the greatest t-shirt I've ever found. You're going to want this T-shirt. And that works for people who... I know, I've heard some really... Listening to some of the podcasters have to read their ads when it's clearly bizarre. It's actually kind of entertaining. But the thing is, as long as every listener hears the same ad... Right. And everybody can understand what's going on, that's okay.
Starting point is 00:33:21 I mean, the reason podcasting is still, in my view, an unmolested, authentic medium is that there aren't algorithms calculating what somebody hears on a podcast. It's crafted by you, and if it includes ads, people can tell it includes ads. There isn't some meta podcast that's taking snippets and creating a feed for people. There isn't some algorithm that's, at least so far, that is changing what you say with audio signal processing technology to suit the needs of somebody who's paying from the side, some advertiser. There's not a calculation of a feed designed by behaviorist theorists to change people. And as long as it's just a crafted thing,
Starting point is 00:34:13 even if it includes commercial communication, I don't think it destroys society. I think it does start to destroy society when everything becomes really manipulative and creepy in a way that people can't possibly follow or understand. Then it starts to undermine human dignity and self-determination. And that's exactly what's going on with social media companies and the way searches run and the way YouTube videos are selected for you and fed to you and many other examples. And that's where we really have the most serious problem. So what is the solution now? If you could reboot the internet,
Starting point is 00:34:53 how would you do it? I would do a few things. The first thing I would do is encourage everybody involved to gradually bring money back into the world of information instead of expunging it. And I think people should be able to earn a living when what they add to the network is valuable. I mean, right now we're creating the most valuable companies in history based on the information that people add to them. And meanwhile, we're creating more and more economic separation, more and more inequality. And obviously that can't go on forever. And the only way to correct it is to start paying the people who are adding the information, that's the value, and grow the pie.
Starting point is 00:35:32 It doesn't mean that I think the big tech companies should be shut down or that they're evil. I actually kind of like a lot of them. It just means that we have to get back to a world where when people add value, they get paid for it. And it's honest. And of course, the flip side of that is just as Netflix proved, and for that matter, Apple with the App Store and many other examples, we have to encourage business models where people
Starting point is 00:35:54 pay for what they want. So, you know, Google should say, hey, search won't be free after 10 years. We're going to gradually start removing the free option. And what you'll get in exchange for that is no more commercial bias and crap on our search results. This is just going to gradually start removing the free option. And what you'll get in exchange for that is no more commercial bias and crap on our search results. This is just going to be serving you. You're going to pay for it. Facebook, same thing. We're going to commit to not having any ads in 10 years. And yeah, you'll start paying for it, but it'll be a great deal. It'll be affordable. You'll get peak Facebook. And just like you got peak TV from places like, you know, HBO and Netflix. We're going to give you peak social media where you can get better information and less crap.
Starting point is 00:36:32 But the other part of that is a little more complicated, which is if you keep your eye out for a piece I have coming out with a colleague in the Harvard Business Review. Sorry to I know it's a snobby thing, but anyway, it's a place to start. We're starting to scope out how to do this in much more detail than before. And a lot of it has to do with creating in-between institutions. Right now, if there's nothing but a bunch of individuals and one giant tech platform like a Facebook or Google,
Starting point is 00:37:02 there's this bizarre situation where we're petitioning the central authority that we have no other power over, that we didn't vote for, to police our own speech and to police our own behavior. And it's just not tenable. We're demanding authoritarianism. And the way around that is to create middle-sized organizations that are analogous to things like scientific journals or universities or trade unions or many other examples, where you can voluntarily join these things and they collectively bargain for you so you can get paid decently instead of having a giant race to the bottom. And they can become brands in themselves that enforce quality and become trustworthy. And so we have to create this sense of intermediate structures. And
Starting point is 00:37:45 remember, in the past, before the internet, the place where excellence and compassion and trustworthiness came from was not the central government declaring it, but rather things like universities and scientific journals and high-quality news outlets developing a reputation and being selective. But that was all voluntary, so it wasn't authoritarian. And so if you have in-between-sized organizations, you can have all these effects that would be authoritarian if they were global and directed from the center. And all of those institutions are exactly the ones that were weakened and destroyed when Facebook said, we're going to move fast and break things. Stuff that was broken were all of those in-between organizations.
Starting point is 00:38:29 And so we have to rebuild them in a new way in order to have this more humane and sustainable internet. It's worth reminding ourselves after those two clips that social media is not entirely destructive. It has potential to do plenty of good, and it has realized some of that potential. There are personal stories of friendships, reconnections, knowledge growth, business opportunities, and meaningful political change which can credit themselves to the advent of social media.
Starting point is 00:39:00 And it can offer valuable, real-time information. So we'll try our best to emphasize a hope to not throw out the perennial baby with the bathwater in the criticism. On that note, we'll listen in now to Sam's conversation with Jack Dorsey. Dorsey co-founded Twitter and is cognizant of the monster which he's created and the struggle to harness it for good. Since this conversation with Sam, Dorsey stepped down from his role as CEO of Twitter, though he's still the CEO of Square, which is a financial tool he also founded. We'll resist the temptation to read into the move away
Starting point is 00:39:38 from Twitter as admitting defeat in his efforts to tame the beast. In this portion of their conversation, Sam and Dorsey discuss how Twitter has entrenched itself into the political and journalistic environment, for better or worse. Dorsey mentions the echo chamber or filter bubble phenomenon, which describes only seeing and hearing news and opinion which coheres with your particular perspective. This phenomenon tends to warp one's worldview and exacerbate partisanship. After we hear from Dorsey, we'll offer an alternative analogy,
Starting point is 00:40:12 which might be even more potent and poisonous to our psychology and democracy. We're going to allow this clip to get into some of the specific policy knots that get tied when any experiment like social media gets underway. Here is Sam with Jack Dorsey from episode 148. You've got these two massive companies which, at least from the public-facing view,
Starting point is 00:40:37 seem diametrically opposed in the level of controversy they bring to the world and to your life, presumably. Square seems like a very straightforward, successful, noble pursuit about which I can't imagine there's a lot of controversy. I'm sure there's some that I haven't noticed, but it must be nothing like what you're dealing with with Twitter. How are you triaging the needs of a big company that is just functioning like a normal big company and Twitter, which is something which on any given day can be just front page news everywhere, given the sense of either how it's helping the world. The thing that's amazing about Twitter is that it's enabling revolutions that we might want to support, right?
Starting point is 00:41:27 Or the empowerment of dissidents. And there's just this one Saudi teenager who was tweeting from a hotel room in the Bangkok airport that she was worried that her parents would kill her. And I don't think it's too much to say that Twitter may have saved her life in that case. I'm sure there are many other cases like this where she was granted asylum in Canada. And so these stories become front page news, and then the antithetical story becomes front page news. So we know that ISIS recruits terrorists on Twitter, or there are fears that misinformation spread there undermines democracy. And how do you deal with being a normal CEO and being a CEO in this other channel, which is anything but normal? Well, both companies and
Starting point is 00:42:14 both spaces that they create in have their own share of controversy. But I find that in the financial realm, it's a lot more private. Whereas with communication, it has to be open. And I would prefer them both to be out in the open. I would prefer to work more in public. I'm fascinated by this idea of being able to work in public, make decisions in public, make mistakes in public. And I get there because of my childhood. I was a huge fan of punk rock back in the day, and then that transitioned to hip-hop, and that led me to a lot of open source, where people would just get up on stage and do their thing, and they were terrible. And you saw them a month later, and they were a little bit better,
Starting point is 00:43:03 and then a month later, they're a little bit better and then a month later they're a little bit better and we see the same thing with open source which led me to technology ultimately but so i i approach it with with that understanding of that you know we're not here just to make one single statement that stands the test of time that our medium at Twitter is conversation and conversation evolves. And ideally it evolves in a way that we all learn from it. There's not a lot of people in the world today that would walk away from Twitter saying, ah, I learned something, but that would be my goal. And we need to figure out what element of the service and what element of the product we need to bolster or increase or change in order to do that. So I guess in my role of CEO at Twitter, it's how do I lead this company in the open, realizing that we're going to take a lot of bruises along the way. But in the long
Starting point is 00:44:06 term, what we get out of that, ideally, is earning some trust. And we're not there yet, but that's the intention. If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs, and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.
