Making Sense with Sam Harris - #71 — What Is Technology Doing to Us?

Episode Date: April 14, 2017

Sam Harris speaks with Tristan Harris about the arms race for human attention, the ethics of persuasion, the consequences of having an ad-based economy, the dynamics of regret, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 Thank you. To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Today I'm speaking with Tristan Harris. Tristan has been called by the Atlantic magazine the closest thing that Silicon Valley has to a conscience. He was a design ethicist at Google and then left the company to start a foundation called Time Well Spent,
Starting point is 00:01:06 which is a movement whose purpose is to align technology with our deepest interests. Tristan was recently profiled on 60 Minutes. That happened last week. He's worked at various companies, Apple, Wikia, Apsure, and Google. And he graduated from Stanford with a degree in computer science, having focused on human-computer interaction. We talk a lot about the ethics of human persuasion and about what information technology is doing to us and allowing us to do to ourselves. This is an area which I frankly haven't thought much about, so listening to Tristan was a bit of an education. But needless to say, this is
Starting point is 00:01:46 an area that is not going away. We are all going to have to think much more about this in the years ahead. In any case, it was great to talk to Tristan. I have since discovered that I was mispronouncing his name. Apologies, Tristan. Sometimes having a person merely say his name in your presence proves insufficient. Such are the caprices of the human brain. But however you pronounce his name, Tristan has a lot of wisdom to share. And he's a very nice guy as well. So, meet Tristan Harris. Meet Tristan Harris.
Starting point is 00:02:29 I am here with Tristan Harris. Tristan, thanks for coming on the podcast. Thanks for having me, Sam. So we were set up by some mutual friends. We have a few friends and acquaintances in common. And you are in town doing an interview for 60 Minutes. Yep. So I was actually, I confess, I was not aware of your work. I think I'd seen the Atlantic article that came out on you recently, but I think I had only
Starting point is 00:02:49 seen it. I don't think I had read it. But what you're doing is fascinating and incredibly timely, given our dependence on this technology. And I think this conversation we're going to have, I'm imagining it's going to be something like a field guide to what technology is doing to the human mind. I think we'll talk about how we can decide to move intentionally in that space of possibilities in a way that's healthier for all of us. And this is obviously something you're focused on. But to bring everyone up to speed, because even I was not up to speed until just a few days ago, what is your background? And I've heard you've had some very interesting job titles at Google, perhaps among other places. One was the resident
Starting point is 00:03:31 product philosopher and design ethicist at Google. So how did Tristan Harris get to be Tristan Harris? And what are you doing now? Well, first, thanks for having me, really. It's an honor to be here. I'm a big fan of this podcast. So yeah, my role at Google, that was an interesting name. So design ethicist and product philosopher, I was really interested in essentially when a small number of people in the tech industry influence how a billion people think every day without even knowing it. If you think about your role as a designer, how do you ethically steer a billion people's thoughts, framings, cognitive frames, behavioral choices, basically the schedule of people's lives. And so much of what happens on a screen, even though people feel as if they're making their own choices, will be determined by the design choices of the people at Apple and Google and Facebook.
Starting point is 00:04:24 So we will talk, I'm sure, a lot more about that. I guess prior to that, when I was a kid, I was a magician very early. And so I was really interested in the limits of people's minds that they themselves don't see, because that's what magic is all about, that there really is a kind of band of attention or short-term memory or ways that people make meaning or causality that you can exploit as a magician. And that had me fascinated as a kid. And I did a few little magic shows. And then flash forward, when I was at Stanford, I did computer science, but I also studied as part of a lab called the Persuasive Technology Lab with BJ Fogg, which basically taught engineering students how this kind of library of persuasive techniques and habit formation techniques in order to build
Starting point is 00:05:12 more engaging products, basically different ways of taking advantage of people's cognitive biases so that people fill out email forms, that people come back to the product, so that people register a form, so that they fill out their LinkedIn profiles, so that they tag each other in photos. And I became aware when I was at Stanford doing all this that there was no conversation about the ethics of persuasion. And just to ground how impactful that cohort was, in my year in that class in the persuasive technology lab, my project partners in that class and very close friends of mine were the founders of Instagram. And many other alumni of that year in 2006 actually went on to join the executive ranks at many companies we know, LinkedIn and Facebook, when they were just getting started. And again, never before in history have such a
Starting point is 00:06:02 small number of people with this tool set influenced how people think every day by explicitly using these persuasive techniques. And so at Google, I just got very interested in how we do that. Yeah. And so you were studying computer science at Stanford? Originally computer science, but I dabbled a ton in linguistics and actually symbolic systems. Yeah. Because you were at Stanford eventually. Yeah, yeah. So that was a great major at Stanford. I was in the philosophy department. There was overlap between philosophy and computer science for symbolic systems. I think Reid Hoffman was
Starting point is 00:06:31 one of the first symbolic systems majors at Stanford. So yeah, so persuasion, the connection to magic is interesting. There's an inordinate number of magicians and fans of magic in the skeptical community as well, perhaps somewhat due to the influence of James Randi. But I mean, magic is really the ultimate act of persuasion. You're persuading people of the impossible. So you see a significant overlap between the kinds of hacks of people's attention that magicians rely on and our new persuasive technology? that magicians rely on and our new persuasive technology? Yeah, I think if you just abstract away what persuasion is, it's the ability to do things to people's minds that they themselves won't even see how that process took place. And I
Starting point is 00:07:16 think that parallels your work in a big way in that beliefs do things. To have a belief shapes the subsequent experience of what you have. I mean, in fact, in magic, there's like principles where you kind of want to start bending reality and creating these aha moments so that you can do a little hypnosis trick later, for example, that people be more likely to believe having gone through a few things that have kind of bent the reality into being more superstitious or more open. And there's just so many ways of doing this that most people don't really recognize. I wrote an article called How Technology Hijacks Your Mind that ended up going viral to about a million people. And it goes through a bunch of these different techniques.
Starting point is 00:07:55 But yeah, that's not something people mostly think about. You also said in the setup for this interview that you have an interest in cults. Yeah. What's that about? And to what degree have you looked at cults? Well, I find cults fascinating because they're kind of like vertically integrated, persuasive environments. Instead of just persuading someone's behavior or being the design of a supermarket or the design of a technology product, You are designing the social relationships, the power dynamic between a person
Starting point is 00:08:28 standing in front of an audience. You can control many more of the variables. And so I've done a little bit of sort of undercover investigation of some of these things. You mean actually joining a cult? No, not joining, but... Showing up physically? Showing up physically. Many of these things are... None of these cults ever would call themselves
Starting point is 00:08:49 cults. I mean, many of them are simply workshops, sort of new agey style workshops. But you start seeing these parallels in the dynamics. Do you want to name any names? Do I know these? I might prefer not to at the moment. We'll see if we get there. Okay. You have a former girlfriend who's still in one? at the moment. We'll see if we get there. Okay. You have a former girlfriend who's still in one? No, but I did actually, one of the interesting things is the way that people that I met in those cults who eventually left and later talked about their experience and the confusion that you face, and I know this is an interest you've had, the confusion that you face when you've gotten many benefits from a cult. You've actually deprogrammed, let's say, early
Starting point is 00:09:26 childhood traumas or identities that you didn't know you were holding or different ways of seeing reality that they helped you, you know, get away from. And you get these incredible benefits and you feel more free, but then you also realize that was all part of this larger persuasive game to get you to spend a lot of money on classes or courses or these kinds of things. And so what the confusion that I think people experience in knowing that they got all these benefits, but then also felt manipulated, and they don't know in the sort of mind's natural black and white thinking how to reconcile those two facts. I actually think there's something parallel there with technology. Because for example, in my previous work on this, a lot of people expect
Starting point is 00:10:05 you if you're criticizing how technology is designed, if you might say something like, oh, you're saying Facebook's bad, but look, I get all these benefits from Facebook. Look at all these great things it does for me. And it's because people's minds can't hold on to both the truth that we do derive lots of value from Facebook. And there's many manipulative design techniques across all these products that are not really on your team to help you live your life. And that distinction is very interesting when you start getting into what ethical persuasion is. Yeah, it is a bit of a paradox because you can get tremendous benefit from things that are either not well-intentioned or just objectively bad for you or not optimal. I mean, the ultimate case is you hear from all these people who survived cancer, and cancer was
Starting point is 00:10:54 the most important thing that ever happened to them. So a train wreck can be good for you on some level, because your response to it can be good for you. You can become stronger in all kinds of ways, even by being mistreated by people. But it seems to me that you can always argue that there's probably a better way to get those gains. Well, and this is, I mean, frankly, with your work on the moral landscape, you know, when you're thinking about if you're a designer at Facebook or at Google, because of how frequently people turn to their phone, you're essentially scheduling these little blocks of people's time. If you put, you know, if I immediately notify you for every Snapchat message, which Snapchat is one of the most abusive, more manipulative of the
Starting point is 00:11:37 technology products, there, you know, when you see a message from a friend in that moment urgently, a lot that will cause a lot of people to go swipe over and not just see that message, but then get sucked into all the other stuff that they've been hiding for you, right? And that's all very deliberate. And so if you think of it as, let's say you're a designer at Google and you want to be ethical and you're steering people towards these different timelines, you're steering people towards schedule A in which these events will happen or schedule B in which these other events will happen.
Starting point is 00:12:04 You know, back to your point, should I schedule something that you might find really challenging schedule A in which these events will happen or schedule B in which these other events will happen. You know, back to your point, should I schedule something that you might find really challenging or difficult, but that later you'll feel is incredibly valuable? Do I take into account the peak end effect where people will have a peak of an experience and end? Do I take a lot of their time or a little bit of their time? Should the goal be to minimize how much time people spend on the screen? What is the value of screen time? And what are people doing that's lasting and fulfilling? And when are you steering people as a designer towards choices that are more shallow or empty? So you're clearly concerned about time, as we all should be.
Starting point is 00:12:37 It's the one non-renewable resource. It's the one thing we can't possibly get back any of, no matter what other resources we marshal. And it's clear that our technology, especially smartphone-based technology, is just a kind of bottomless sink of time and attention. I guess there's the other element that we're going to want to talk about, which is the consequence of bad information or superficial information and just what it's doing to our minds. I mean, the fake news phenomenon being of topical interest, but just the quality of what we're paying attention to is crucial. But the automaticity of this process, the addictiveness of this process, the fact that
Starting point is 00:13:21 we're being hooked and we're not aware of how calculated this intrusion into our lives is. This is the thing that's missing is that people don't realize, because there's this the most common narrative, and we hear this all the time, that technology is neutral and it's just up to us to choose how we want to use it. And if it happens, if people do fake news or if people start wasting all their time, that that's just people's responsibility. What this, that that's just people's responsibility. What this misses is that because of the attention economy, which is every basically business, whether it's a meditation app or the New York Times or Facebook or Netflix or YouTube,
Starting point is 00:13:57 you're all competing for attention. The way you win is by getting someone's attention and by getting it again tomorrow and by extending it for as long as possible. So it becomes this arms race for getting attention. And the best way to get attention is to know how people's minds work so that you can basically push some buttons and get them to not just come, but then to stay as long as possible. So there are design techniques like making a product more like a slot machine that has a variable schedule reward. So, you know, for example, I know you use Twitter. You know, when you land on Twitter, notice that there's that extra variable time delay
Starting point is 00:14:31 between like one and three seconds before that little number shows up. You have a return and the page loads. There's this extra delay. I haven't noticed that. Yeah, hold your breath. And then there's a little number that shows up for the notifications.
Starting point is 00:14:43 And that delay makes it like a slot machine. Literally, when you load the page, it's as if you're pulling a lever, and you're waiting, and you don't know how many there's going to be. Is there going to be 500 because of some big tweet storm, or is there going to be one? Does it always say 99? Well, not everyone as Sam Harrison has so many. No, but isn't that always the maximum? It never says 500, right?
Starting point is 00:15:04 So I don't, because again, I'm not you. I don't have as many followers. No, well, I think I can attest that, I mean, mine is always at 99. So it's no longer salient to me. Well, right, which actually speaks to how addictive variable rewards work, which is the point is it has to be a variable reward. So the idea that I push a lever or pull a lever, and sometimes I get, you know, two, and sometimes I get nothing, and sometimes I get 20. And this is the same thing with email. Well, let's talk about what is the interest of the company, because I think most people are only dimly aware. I mean, they're certainly aware that these companies make money off of ads very often. They sell your data. So your attention is their resource.
Starting point is 00:15:46 But take an example. I mean, so something like Twitter seemingly can't figure out how to make money yet, but Facebook doesn't have that problem. Let's take the clearest case. What is Facebook's interest in you as a user? Well, obviously, there's many sources of revenue, but it all comes down to, whether it's data or everything else, it comes down to advertising and time. Because of the link that more of your attention or more of your time equals more money, they have an infinite appetite in getting more of your time.
Starting point is 00:16:19 So time on your newsfeed is what they want. That's right. And this is literally how the metrics and the dashboards look. I mean, they measure what is the current sort of distribution of time on site. Time on site is the, that and seven-day actives are the currency of the tech industry. And so the only other industry that measures users that way is sort of drug dealers, right? Where you have the number of active users who log in every single day. So that combined with time on site are the key principle metrics.
Starting point is 00:16:47 And the whole goal is to maximize time on site. So Netflix wants to maximize how much time you spend there. YouTube wants to maximize time on site. They recently celebrated people watching more than a billion hours a month. And that was a goal. And not because there's anyone who's evil or who wants to steal people's time, but because of the business model of advertising, there is simply no limit on how much attention that they would like from people. Well, they must be concerned about the rate at which you click
Starting point is 00:17:15 through to their ads, or are they not? They can be concerned about that, and ad rates are depreciating, but because they can make money just by simply showing you the thing. And there is some link between showing it to you and you clicking. You can imagine with more and more targeted things that you are seeing things that that are profitable. And there's always going to be someone willing to pay for that space. But this problem means that as this starts to saturate, because we only have so much time to even hold on to your position in the attention economy, what do you do? You have to ratchet up how persuasive you are. So here's a concrete example. If you're YouTube, you need to add autoplay the next video to YouTube. You didn't used to do that in the last year. I always find that incredibly annoying. Yep. I wonder what percentage of people find that annoying. Is it
Starting point is 00:18:00 conceivable that that is still a good business decision for them, even if 99% of people hate that feature? Well, it's with the whole exit voice for loyalty, if people don't find it so annoying that they're going to stop using YouTube, because the defense, of course, is... There's no way they're going to stop using YouTube. Of course not. And that's what these companies often hide behind this notion that if you don't like it, you can stop using the product. But while they're saying that, I mean, they have teams of thousands of engineers whose job is to deploy these techniques I learned at the persuasive technology lab to get you to spend as much time as possible. But just with that one example, let's say YouTube adds autoplay the next video. So they just add
Starting point is 00:18:37 that feature. And let's say that increases your average watch time on the site every day by 5%. So now they're eating up 5% more of this limited attention market share. So now Facebook's sitting there saying, well, shoot, we can't let this go to dry. So we've got to actually add autoplay videos to our newsfeed. So instead of waiting for you to scroll and then click play on the video, they automatically play the video. They didn't always used to do that. Yeah, it's another feature I hate. Yep. And the reason though that they're doing that, what people miss about this is it's not by accident. The web and all of these tools will continue to evolve to be more engaging and to take more time because that is the business model. And so you end up in this arms race for essentially who's a better magician, who's a better persuader,
Starting point is 00:19:22 who knows these back doors in people's minds as a way of getting people to spend more time. Now, do you see this as intrinsically linked to the advertising model of revenue, or would this also be a problem if it was a subscription model? It's a problem in both cases, but advertising exacerbates the problem. So you're actually right that, for example, Netflix also maximizes time on site. What I heard from someone through some back channels was that the reason they have to do this is they found that if they don't maximize, for example, they have this auto countdown watching the next episode, right? So they don't have to do that. Why are they doing that?
Starting point is 00:20:01 Strangely, I like that feature. Try to figure that out, psychologists among you. Well, and this is where it gets down to what is ethical persuasion, because that's a one persuasive transaction where they are persuading you to watch the next video. Yeah. But in that case, you're happy about it. I guess the reason why I'm happy about it there is that it is, at least nine times out of ten, it is by definition something I want to watch because it's in the same series as the series I'm already watching, right? Whereas YouTube is showing me just some random thing that they think is analogous to the thing I just watched. And then when you're talking about Facebook, or I guess I've seen this feature on embeds in news stories, like in the Atlantic or Vanity Fair, the moment
Starting point is 00:20:40 you bring the video into the frame of the browser, it'll start playing. I just find that annoying, especially if your goal is to read the text rather than watch the video. Yep. But again, there's this, because of the game theory of it, when one news website evolves that strategy, you can think of these as kind of organisms that are mutating new persuasive strategies that either work or not at holding onto people's attention. And so you have some neutral playing field, and one guy mutates this strategy on the news website of auto-playing that video when you land. Let's say it's CNN. So now the other news websites, if they want to compete with that, they have to, and assuming that CNN has enough market share that that makes a difference, the other ones have to start trending in that direction. And this is why the internet has moved from being this neutral feeling resource where you're kind of
Starting point is 00:21:23 just accessing things to feeling like there's this gravitational wormhole suck kind of quality that pulls you in. And this is what I think is so important. You asked, you know, how much of this is due to advertising and how much of it is due to the hyper competition for attention. It's both. One is we have to be able to decouple the link between how much attention we get from you and how much money we make. And we actually did the same thing with, for example, in energy markets, where it used to be the energy companies made more money the more energy you use. And so therefore, they have an incentive. They want you to please leave the lights on.
Starting point is 00:21:58 Please leave the faucet on. We are happy. We're making so much more money that way. But of course, that was a perverse incentive. And so this new regulatory commission got established that basically decoupled, it was called decoupling, it decoupled the link between how much energy you use and how much energy they, how much money they make. Well, and there's some ads online, I can't even figure out how they're working or why they're there. There are these horrible ads at the bottom of even the most reputable websites, like the Atlantic.
Starting point is 00:22:30 You'll have these ads. I think usually they're framed from around the web, and it'll be an ad like, you won't believe what these child celebrities look like today. Yeah, Taboola and Outbrain. There's a whole actual market of companies that specifically provide these related links at the bottom of news websites. garish ad after another. But the thing that mystifies me is when you click through to these things, I can't see that it ever lands at a product that anyone who was reading that article would conceivably buy. I mean, you're just going down the sinkhole into something horrible. Everything looks like a scam. It's just... It all comes down to money, though. The reason why... So
Starting point is 00:23:21 I actually know a lot about this because the company the way i arrived at google was they bought our little uh startup company for our talent and we didn't do what this this sort of market of websites did but we were almost being pushed by publishers who used our technology to do that uh so one of the reasons i'm so sensitive to this time on site stuff is because i had a little company called apsure which provided uh little in-depth background pieces of information without making you leave news websites. So you'd be on The Economist, and it would talk about Sam Harris. And you'd say, who's Sam Harris? You'd highlight it, and we'd give you sort of a multimedia background or thing, and you could interactively explore and go deeper. And the reason we sold this, the reason why The Economist wanted it on their website, is because it increased time on site.
Starting point is 00:24:04 sold this, the reason why economists wanted it on their website is because it increased time on site. And so I was left in this dilemma where the thing that I got up to do in the morning as a founder was, let's try to help people understand things and learn about things. But then the actual metric was, is this increasing our time on site or not? And publishers would push us to either increase revenue or increase time on site. And so the reason that The Economist and all these other even reputable websites have these bucket of links at the bottom is because they actually make more money from Taboola and Outbrain and a few others. Now, time on site seems somewhat insidious as a standard, except if you imagine that the content is intrinsically good. Now, I'm someone who's slowly but surely building a meditation app,
Starting point is 00:24:47 right? So now time on my app will be time spent practicing meditation. And so insofar as I think that's an intrinsically good thing for someone to be doing, anything I do in the design of the app so as to make that more attractive to do, and in the best case, irresistible to do, right? I mean, the truth is, I would like an app in my life that got me to do something that is occasionally hard to do, but I know is worth doing and good for me to do, rather than waste my time on Twitter. Something like meditation, something like exercise, eating more wisely. I don't know how that can be measured in terms of time, but there are certain kinds of manipulations,
Starting point is 00:25:31 speaking personally, of my mind that I would happily sign up for, right? So how do you think about that? Absolutely. So this is a great example. So because of the attention economy constantly ratcheting up these persuasive tricks, the price of entry for, say, a new meditation app is you're going to have to try and find a way to sweeten that front door so that that competes with the other front doors that are on someone's screen at the moment when they wake up in the morning. And of course, as much as I, and I think many of us don't like to do this, it's like the Twitter and the Facebook and the email ones are just so compelling first thing in the morning, even if that's not what we'd like to be doing. And so because all of these different apps are neutrally competing on the same playing field for morning attention, and not a specific kind of like helping Sam wake up best in the morning, for your meditation app, and what many meditation apps I personally know, they have to provide these usually these notifications. So they start realizing, oh, shoot, Facebook and Twitter are notifying people first thing in the morning to get their attention. So if we're going to stand a chance to get in the game, we have to start notifying people.
Starting point is 00:26:44 with, it's this race, classic race to the bottom, you don't end up with, you know, a screen you want to wake up to in the morning at all. It's not good for anybody. But it all became, came from this, this need to basically get there first to race up. So wouldn't we want to change the, you know, the structure of what you're competing for? So it's not just attention at all cost. So yeah, so you have called for what I think you've called a Hippocratic Oath for software designers. First, do no harm. What do you think designers should be doing differently now? Well, I think of it less as the Hippocratic Oath. That's the thing that got captured in the Atlantic article. But a different way to think about it is that the attention economy is like this city. Essentially, Apple and Google and Facebook are the urban planners of this city
Starting point is 00:27:26 that a billion people live inside of. And we all live inside of it. Like a billion people live inside of this attention city. And in that city, it's designed entirely for commerce. It's maximizing basically attention at all costs. And that was fine when we first got started. But now, this is a city that people live inside of. I mean, the amount of time people spend on their phone, they wake up with them, they
Starting point is 00:27:50 go to sleep with them, they check them 150 times a day. That's actually a real figure too, right? 150 times a day is a real figure for sure. Yeah. And so now what we'd want to do is organize that city almost like, you know, Jane Jacobs created this sort of livable cities movement and said, you know, there are things that make a great city great. There are things that make a city livable. You know, she pointed out eyes on the street, you know, stoops in New York. She was talking about Greenwich Village. These are things that make a neighborhood feel different, feel more
Starting point is 00:28:19 homey, livable, safe. These are values people have about what makes a good urban planned city. There is no set of values to design this city for attention. So far, it's been this Wild West, let each app compete on the same playing field to get attention at all costs. So when you ask me, what should app designers do? I'm saying it's actually a deeper thing. That's like saying, what should the casinos who are all building stuff in the city do differently? If a casino is there and the only way for it to even be there is to do all the same manipulative stuff that the other casinos are doing, it's going to go out of business if it doesn't do that. So the better question to ask is, how would we reorganize the city by talking to the urban planners, by talking
Starting point is 00:29:02 to Apple, Google, and Facebook to change the basic design. So let's say there are zones. And one of the zones in the attention economy city would be the morning habits zone. So now you just get things competing for what's the best way to help people wake up in the morning, which could also include the phone being off, right? That could be part of how the phone, the option of the phone being off for a period of time and telling your friends that you're not up until 10 in the morning or whatever, could be one of the things competing for the morning part of your life in the life zone there. And that would be a better strategy than trying to change, you know, meditation app designers to take a Hippocratic oath to be more responsible when the whole game is just not set up for them to succeed.
Starting point is 00:29:44 both to be more responsible when the whole game is just not set up for them to succeed? Well, to come back to that question, because it's of personal interest to me, because I do want to design this app in a way that seems ethically impeccable. If the thing you're directing people to is something that you think is intrinsically good, and forget about all the competition for mindshare that exists that you spoke about. It's just hard to do anyway. I mean, people are reluctant to do it. That's why I think an app would be valuable, and I think the existing apps are valuable. So if you think that any time on app is time well spent, which I don't think Facebook can say, I don't think Twitter can say, but don't think Twitter can say, but I think
Starting point is 00:30:25 Headspace can say that. Whether or not that's true, someone else can decide. But I think without any sense of personal hypocrisy, I think they feel that if you're using their app more, that's good for you, right? Because they think that it's intrinsically good to meditate. And I'm sure any exercise app, or the health app, or whatever it is, I'm sure any exercise app, you know, the health app or whatever it is, I'm sure that they all feel the same way about that. And they're probably right. Take that case, and then let's move on to a case where everyone's motives are more mercenary and where time on the app means more money for the company, which isn't necessarily the case for some other apps. When time on the app is intrinsically good,
Starting point is 00:31:12 why not try to get people's attention any way you can? Right. Well, so this is where the question of metrics is really important, because in an ideal world, the thing that each app would be measuring would align with the thing that each person using the app actually wants. So time well spent would mean, in the case of Meditation app, asking the user, I mean, just not using the app actually wants. So time well spent would mean, in the case of meditation app, asking the user, I mean, just not saying the app would do this, but if you were to think about it, a user would say, okay, in my life, what would be time well spent for me in the morning, waking up? And then imagine that whatever the answer to that question is, should be the rankings in the app store, rewarding the apps that are best at that. So that, again, is more the systemic answer that the systems like the app stores and the ranking functions that run,
Starting point is 00:31:54 say, Google Search or Facebook News Feed would want to sort things by what helps people the most, not what's got the most time. And the measure of that would be the evaluation of the user? I mean, some questionnaire-based rating? Yeah. Is this working for you? Yeah. And in fact, we've done some initial work with this. Actually, there's an app called Moment on iOS.
Starting point is 00:32:18 So Moment tracks how much time you spend in different apps. You send it a screenshot of your battery page on the iPhone, and it just captures all that data. And we've actually, they partnered with Time Well Spent to ask people, which apps do you find are most time well spent, most happy about the time you spent, when you can finally see this is all the time you spent in it, and which apps do you most regret?
Starting point is 00:32:40 And we have the data back that people regret the time that they spend in Facebook, Instagram, Snapchat, and WeChat the most. And they tend, so far, the current rankings are, for the most, are like MyFitnessPal and podcasts, and there's a bunch of other ones that I forgot. The irony is that being ranked first in regret is probably as accurate a measure as any of the success of your app. Yeah, exactly. And this is why the economy isn't ranking things or aligning things with what we actually want. I mean, if you think about it as everything is a choice architecture,
Starting point is 00:33:16 and you're sitting there as a human being worth picking from a menu, and currently the menu sorts things by what gets the most downloads, the most sales, the most gets most talked about the things that most manipulate your mind. So the whole economy has become this, if you assume marketing is as persuasive as it is in a bigger level, the economy reflects what's best at manipulating people's psychology, not what's actually best in terms of delivered benefits in people's lives. And so if you think about this as a deeper systemic thing about how would you want the economy to work,
Starting point is 00:33:49 you'd want it to rank things so that the easiest thing to reach for would be the things that people found to be most time well spent in their lives for whatever category of life choice that they're making at that moment in terms of making choices easy or hard because you can't escape,
Starting point is 00:34:04 in every single moment there is a menu and some menu, and some choices are easy to make, and some choices are hard to make. It seems to me you run into a problem which behavioral economists know quite well, and this is something that Danny Kahneman has spoken a lot about, that there's a difference between the experiencing self moment to moment and the remembered self. So when you're giving someone a questionnaire, asking them whether their time on all these apps and websites was well spent, you are talking to the remembered self. And Danny and I once argued about this, how to reconcile these two different testimonies. But at minimum, you can say that they're reliably different, so that if you were
Starting point is 00:34:43 experience sampling people along the way, you way, for every 100 minutes on Facebook, every 10 minutes you were saying, how happy are you right now? You would get one measure. If at the end of the day, you ask them, how good a use of your time was that to be on Facebook for 100 minutes? You would get a different measure. Sometimes they're the same, but they're very often different. And the question is who to trust. Where are the data that you're going to use to assess whether people are spending their time well? Well, I mean, the problem right now is that all of the metrics
Starting point is 00:35:16 just relate to the current present self version, right? Everything is only measuring what gets most clicked or what gets most shared. So back to fake news, just because something is shared the most doesn't mean it's the most true. Just because something gets clicked the most doesn't mean it's the best. Just because something is talked about the most doesn't mean that it's real or true, right? The second that Facebook took away its human editorial team for the Facebook trends. And they fired that whole team. And so it's just an AI picking what the most popular news stories are. Within 24 hours,
Starting point is 00:35:52 it was gamed and the top story was a false story about Megyn Kelly and Fox News. And so right now, of getting into AI about all of these topics, AIs essentially have a pair of eyes or sensors that are trying to pick from these impulsive or immediate signals. And it doesn't have a pair of eyes or sensors that are trying to pick from these impulsive or immediate signals and it doesn't have a way of being in the loop or in conversation with our more reflective selves it can only talk to our present in the moment selves and so you can imagine some kind of weird dystopian future where the entire world is only listening to your present in the moment feelings and thoughts which are easily gameable by persuasion. Although it just is a question how to reconcile the difference between being pleasantly engaged moment by moment in a activity at the end of which you will say,
Starting point is 00:36:40 I kind of regret spending my time that way. There are certain things that are captivating where you're hooked for a reason, right? Whether it's a video game or whether you're eating french fries or popcorn or something that is just perfectly salted so that you just can't stop, you're binging on something because in that moment it feels good. And then retrospectively, very often you regret that use of time. Well, so one frame of this is this sort of shallow versus deep sense. That's what you're getting at here is a sense of something can either be full but empty, which we don't have really words in the English language for this, or something can be full and fulfilling. Things can be very engaging or pleasurable, but not fulfilling.
Starting point is 00:37:24 Yes. And even more specifically, regretted. Yeah. And then there's the set of choices that you can make for a timeline. If you're, again, scheduling someone else's life for them, as people at Google and Facebook do every day, you know, where you can schedule a choice that is full and fulfilling. Now, does that mean that we should never put choices on the menu that are full, but you regret? Like, should we never do that for Google or for Facebook? That's one frame, but let me actually flip it around and make it, I think, even more philosophically interesting. Let's say that in the future, YouTube is even
Starting point is 00:37:53 better at knowing exactly what at every bone in your body you've been meaning to watch, like the professor or lecture that you've been told was the best lecture in the world, or just think about what every bone in your body tells you, in fact, would be full and fulfilling for you. And let's imagine this future deep mind powered version of YouTube is actually putting those perfect choices next on the menu. So now it's auto playing the perfect next thing that is also full and fulfilling. There's still something about the way the screen is steering your choices that are not about being in alignment with the life you want to live, because it's not in alignment with the time dimension now. So now it's sort of blowing open or blowing past boundaries.
Starting point is 00:38:36 You have to bring your own boundaries. Right. You have to resist the perfect. You have to resist the perfect. Now, should that be... And by the way, because of this arms race, that is where we're trending to. People don't understand this. The whole point of the attention economy, because of this need to maximize attention, that's where YouTube will be in the future. And so wouldn't you instead say, I want Netflix's goal to basically optimize for whatever is time well spent for me, which might be, let's say for me, watching one really good movie a week that I've been really meaning to watch. And that's because I'm defining that. It's in conversation with me about what I reflectively would say is time well spent. And it's not trying to just say you should maximize as much as possible. And for that
Starting point is 00:39:18 relationship to work, the economy would have to be an economy of loyal relationships, meaning I would have to recognize as a consumer that relationships, meaning I would have to recognize as a consumer that even though I only watch one movie a week, that's enough to justify my relationship with Netflix. Because they found in this case that if they don't maximize time on site, people actually end up canceling their subscription over time. And so that's why they're still trapped in the same race. Right. And what concerns you most in this space? Is it social media more than anything else? Or is everything that's grabbing attention engaged in the same arms race and kind of of equal concern to you?
Starting point is 00:39:54 Well, as a systems person, it's really the system. It's the attention economy. It's the race for attention itself that concerns me. Because one is people are, in the tech industry, appear to me very often as being blind to what that race costs us. You know, if one, let's, I mean, for example, the fake news stuff, instead of going to fake news, let's call it fake sensationalism. You know, the newsfeed is trying to figure out what people click the most. And if one news site evolves the strategy of outrage, outrage is a way better persuasive strategy at getting you to click
Starting point is 00:40:30 if it generates outrage. And so the newsfeed, without even having any person at the top of it, any captain of the ship saying, oh, I know what's going to be really good for people is outrage, or that'll get us more attention. It just discovers this as an invisible trait that starts showing up in the AI. So it starts steering people towards news stories that generate outrage. And that's literally where the news feeds have gone in the last three months. This is where we are. True or fake, it's an outrage machine.
Starting point is 00:40:57 And then the question is, how much is that outrage? I mean, if you thought about it, in the world, is there any lack of things that would generate outrage? I mean, if you thought about it, in the world, is there any lack of things that would generate outrage? I mean, there's an infinite supply of news today, and there was even 10 years ago, that would generate outrage. Right. And if we had the perfect AI 10 years ago, we could have also delivered you a day full of outrage. And so... That's a funny title.
Starting point is 00:41:19 A day full of outrage. Yeah. How easy would that be to market? A day full of outrage. Nobody thinks they want that, but we're all acting like that's exactly what we want. Well, and I think this is where the language gets interesting, because when we talk about what we want, we talk about what we click. But in the moment right before you click, I mean, I'm kind of a meditator too, it's like,
Starting point is 00:41:38 I notice that what's going on for me right before I click is not, as you know from free will, like, how much is that a conscious choice? What's really going on phenomenologically in that moment right before the click? None of your conscious choices are conscious choices. You're the last to know why you're doing the thing you're about to do, and you're very often misinformed about it. We can set up experiments where you'll reliably do the thing for reasons that you, when you're forced to articulate them, are completely wrong about. Absolutely. And even moreover, people, again, when they're about to click on something, don't realize there's a thousand people on the other side of the screen whose job it was, was to get you to click on that. Because that's what Facebook
Starting point is 00:42:19 and Snapchat and YouTube are all for. So it's not even a neutral moment. Do you think that fact alone would change people's behavior if you could make that transparent? It just seems it would be instructive for most people to see the full stream of causes that engineered that moment for them. Well, one thing, I've got some friends in San Francisco who are talking about this, that people don't realize, and especially when you start applying some kind of normativity and saying, you know, the newsfeed's really not good. We need to rank it a different way. And they say, whoa, whoa, whoa, whoa, whoa, who are you to say what's good for people? And I always say this is status quo bias. People are thinking that somehow the current thing we have
Starting point is 00:42:57 is set up to be best for people. It's not. It's best for engagement. If you were to give it a name, if Google has page rank, Facebook is engagement rank. Now let's say, let's take it all the way Yes. the variables so that whatever, let's show people the things that will addict them the most. Or we have outrage rank, which will show you the things that will outrage you the most. Or we have NPR rank, which actually shows you the most boring, long comment threads where you have like these, you know, long, in-depth conversations that your whole newsfeed is these long, deep, threaded conversations. Or you could have the Bill O'Reilly mode where you get these, as something I know you care about, these sort of attack dog style comment threads where people are yelling at each other. You can imagine that the news feed could be ranked in any one of these ways. Actually, this form of choice is already implemented on Flickr where you, when you look for images, you can choose relevant or interesting or so you could have that same dropdown menu for any of these other media. And this is your point of people don't see transparently what the goals of the designers who put that choice in front of you are.
Starting point is 00:44:11 Right. So the first thing would be to reveal that there is a goal. It's not a neutral product. It's not just something for you to use. You can obviously, with enough effort, use Facebook for all sorts of things. But the point is the default sort of compass or North Star on the GPS that is Facebook of steering your life is not steering your life towards, hey, help me have the dinner party that I want to have, or help me get together with my
Starting point is 00:44:36 friends on Tuesday, or help me make sure I'm not feeling lonely on a Tuesday night. There is, it seems to me, a necessary kind of paternalism here that we just have to accept because it seems true that we're living in a world where no one or virtually no one would consciously choose the outrage tab.
Starting point is 00:44:57 Right. Like, basically, I want to be as outraged as possible today. Show me everything in my news feed that's going to piss me off. Nor the addiction tab. Yeah.
Starting point is 00:45:04 Nor the superficial uses of attention tab, you know, just... Cat videos. Yeah. Or it's just give me the Kardashians all day long and I'll regret it later. So no one would choose that. And yet we are effectively choosing that by virtue of what proves to be clickable in the attention economy. In service of the greater goal of advertising. Again, that goal wasn't a accident. In fact, it's a pleasing consequence. only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad free and relies entirely on listener support.
Starting point is 00:45:50 And you can subscribe now at SamHarris.org.
