Your Undivided Attention - Do You Want to Become a Vampire? — with L.A. Paul

Episode Date: August 12, 2021

How do we decide whether to undergo a transformative experience when we don’t know how that experience will change us? This is the central question explored by Yale philosopher and cognitive scientist L.A. Paul. Paul uses the prospect of becoming a vampire to illustrate the conundrum: let's say Dracula offers you the chance to become a vampire. You might be confident you'll love it, but you also know you'll become a different person with different preferences. Whose preferences do you prioritize: yours now, or yours after becoming a vampire? Similarly, whose preferences do we prioritize when deciding how to engage with technology and social media: ours now, or ours after becoming users — to the point of potentially becoming attention-seeking vampires? In this episode with L.A. Paul, we're raising the stakes of the social media conversation — from technology that steers our time and attention, to technology that fundamentally transforms who we are and what we want. Tune in as Paul, Tristan Harris, and Aza Raskin explore the complexity of transformative experiences, and how to approach their ethical design.

Transcript
Starting point is 00:00:00 Two quick things before we dive in. First, the Center for Humane Technology is hiring. And second, the guest featured in this episode will be joining us for a podcast club on Friday, August 20th. Details of all that are at the end of the episode. And with that, here we go. Imagine that you're touring some castle in Romania and you go down into the dungeon and you're exploring it, and all of a sudden Dracula comes to you and offers you the chance to become one of his own. He says you'll get amazing new sensory powers.
Starting point is 00:00:40 There are some negatives. You become undead and you have to drink blood and sunlight will burn your skin in this horrible way. But all things considered, I think it's worth it. You've got 12 hours to decide; you'll never have another opportunity to do this. That's philosopher and cognitive scientist L.A. Paul. She studies transformative experience.
Starting point is 00:01:06 And to illustrate the nature of what makes a transformative experience, Paul uses the metaphor of becoming a vampire. By definition, you can't know what it's going to be like to become a vampire. This is an irreversible, life-changing decision. And so the question is, how do you even evaluate these options and know how to assign them value in ways that it's going to be meaningful? Like becoming a vampire versus living your life as a human and kind of passing up this chance, how do you evaluate those possibilities and determine your
Starting point is 00:01:34 preferences? How do we determine our preferences about a transformative experience when that transformative experience will change our preferences? Whose preferences should we prioritize? The person before being transformed or the person after? The only way persuasion can be ethical is when the goals of the persuader align with the goals of the persuadie. But the plot thickens when the persuader is transforming us into someone who ends up wanting the thing that we were persuaded into. What if the persuader is a vampire, turning us into someone who enjoys feeding on human blood?
Starting point is 00:02:18 What if social media is transforming us into attention-seeking vampires who feed on the attention of other people? Technology isn't just persuading us or nudging our choices. It's fundamentally transforming who we are and what we want, and that has enormous ethical implications for design. Today with L.A. Paul, we're raising the stakes of the social media conversation from technology that steers our time and attention to technology that fundamentally transforms who we are.
Starting point is 00:02:55 What if when you joined Reddit, it asked, do you want to become an online troll? What if when you joined TikTok, it asked, do you want to become an exhibitionist? What if by joining social media, we were being asked the question posed by L.A. Paul? Do you want to become a vampire? I'm Tristan Harris. And I'm Azaraskin. And this is your undivided attention, the podcast from the Center for Humane Technology. In L.A. Paul's vampire story, as it turns out, most of your friends and family members have
Starting point is 00:03:33 already become vampires. And they say they love being vampires. And even though there's no real way for you to understand what it's like, you should definitely become one. So you could decide whether to become a vampire based on the testimony of your vampire friends and family. But I think there's another question in which immediately pops up, which is on what basis do you rely on that testimony because it sure seems like becoming a vampire might work on you in a certain way such that you really love being a vampire once you've become one. So it doesn't really matter what you thought as a human. I'd like to draw an analogy to becoming a parent. Even someone who doesn't want to have children once they have a child, at least ordinarily they attach very strong
Starting point is 00:04:13 to that child and they're usually very happy they had that child. Something about you changes when you have a child such that you love the very child that you've created. And that's a very natural thing. even if before you'd ever had children and you really were strongly against ever having a child. So there's a kind of internal preference change that happens when you have a child. And it could be something like that that happens with becoming a vampire. And if my memory of the research is correct, when you pull people post having a child versus before having a child, almost universally moment-to-moment happiness for parents is lower than those who have not had a child, which doesn't actually say anything about the experiences like, but it's interesting. Yeah, there's a lot of questions about that, but that's right. Actually, one of the interpretations I like about some of that research is that maybe there's a kind of element of suffering in becoming
Starting point is 00:04:58 a parent or being a parent that we still value, but it's not really right to say you're happier as a parent that kind of just isn't the right way to characterize it. But there's a non-obvious answer to the question of should you become a parent if you aren't already convinced that you want to become a parent. And I think the same kind of thing happens with respect to becoming a vampire with a bunch of other potential life changes. So basically the question is if you know that you're faced with a chance to have a new kind of experience and if there isn't a kind of predetermined answer about whether that's something that you should do, it's not kind of legally mandated or socially mandated or morally mandated. In other words, a kind of self-interested choice. But you don't know
Starting point is 00:05:37 what that experience would be like and you don't know whether the self that you would become is a self that you now want to be, then on what basis do you make that choice? The first problem is that we haven't got the information we need to make the choice in the kind of informed way that we ordinarily want to. Like when I go to the store and I'm going to buy ice cream, I know I like Ben and Jerry's New York Superfunds junk better than dairy-free vanilla. I like vanilla ice cream, but not dairy-free. So I'm not going to go for the dairy-free option ever.
Starting point is 00:06:07 My preferences are defined. But if I don't have the relevant information I need to be able to know what these different life experiences are like, I can't have defined preferences in the way that I want to, so I can't make the choice in the ordinary way. And added to this, the self that's going to emerge from this process could have quite different preferences from the self that I am now. Which self matters? Is it the self now that's making the choice that should be running the show? Or is it the self that I would become? I don't think there's a kind of principled decision rule that we have for these kinds of contexts. And this adds to the complexity when we're thinking about real life
Starting point is 00:06:41 decision-making with respect to transformation because it's not like I think there is an obvious right answer. There's a distinctive way that someone changes when someone is faced with a new kind of experience. When it's familiar experiences, you go for a run every day or whatever you know, like it's something that you're planning ahead or forecasting or prospectively assessing in a way where you know the kind of experience you're going to have, at least more or less, at least you can predict with some kind of certainty. Obviously, the surprises can happen. But you have a kind of information when you're doing this that's not available to you when we're talking about having a new kind of experience. And so one extra moving part is that it's new experiences in particular.
Starting point is 00:07:16 It can be new to society or new to the individual. And then when you get this new kind of experience that carries a new kind of information that then also has this impact with new kinds of technology and certain kinds of social media designed to be persuasive, where it feels like it's just coming from you really important, right, because it's built into the program that you perform certain kinds of activities that then feel like a very natural progression of your own preference change. I think the real issue is to be clearheaded about what we're doing here, to know what we know, and in particular what we don't know. And that's always a trap. We always like to think we know more than we actually do. You think, I have control over how I think about the world. I have control over what I believe and what my preferences are.
Starting point is 00:08:01 And you give me the information, I think very carefully, and I do things. But this is in some sense really quite a naive picture. and partly because there are these sub-personal processes that happen that get exploited with some of this technology involving certain kinds of cognitive processing and attention. Sometimes it's the brain acting as opposed to kind of our explicit choice. But also there's a way in which an experience can just come from the outside and influence us in ways that we don't expect that have nothing to do with our beliefs. And that's what's really hard for people to grasp because you feel like you have a kind of control. You actually don't have both at the level of the conscious with respect to it's not just about your beliefs, but experience comes in and causes you to change, you walk out and you have a devastating accident. An experience caused you to change, and it wasn't about just thinking about your beliefs.
Starting point is 00:08:45 And then also from below, there's this implicit way in which the stimuli are changing how you think. And so these are two different kinds of causal processes, both of which don't happen at the level where you sit there and theorize and think about how you want to act and behave and care. And in the case of technology, both of those things are happening simultaneously. it is both the case that I am choosing to engage with Facebook or Instagram and use it in some way. And I do not get to choose that the entire world is using Facebook and Instagram and that's modifying my environment or that I'm forced to use my cell phone to interact with the people I care most about. And so it's interesting that we have these two different types of transformative
Starting point is 00:09:22 changes happening overlapping at the same time. In really interesting context, the way in which the experience works on you changes the structure. of who you are such that after you've been changed, you don't actually regret eating all those French fries or drinking all that one. So consider the process of professional development where you undergo extensive expertise training. And so the kind of person that you are once you've exited 10 years of intensive, let's say, medical training and residency, is really just different from the person that you were when you started. Strictly speaking, you're the same person, but you're different kind of person. I think of that as being a different self that realizes
Starting point is 00:10:01 is who you are, and your preferences are going to be different. Like, let's say you weren't sure if you wanted to become a doctor early on, because you know there's long, hard hours, and you find blood a little bit stressful, and it could impact your ability to have a family. At the end of that process, though, you're like, oh, I thought those were all bad things, but now I'm just so committed. My preferences are such that I'm really wedded to who I am now. And this is actually, in some sense, perfectly rational.
Starting point is 00:10:22 Who you are has changed in terms of your preference structure is actually changed as the result of those experiences. And I think that's another kind of transformative change that we need to think about as a society and also as individuals that in a certain way, by creating these technologies without actually thinking carefully about how they're affecting us, the end result could be quite significantly different. We would change who we are in a permanent sense and in a way lose track then of the values that we once had. Is that the kind of society that we want to have? That's an open question, which I think we should consider. Right now I want to say it's bad. It probably is bad.
Starting point is 00:10:56 But the real question is, how do you understand the new preferences and the new values that we're going to emerge with after this process? And in any case, we should be clearheaded about what we're doing as opposed to just getting there without even thinking carefully about the result that we're going to end up with. If I think about just grounding this in a few tech examples, Facebook like button co-inventor Justin Rosenstein didn't know that the like button would transform the relationship that we have to social validation, that we'd be addicted to checking likes, that people would determine their self-worth in terms of that. LinkedIn didn't know it would transform the meaning of being a professional into suddenly needing to broadcast things in a feed or control your reputation on a profile. Instagram didn't know that it would transform the values, culture, and aspirations of an entire generation into being influencers, along with YouTube channels. In each of these cases, we're transforming the basis, again, not just of the individual who's
Starting point is 00:11:46 becoming a kind of vampire, but I think the society at large is another patient, another person who's about to go through a kind of vampire-like change. in my own work on the topic of persuasive technology I flashed back to 2013-2014 when I had conversations with my friends at Google on the Google busts and I would talk about this social media thing you know there's really a problem here this is really not good for people it's warping their view of reality it's distracting them it's amusing ourselves to death in the classic Neil Postman sentence and the most common thing that I would hear over and over and over again was who are you to say what's good for people? If they're happy and they're happy
Starting point is 00:12:26 using Facebook, we're just a neutral tool or platform. If they're happy doing it, who's to say that they shouldn't be doing this or that we should be doing something different or we should tweak the newsfeed? Because they're enjoying it. One mistake we know we could make is claiming that we're neutral and we're not actually transforming any of those patients into vampires. You could point to a number of things that make technology either transformative or not. You could say, well, if it's just modifying their behavior in 1% of a way, that's not really a transformational experience. But as Jaron Lanier says in the film, the Social Dilemma, if I can change the world by 1%, by just 1% of its beliefs, it just tilt the world a little bit. That's the whole world. You can transform
Starting point is 00:13:03 the entire world by 1%. That's actually a big change. Think about climate change. You're only changing the temperature by 1.5, 2 degrees, and you end up devastating ecosystems getting a 6-math extinction event, and you have these massive changes. But also, just by what we're talking about before, where each time the world evolves forward, so you make one change, well, then that changes the probabilities for change later on. So just saying, well, I'm just making a small change
Starting point is 00:13:27 is actually meaningless because what matters is how much of a change you make over a stretch of time. It reminds me of the most powerful force in the universe is compound interest. I think Benjamin Franklin had that quote of money makes money and the money that makes money makes money. and this is true of change.
Starting point is 00:13:43 Change makes change, and the change that makes change makes change, and it compounds. And again, I think this is especially problematic when we're talking about new kinds of experience or new kinds of technology or new kinds of techniques, because the new stuff is the hardest stuff because you just don't know how the change is going to iterate out. You might think I can engage in what's called pre-commitment. What's pre-commitment? Pre-commitment is kind of like the thing you do when you realize, oh, God, give me some New York superfudge chunk, and I'm just helpless in this.
Starting point is 00:14:12 face of it, right? So when I go to the store, I don't even go to the ice cream section because if I buy it and take it home, it doesn't even make it into the freezer. I really need to keep myself under control. So I pre-commit. I don't go to the freezer section with some of this technology. I might think, oh, yeah, we could just do that. If I know enough, I can just set up some kind of things, so I'm not influenced by these bad sorts of technologies. Or now that I know about it, I won't be influenced that way. The problem is, is it just doesn't work that way. It's more like of a situation where you find it incredibly appealing. You think you have control. over what's going on, but that's actually part of what the insanity involves, right? Part of the picture
Starting point is 00:14:46 is that this beautiful experience is both corrupting you mentally in some sense, like it's changing what you prefer and how you think. And also, you're not able to recognize that that's what's happening. So you can't self-bind or pre-commit in the way that we ordinarily can, like by avoiding ice cream. This seductive nature of this experience, because of these implicit effects, it's not as obvious as feeling yourself go insane. And so you're in a kind of intellectual bind. Now there's this urgent thing that has to happen, which is we have to take all of these philosophical conundrums, and we actually have to implement whatever best philosophy we have, at least as tools of thinking, to, as you said, sort of at least be incorporating that model
Starting point is 00:15:29 versus not even recognize the situation that we're in. What should we be paying attention to when we are transforming the basis of who we are as individuals and as societies? I mean, the first thing is. to acknowledge responsibility and acknowledge the complexity of these different kinds of actions and also how they're related. It's not a bad thing to have responsibility. I can see how people might not want to think about that when they're building companies or working for a company. I mean, it's understandable and people have to make a living and there's great financial gain to be had. But that doesn't mean that there shouldn't be some kind of clear-eyed assessment and at least a commitment to exploring the philosophical questions involved. That I think would be a minimum rather than denying that these questions are right. And then the next question is which actions are good versus bad. But then there's this further question about who decides that.
Starting point is 00:16:21 And so I would really like to see tech companies first, honestly, just really identify all of these different structural facts about action, both collective and individual, all these relationships and how they matter, how we're making choices, both at individual levels and high levels, and also how our preferences can be influenced and changed and also how this can be implicit and we can do this without even knowing that we're doing it. But then also, once that is identified, then there's this question of, well, if you really think that people should have control over how they're being influenced, there should be, I think, some process where actually people get to decide, or at least the process is transparent. There's a kind of transparency
Starting point is 00:17:00 question here as well. And transparency comes not just from telling people what you're doing, but also from being very clear about what these structures are and what's really going on. when you talk about persuasive technology, it's one thing to persuade somebody by laying out your reasons and arguing with them or having a full-blown open discussion. It's another thing to persuade them through deceit, which is a kind of nefarious thing. And then there's another kind of persuasion involving nudges and stuff like that. And there are really interesting questions here about like when is nudging okay and when it isn't. And I think nudging can be okay as long as it's done in a transparent fashion where there's open
Starting point is 00:17:35 discussion. But when nudging is not done transparently when it's implicit or you're exploiting certain kinds of facts about the way that our brain physically works as opposed to when we're consciously reflecting on possibilities, that's when I think we start to get into problematic moral territory. When you go to have some procedure done and your doctor says, this procedure is controversial or it has terrible side effects, I'm going to suggest it, but I need to have your informed consent. Well, what's necessary for truly informed consent? That's the first problem. Let's say it's possible to get truly informed consent. That's great. Then there's an obligation
Starting point is 00:18:07 to provide that. But let's say it's actually not really possible. I don't really understand the statistics well enough to know what's going on. Well, then there's the question of like, do I just rely on the expert, on the doctor to tell me what to do? And so am I in some sense consenting an informed way because I say, okay, you is the expert. I trust you to make the best decision for me. That's another way you could get informed consent. It requires trust for that to happen. A further problem, though, is when there's this fake informed consent where someone says, well, I know this is going to be right for you. There's all this testimony. I'm going to go back to transformation. There's all this testimony from people who have undergone this procedure that
Starting point is 00:18:45 they're super happy that they've become vampires. So Dracula, I'm going to give my informed consent for you to turn me into a vampire because I'm relying on all this testimony. And you as the expert, obviously can tell me all about what it's like to be a vampire. So I put myself in the hands of Dracula. But I'm not actually consenting in an informed way because that testimony, is corrupt. Why is it corrupt? Because becoming a vampire makes me happy to be a vampire. And so all these vampires that are testifying to how satisfied they are with that procedure, well, their testimony isn't reliable because there's a kind of endogenous change that's involved. And that's not real informed consent. So I guess there has to be some kind of informed consent involved, but I don't
Starting point is 00:19:23 know how simple it would be to implement that kind of informed consent or if it's even possible. In academia, if you want to run a psych study asking questionnaires of 20 undergraduates, you have to go through a pretty rigorous process, an IRB review board. But if you want to, as a Facebook engineer, or Twitter engineer, or TikTok engineer, completely change the kinds of people we interact with, which posts from which friends, what emotional availance, which news we read, there is no process that you have to go through. And it's sort of mind-boggling that we don't have a kind of FDA process. We have environmental impact reports. We don't have social impact reports. And the scale of the changes that an individual engineer or designer can now make are, in fact, much bigger than the equivalent of the civil engineer or the psychologist running an experiment at a university. And so we need a proportional scale amount of responsibility, sort of in your terms, Laurie, the upfront commitments to understanding and demonstrating understanding.
Starting point is 00:20:27 of the complexity and scale of impact. I agree, because even if one could argue that everyone who's using Facebook has given informed consent when they agree to whatever waiver it is you agree to in order to participate in the platform, there's a sense in which we're being used as experimental subjects. And there are two ways. One is there's no way you can give informed consent if you don't know what experiments are going to be performed on you ahead of time.
Starting point is 00:20:50 That is not informed consent. And I don't think it's right to ask people to consent to be manipulated, which is the only way I can interpret this. And then the other thing is that when we're being manipulated, there isn't a clear sense of how it's going to impact us, which is part of what you're saying as well. So it relates to questions about eugenics and the ways that people were treated by doctors and other kinds of researchers in the 20th century and before where the thought was, well, the experts have the best interests of all the subjects in mind or all the patients in mind. So they should be allowed to do what they want to do. Or they're doing it in the name of science or in the name of
Starting point is 00:21:24 medicine. That is an unacceptable argument. There's no justification. for using people as subjects, even if you can't get their explicit consent for whatever particular change you want to make, somebody else needs to be vetting it, even if there were principled reasons. And I think there's an obligation to do that kind of vetting, to do that kind of modeling before using people in this way. One of the problem we have here is you've all who already said earlier on our podcast as a philosophical crisis on multiple counts more than one because people actually all already conceive of themselves as not being influenced or manipulated or transformed. And so how can you actually have,
Starting point is 00:21:57 consent when people don't believe that the software they're about to use will come to dominate and change the basis of their preferences. This is making me think about Tristan. Uval's prompt to us in a previous episode where he was asking, okay, it's very easy to look at technology and the future and see dystopia. There's a much harder task of trying to imagine what a utopia would look like in a world where technology knows us better than we know ourselves. We've collected enough data such that your refrigerator is like, actually, I know the perfect life partner for you. And your refrigerator is not wrong. And your toaster is like, I know what profession you should go into because I've analyzed the way you slather your toast. And it's not
Starting point is 00:22:37 wrong. Something still feels very unsettling about that, although it's very hard to pinpoint why. And, you know, in a world where technology sort of can figure out the best next steps for us, and it's optimizing us for something, what is that something that it should optimize us for? And what basis do you choose it on? Tristan, I think, had a really interesting answer to that, which is, well, it's hard to know what exactly is good and what exactly is bad, but we know what's better and what's worse. Perhaps we should be optimized for a sense of lifelong development. Just like there are childhood developmental stages, some of which are higher than others, imagine we extended that for the rest of life, and each person becomes more identical,
Starting point is 00:23:15 more aware of what they're unaware of, more aware of the externalities of their actions, of how other people act. And that certainly seems like it would be a, better world than the one we're in now. I'm really curious how you'd react to that. Like, how do we think about what is the basis upon which decisions would get made for us at that scale when technology could in fact know us better than we know ourselves? So, I mean, the first question is, does it really know us better than ourselves? So I'm just going to go back to another example, which is, again, you don't want to become
Starting point is 00:23:43 a parent, but your mother tells you she knows you better than you know yourself and you'd be so happy. And she's right. You have a kid and you're so happy. but is that because she knew you better than you know yourself or because the process of having a kid transforms you into someone who loves having a kid? Those are two different kinds of processes. So one issue with some of these products and some of these ideas is that they're changing us in virtue of the interactions that we're having with them. And there's a way in which
Starting point is 00:24:12 there's a kind of endogenous change involved so that maybe it's no surprise that I'm super happy with the person I end up with my refrigerator chose. My refrigerator designs my Tinder profile in some way so as to exploit the algorithm the right way so that I meet the perfect match. But what's the perfect match? Maybe what happens is I meet somebody who interacts with me in such a way that I change my preferences in ways that I hadn't expected this person has interest that I didn't have or whatever. I'm super satisfied with who I end up with, but there could have been a different algorithm implemented and the same thing would have happened. It's just that then I would have had different preferences to match that different person. I'm not sure what I think about that in the
Starting point is 00:24:51 following way. If you don't have anybody in your life, then I'd rather have my fridge successfully choose somebody and have my preferences morphs than I had this kind of satisfying outcome. There's an argument there. But don't fool yourself that somehow it's just this simple thing where the fridge is like God in some sense, able to step back and have a God's eye point of view on the universe and know what's best. Machines and the technology that we're using are not like that. It's not a simple process where we're just making the world better and maximizing our utility given our current set of preferences. It's a completely continuous interaction with constant change on both sides.
Starting point is 00:25:28 And as we were talking about before, it'd be really easy to go in a bad direction if we're not aware that this is the structure that's happening and we're not able to assess the outcomes that we're driving ourselves towards. There's a famous Zen story. It's called Maybe. The story is of an old farmer who had worked his crops for many years. and one day his horse runs away. Upon hearing the news, his neighbors came for a visit,
Starting point is 00:25:53 and they tell him such bad luck, they say empathetically, and the farmer replies, maybe. The next morning, the horse returned bringing with it three other wild horses. How wonderful the neighbors exclaimed. The old man replied, maybe. The following day, his son tried to ride one of the untamed horses and was thrown and broke his leg. The neighbors again came to offer their sympathy on this horrible misfortune,
Starting point is 00:26:16 and the farmer replied, maybe? The day after, the military officials came to the village to draft young men into the army, and seeing that the son's leg was broken, they passed him by. The neighbors congratulated the farmer on how well things had turned out. And he replied, maybe. And the point of that the truth of a statement is always ongoing and reinterpreted based on the next phase, the next one percent change, the next set of things that unfolds from the last
Starting point is 00:26:45 change and the unknowability of complex systems and chaos is kind of what we're circling around here. Another one of your examples that I think is related, which is the kind of transformation that diminishes people's capacities for self-awareness. Dementia or cognitive decline, I actually had a family member who went through that process, and it's a very challenging thing because there's actually a part of that person who is enjoying their life maybe in a more ignorance-as-blest kind of way. And yet they're not really aware of the erosion of their own capacities as it's happening. And I actually think there's a very close parallel to what's happening with technology where there is a derangement, there is a diminishment of our capacities. But one in where we're not
Starting point is 00:27:27 fully aware of those consequences, even in that TikTok case of the 10-year-old who doesn't know what they're signing up for over the next four years, five years of how they're going to be transformed. Yeah. Yeah. So there's one way in which it's obviously bad this cognitive decline or change. But then I think there's an open question. It's not necessarily bad. And we have to have a way of carefully assessing that and making a determination. So here's something that seems obviously bad. I say, well, I could have a frontal lobotomy. I mean, I have a lot of stress going on right now. COVID is terrible. We've got a lot of work going on. If I just had a frontal lobotomy, life would be beautiful. And I could just make sure that I had enough money in the bank to make sure I had,
Starting point is 00:28:03 you know, New York Superfund chunk every single day. I'd be pleased as punch and a beautiful view to look out on. and why wouldn't I do that? Now, I wouldn't be able to do philosophy anymore, I assume. But the new you wouldn't want to do philosophy because you'd be perfectly happy in your bubble. I would have no desire at all to do philosophy. And why would I want to worry about those intractable questions? And it's very stressful to try to think about these things that don't have clear answers, right? No, have some ice cream and look out the window. So this goes back to like the question about informed consent and corrupted testimony, right? Like I would testify to you enthusiastically, I'm sure, with great commitment about how happy I was in my new state.
Starting point is 00:28:40 Is it a good idea for me to do that? Well, right now, I certainly think, no, it's not a good idea. But there are other more complicated cases where maybe it's not obvious. So becoming a parent, I think becoming a parent often really just creates a certain kind of impairment. I love my children in a fundamental way more than I love myself. That means I make lots of destructive choices. I spend money, way more money than I would.
Starting point is 00:29:06 all kinds of things. I don't get enough sleep. I don't do as much work as I would otherwise do. I don't spend as much time with my friends. Yes, I get lots of joy out of being with my children, although there's quite a bit of suffering as well, as anyone who's a parent knows. There's a sense in which taking the mommy drug reduces me. I'm just going to be frank. I think it does. And that's something that I'm glad of. I now willingly trade that. But there's a real reason why I have a lot of respect for anyone who doesn't want to become a mother because I think you give up a huge amount. And it's not clear why it's rational to give that up, honestly. So with the frontal lobotomy case, I think I can step back and say, well, it's just
Starting point is 00:29:44 worse to have a frontal lobotomy. But that's me talking now without the frontal lobotomy. Why do I prioritize the self that has the cognitive capacities I have over this intellectually reduced self who might be incredibly happy and satisfied with their life? I don't think there's a principle of distinction that we can make. There's just no straightforward way to determine which way is better. I think one of the big pivots that has to happen in the ethical discussion here. So far we've been trying to adjudicate this is, is the transformed agent happier, better off,
Starting point is 00:30:12 or well, or not from their perspective? And how can we locate an authoritative position to stand from that can look at all these factors and say, well, this is actually genuinely good? Well, one way is if you do the cognitive decline example or the dimension example, is if we inadvertently reduce the capacity of civilization so that it couldn't keep going anymore. If we took away the capacity for the game to continue to be played in the James Carson's finite and infinite game sense, if the game can no longer continue, we know that we're making transformative changes that are unsustainable. Back in the 19, early 2000s, I would go to
Starting point is 00:30:45 some climate change debates and he would hear these conversations about, well, the Earth's going to get warmer and maybe there's going to be some good things coming with that. Some people like the warmer weather. And you can always talk about the Faustian bargain. There's many positive things that are going to come from climate change. There's going to be all sorts of unintended consequences. but if you actually change the entire thing so that life itself can't continue because you just eradicated a huge chunk of the web of life and now the pollinators don't exist, we need to look at it as nature as a system or human society or civilization if we care about that as a system and say what are the conditions that would allow that continue? Because otherwise we get trapped
Starting point is 00:31:17 in the kind of Zen maybe story. I think there's something really interesting here, which is like what is responsibility in some sense responsibility is being aware of the externalities of your actions and then seeking or being beholden to reduce the negative externalities. And if you reduce the capacity to seek out or see those negative externalities, we can say at sort of the global scale that that is demonstrably worse. That is a poor transformation because it limits the sustainability of the system as a whole. Maybe you can't know what outcome you're getting to, but you can know that you're doing something that's transformative.
Starting point is 00:31:53 And you can know, for example, that certain types of transformations lead to good results, whatever they are, and other types lead to bad results. And so your obligation is to choose the right type of transformation, even if you don't know the details. Even if you don't know the details, then there's a principled reason for you to not strike that match and drop it into the pool of oil or whatever. If you know it's going to create a huge fire, that's just the wrong thing to do. Even if you don't know how bad the fire is going to be or what it's going to burn or anything like that, you have enough information to not do it. One of our mentors in our community talks about Omni-considerate choices that to have theory
Starting point is 00:32:28 of mind is to be able to consider, if I make a recommendations, you trust it more if the person says, well, I recommend this for you, but not for this person and that person, and here's why. And the more distinctions that they make, the more omni-considerate they're demonstrating they have the capacity to be, meaning they're considering more balance sheets, more people who are impacted, short and long-term versus just short-term. The more demonstrations of consideration, the more trust you might have in that system. That doesn't mean, though, that that system can be godlike and see everything, because in a chaotic and complex world, you can't know all things.
Starting point is 00:32:59 But certainly one in which someone is considering more than someone who's considering less, I think we'd probably opt for the agent who's considering more in the transformative relationships they have. But again, I want to say, like, sometimes there's just no answer. You ask somebody, would you rather be a pianist or a top engineer for Google or a politician? And let's just say each of those careers would be satisfying in different ways. There's a sense in which you just can't compare them. One thing I'm just taking away so far right now is an incredible humility for what we might
Starting point is 00:33:31 be doing as we interact in the world at all. To be self-aware of we are Zeus when we're making decisions. And we can be scorching parts of earth with every little micro step and change and attention shift that we make. There is a problem, I think, when there's a kind of persuasive tech where it's clear that the motivation for the persuasion is not in the best interest of the individual, but rather for profit. That's a problem. But when there's a different argument, which I think can be made, that these changes in individuals and society, in some sense, there's something really interesting about them, right? Like, I love my iPhone.
Starting point is 00:34:03 I love using the apps. I love texting. Facebook and Twitter, all these things, they're used for good in lots of cases. So if we just focus on that, then how do we compare the life pre-technology to post-technology? One of the reasons why I think there is a kind of philosophical crisis here is that it's not like there's an easy answer even here. I want to emphasize, again, I think the real, the real failure on the Google Bus is not necessarily that they went ahead and did what they were doing, but rather that there was a kind of refusal to really think about this in full brutality.
Starting point is 00:34:43 L.A. Paul is a professor of philosophy and cognitive science at Hale University. Her research explores questions about, among other things, the nature of the self, decision-making and essence. She's the author of the 2014 book, Transformative Experience. And L.A. will be joining our podcast club for a discussion and Q&A on Friday, August 20th. You can find details at humanetech.com. Another exciting thing you can find at humanetech.com is that we're hiring. The Center for Humane Technology is hiring for two full-time roles, director of mobilization and a digital manager. And we're really especially excited about candidates who are truly aligned with our mission, which is why we wanted to share them here with you. For the Director of Mobilization,
Starting point is 00:35:27 we're looking for experienced community builders who can help us support the responsible and humane technology ecosystem. And for the digital manager, we're looking for a skilled communicator who can lead the execution of all of our digital strategy in a way that respects our audience's attention. And if that's you or someone you know, please visit humanetech.com slash careers. Your undivided attention is produced by the Center for Humane Technology, a non-profit organization working to catalyze a humane future. Our executive producer is Stephanie Lepp. Our senior producer is Natalie Jones.
Starting point is 00:36:00 And our associate producer is Nur al-Samurai. Dan Kedmi is our editor at large. Original music and sound design by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and more athumaneTech.com. And a very special thanks goes to our generous lead supporters, including the Omidyar Network, Craig Newmark Philanthropies, and the Evolve Foundation, among many others.
Starting point is 00:36:27 I'm Tristan Harris, and if you made it all the way here, let me just give one more thank you to you for giving us your undivided attention.
