Making Sense with Sam Harris - #155 — Mental Models

Episode Date: April 29, 2019

Sam Harris speaks with Shane Parrish about some of the mental models that should guide our thinking and behavior. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Welcome to the Making Sense Podcast. This is Sam Harris. As I mentioned in the last housekeeping, there is a subscription policy change happening on the podcast, and this will be going into effect on Wednesday, May 1st. So in order to have access to subscriber-only content on my website, you will need an active monthly subscription. This means that those of
Starting point is 00:01:14 you who never subscribed monthly, or whose subscriptions have lapsed, and this includes anyone who used to support the podcast through Patreon, will need to start a monthly subscription at SamHarris.org. Now, as always, if you cannot afford to support the podcast, you know I don't want money to be the reason why you can't get access to my content. So if you really can't afford a monthly subscription, you need only email us at info at Samharris.org, and we will open a free account for you. But you will need to either subscribe or send us that email in order to get behind the paywall going forward. And again, that change starts May 1st. That includes access to the live town hall, the Ask Me Anything episode of the podcast that's coming up on May 9th.
Starting point is 00:02:07 That's in Los Angeles, and that will be videotaped and streamed live on my website at 8 o'clock Pacific time. And this will be interesting. This is an experiment, and if it works, we may do all of our AMA episodes this way. We will see what value is added with a live audience. Anyway, those tickets sold out, I think, in 20 minutes. So that's great. I look forward to meeting you all, and we should have fun. Again, the video will be streamed live on the website, and a final cut will be posted there. So if you're in some time zone totally
Starting point is 00:02:47 out of sync with Los Angeles, you need not worry. The episode will be available to you as well. As always, reviews of the podcast on iTunes are very helpful, and this is also true for the Waking Up app. Please keep those reviews coming. Those affect our visibility in the App Store and therefore help determine how many people are made aware of the app. And again, I gotta say, releasing this app has been extremely gratifying. Honestly, it's the one thing I've done where there is no distance between my intentions and the apparent effect of what I have produced out in the world. As you know, the app is continually under development and only getting better with your input, so all your feedback is much appreciated. And as you know, our policy for subscription
Starting point is 00:03:39 to the app is also the same. If you actually can't afford it, just send an email to info at wakingup.com, and we will give you a free year on the app. And if your luck hasn't changed at the end of that year, send us another email. I believe there are some seats left for my event at the Wiltern in Los Angeles with Mingyur Rinpoche in July. You can find out about that on my website at samharris.org/events. That is the first event associated with the Waking Up app, and that event is being co-sponsored by UCLA's Mindful Awareness Research Center. And now for today's podcast. Today I'm speaking with Shane Parrish. Shane is a blogger and podcaster. His website is Farnam Street, at fs.blog, and his podcast is The Knowledge Project.
Starting point is 00:04:38 Many of you know him, I believe. There was recently a profile in the New York Times about him that brought him into greater prominence. He has a background in computer science, and he worked for many years in the Canadian equivalent of the NSA. In fact, he briefly worked with the NSA as well. But now he is a full-time digital media person, and he spent a lot of time thinking about thinking. And we talk a lot about what he calls mental models. This conversation has a lot in common with the conversation I had with Danny Kahneman about reasoning under uncertainty. But I think you'll find it very different as well. Anyway, without further delay, I bring you Shane Parrish. Shane Parrish. I'm here with Shane Parrish.
Starting point is 00:05:30 Shane, thanks for coming on the podcast. Happy to be here. So we're doing this in your hotel lobby, hence the ambient city vibe. This is a non-studio sound, but it's an experiment. If people can hear us, it has worked. I think we probably share a significant audience, and many people know who you are. But you run the Farnam Street blog, and you have your own podcast, The Knowledge Project. We've interviewed some of the same people, so we have many interests in common.
Starting point is 00:06:02 But there was a great New York Times profile on you, which I think brought you to the attention of many people. So let's just jump into a kind of history of your background, because you came into this from an interesting angle. You started in, was it cybersecurity specifically that is your background? Is it computer science and cybersecurity? Yeah. So I started work August 28th, 2001, for an intelligence agency. And then September 11 happened two weeks later. And I worked in, I guess you could say, cybersecurity in one way or another for, I guess, 15 years. Is that something you can talk about? Or are you bound by laws of Canadian espionage that will make that part a very short conversation? We can't talk about it too much in terms of specifics.
Starting point is 00:06:49 I think we can talk about general things around cybersecurity or maybe privacy issues. But yeah, it's not something... I think there's a lot of stuff out there now with Snowden and everything. So I think people have a fairly good insight into what goes on inside intelligence agencies. So you were in computer science and got into cybersecurity right, like, two weeks before September 11th. So the landscape completely changed. Oh, yeah. Your job description completely changed. Well, we didn't even have a sign on the building as of August 28th.
Starting point is 00:07:22 And by Christmas that year, we actually had a sign that we existed. But we've existed since the forties. So just to contextualize for people, I worked for the Canadian version of the NSA. Right. And it just, it was a really amazing time to be working there. I mean, it was unfortunate, the events that sort of led to our increased visibility and mandates. But with that said, we went from, I don't know, 500 people to 2,000 or so when I left. Right. A lot of growth, a lot of expectations. You know, I ended up doing a job that I wasn't really hired to do,
Starting point is 00:07:56 but love doing. And it was a good way to sort of give back to Canada, the country I was born in. My parents were in the military, so we lived coast to coast. I ended up working in the States at the NSA for a short time. And then most of my other time has been in Ottawa. Right. So what's the connection to Wall Street? Because this could have been an artifact of what the New York Times did to you, but there seemed to be a real emphasis on how popular your blog and podcast are among the financial types.
Starting point is 00:08:27 It's really strange. We have three main audiences for our blog and podcast, which are Wall Street, Silicon Valley, and professional sports. And the way that it started was I took some time to go back to school, I think around 2008, 2009, to do an MBA, and quickly realized that I wasn't going to learn what I was trying to learn from my MBA. I wanted to learn how to make better decisions, because I was doing operations and I was making decisions that impacted people and countries. And I felt like there was an obligation on my part to get better at making decisions. And there's no sort of single skill that is making decisions better. It's a whole bunch of sub-skills that you have to learn and apply. So I went back to school to try to get better at some of that stuff and quickly realized that the
Starting point is 00:09:17 MBA wasn't going to teach me what I needed to know. And so I started a website called 68131.blogger.com, I think. And that's the zip code for Berkshire Hathaway. And the reason that I did that was the site was an homage to Charlie Munger and Warren Buffett, who were actually giving me things that I could think about and put into practice about how to see the world differently, how to make better decisions. And I started just journaling for me. And the reason that we used 68131 was because I didn't think anybody would type it in at the time. It wasn't meant for anybody else's consumption.
Starting point is 00:09:49 It's more like a personal online notepad for my own edification and connecting ideas. And then it just, I don't know, it took off from there. It wasn't anything conscious. We didn't try to reach Wall Street. It was anonymous, too. It didn't have my name on it because I was working for an intelligence agency, and they wouldn't sort of let me put my name on it. You took time off of doing intelligence to get an MBA
Starting point is 00:10:13 with the intention of going back to intelligence, being better equipped to make decisions, or were you getting out of intelligence at that point? I did full-time MBA studies and full-time work at the same time. Oh, interesting. So I switched jobs to take a less demanding job in the organization while I did that. And the intent was always to go back and sort of see what options were available. I went back and went into management.
Starting point is 00:10:36 How do you view the current panic around online privacy, and just what is happening to us based on our integration with the internet? I can imagine you have a few thoughts on what we are doing with our data, what's being done with our data, how cavalier we are with these lives of transparency we're leading now. I think it's something that we need to be aware of and make conscious choices around. And I don't think there's a historical precedent where we can look back and sort of use that as a guide, because the environment is changing so quickly. I think one of the big things that are going to dominate over the next 10 to 20 years is online privacy and sort of the question about whether we're going to let foreign companies control parts of our infrastructure. And I think those questions are not necessarily resolvable. We have individual choices about what we do. I mean, you don't want to use Google.
Starting point is 00:11:29 You can use DuckDuckGo, but you also want these valuable services that are being provided. I think we need to come to some sort of understanding, in a transparent way, about what that information we're giving away is. I also think that, if you think about it, one of the questions that I think is relevant is,
Starting point is 00:11:54 do these companies get a cumulative advantage from having this information that prevents competition? Is Google better at search because we use it? And the more we use it, the better they get at search, which means that it's much harder for competition to start. Right. As these algorithms get better and they're trained with more and more data, it becomes harder and harder for the person in the garage to compete, and then you end up having to compete with capital and not necessarily technology. And I think that changes sort of the landscape of what we're seeing in the market today.
Starting point is 00:12:35 So I think maybe it's a case where history has always been the same, where big companies and incumbents tend to get bigger. But I think that it's a little bit different this time in the sense that these companies make a lot of money. They're not necessarily constrained by the number of employees. They have a huge influence over regulatory frameworks. The harder or more regulated they become, almost the more barriers to entry you'll get for competitors as well. Where do you come down on the question of having a foreign company build critical infrastructure? I think that's a great question, right? And I think one of the ways that you can think through that question is, if we were to go back to World War II or something, to what extent would we want another country building our tanks?
Starting point is 00:13:20 Yeah. To what extent do we want to be dependent on another country? Tanks that could be turned off remotely. Right. So to what extent do we want to be dependent on another country, even if we have good relations right now? I think one of the questions we ask is, are we always going to have good relations with these countries? What could go wrong? And again, looking backwards, it's hard to find historical precedents where we can clearly say what could happen. But I think that the variability in outcomes is high, and we're focused maybe on short-term optimization over long-term survival.
Starting point is 00:14:02 This is one of these places where it feels like the market fails us, because in the abstract you can understand why you would want a free market for more or less everything, but it's just so easy to see what could go wrong here. If you have China or some quasi-hostile foreign power, or at least a foreign power that is probably best viewed as a competitor, it's very easy to see how we could really be in an open state of war at some point in the future. There's no other way to look at it. If they were going to put something malicious into the system, they would have the power to turn the lights out. And it doesn't have to be war in a physical sense. It could be trade war, economic war.
Starting point is 00:14:46 I mean, there's lots of different sorts of... Stealing IP, which we know they do with abandon, right? And so one of the ways that we think to address this, and I'm speaking of we as people, not we as my intelligence background, is, okay, well, we'll set up a lab and we'll review your source code, and then we'll verify that it compiles and that the checksums match, and then we'll deploy it into our infrastructure as a means to reduce the risk. And I think that there are problems inherent with that, one of which is logic errors
Starting point is 00:15:15 in computer code are extremely hard to pick up on. But the one that stands out a little bit more to me would be, what if there was a zero-day found? And a zero-day, for people who don't know, is a vulnerability that's not patched. It becomes available. It's found in the code of this infrastructure. So the phrase zero-day means you have zero days to fix this. It's already out there. Right. There's nothing you can really do other than unplug your system to prevent it.
Starting point is 00:15:44 And so they issue a patch. And does that patch go through this long process of code review, or does it get deployed right away? And you can quickly see circumstances where you would be forced into deploying something, even under this regime of labs and reviews, where you would end up with stuff that you would review later. And at that point, it might be too late. And that's not to say that any nation would do that. It's, do you want to be put in a position where you have to think about that? Right.
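The verification scheme Parrish describes, reviewing the source, building it, and deploying only what matches the reviewed artifact, can be sketched in a few lines. This is an illustrative sketch, not any agency's actual process; the function names and the idea of pinning a SHA-256 digest at review time are assumptions for the example:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def ok_to_deploy(artifact_path: str, reviewed_digest: str) -> bool:
    """Deploy only if the built artifact matches the digest recorded at code review."""
    return sha256_of(artifact_path) == reviewed_digest.lower()
```

The tension Parrish points to lives exactly here: an emergency zero-day patch changes the artifact, so its digest no longer matches anything that was reviewed, and the check forces a choice between slow re-review and fast, unvetted deployment.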
Starting point is 00:16:13 So back to finance. It sounds like you were inspired by Berkshire Hathaway, by Warren Buffett and Charlie Munger. Do you have a connection to those guys? Have you met them, or are you just a fan based on reading their stuff? Just a fan. I mean, they're people who've influenced my thinking a lot. The website Farnam Street is named after the street in Omaha where they have their headquarters and Buffett has his house. And I think it's just interesting to me, when I was doing my MBA and I was sort of thinking about this, you sort of learn, you had Daniel Kahneman on recently. Yeah. So you learn these cognitive biases that are great at explaining why we make mistakes, and you have sort of Michael
Starting point is 00:17:00 Porter and his five forces theory of business competition. And I found it really interesting that these two guys in Omaha, Nebraska, or I guess one of them, Charlie Munger, is in Pasadena, California, took that work and made it practical and useful, and used it to make better decisions in the real world over a wide variety of companies and businesses. And I thought it was really interesting. And that's how I really got interested in them and their thinking. Well, it was interesting. That conversation with Danny at one point,
Starting point is 00:17:33 so for those who aren't aware, Daniel Kahneman is one of the fathers of what has become behavioral economics; decision theory and prospect theory were part of that. The work he did with Amos Tversky, for which Danny won the Nobel Prize in economics, revealed how bad we are at reasoning through various decisions. We have heuristics by which we make certain decisions under uncertainty, and many of these heuristics are bad ones. They're not always bad, but they're often bad. And one thing that surprised me in my conversation with Danny is, he's the godfather of this way of debugging human reason, and yet when asked how much he's
Starting point is 00:18:19 internalized this, how much better he is at not falling prey to bad intuitions or making bad decisions or decisions that will, in hindsight, prove to be bad. He claimed more or less to be as bad at this as anyone else, like all of his knowledge hasn't really paid dividends in his practical reasoning. But I get the sense you're not quite in that same boat. How do you view yourself as a decision maker based on everything you've thought about and studied? I think it's really interesting that he said that, and I was going to bring that up, that he basically said, I've studied this my whole life, and I feel like I'm no better at
Starting point is 00:18:58 avoiding these things. And I think what that means is cognitive biases are really great retrospectively at explaining how we go astray. And they're not so great beforehand in terms of avoiding the pitfalls of those things. And the way that I see people try to address it is they create a checklist of, oh, I'm going to write down overconfidence. I'm going to write down sample size bias. And then the problem with that is the more intelligent you are, the better the story you're going to tell yourself about why that doesn't apply in this particular
Starting point is 00:19:34 situation. It's almost like you've made your decision and then you're rationalizing it, but you're going through this checklist. So you're going to create overconfidence in terms of your decision and the range of outcomes. This is a point that Jonathan Haidt and Michael Shermer and other connoisseurs of faulty reasoning have made. Haidt puts it this way: we reason rather often more like lawyers than like people who are actually trying to get at the truth, where we're doing some internal PR, trying to convince ourselves and others why our gut intuitions actually make sense. And the problem is the smarter you are, the better you are at doing that. And on some level, the better you are at fooling yourself. Yeah, it's egos over outcomes,
Starting point is 00:20:19 right? We're trying to protect our ego. And it's not a conscious thing. We're not sort of meta thinking about protecting our ego. We're just unconsciously trying to protect our view of the world and our interpretation of the world as being correct. And we're willing to take a less optimal outcome, in part because we can excuse it away after. Like, who could have seen that happen? And, you know, it becomes really interesting when you start thinking about what are the things that I can do in foresight to make better decisions, one of which we alluded to earlier: there is no meta decision-making skill that you just learn. There's no class on decision making. There's a subset of skills that apply in a particular
Starting point is 00:21:00 situation, and tools. And those are the things that we want to learn, right? Just as there's no meta skill. I think it was Herbert Simon who said there's no meta skill of, sort of, problem solving. What there is, is people who bring particular skills that are relevant, and then they deploy that schema to a particular problem, and they can see things and chunk things in a way that other people can't, and make better decisions based on that. And that's only relevant if the environment hasn't changed from where they've honed their expertise or acquired those sorts of mental models, if you will, of how the world works and the variables that interact. And I think one of the interesting
Starting point is 00:21:39 things that my sort of study of Buffett and Munger has picked up on is they've deployed this, and they've made a lot of money in the process. But one of the things that they've done is they've stayed away from a lot of companies that are highly variable, toward ones that are more predictable. And I think one of the reasons they do that is that gives them a better lens. So my knowledge becomes cumulative instead of having to reacquire it all the time. If I'm trying to understand the technology behind Google, well, that's changing every day. But if I'm trying to understand the technology behind a dry cleaner or Burlington Northern Railway, it's changing a lot slower. So my knowledge as I'm learning becomes additive and cumulative. And so I think
Starting point is 00:22:20 in those cases, your schema, your mental schema, is more likely to be correct. Right. So what do you do differently in your personal life or in your professional life as a result of all the study you've done about decision making? Well, one thing that I do that I don't think a lot of people do is I rarely make a decision on the spot. I rarely feel the need to sit down and decide something to demonstrate to other people that I'm in control or that I'm a decision maker. I'll often take 20 minutes or 30 minutes and go for a walk and actually just try to think through the problem and think around it. And the way that I conceptualize this in my mind is, you have a problem or situation and you just want to walk around it from a three-dimensional point of view. What does that problem look like to you?
Starting point is 00:23:08 What does it look like through different lenses of the world? And what does it look like to other people, and how is it likely to impact them? Can you think of an example of a decision where you would apply this? This is one thing Danny Kahneman said: if he's better at anything now, it's that he's more alert to the situations where more care is needed, where he's more likely to make an error, and perhaps he can take a little more time. Where have you applied this? We were talking about allowing foreign companies into your infrastructure. That would be an example of where you can think through the problem from different lenses, right? The immediate sort of response is,
Starting point is 00:23:48 oh, it's cheaper, they're good, they're friendly with us. And then the longer you work on that problem, the longer you work through it, the more implications you can see. You can also think about it in terms of, one of the ways that I think about this is, how do I want to live my life? A lot of life is sort of optimized for financial maximization, but I don't agree with that, right? I think that it's actually good to have a lot of
Starting point is 00:24:17 margin of safety in terms of your financial position because things can change. Interest rates aren't always going to be where they are. I mean, I don't know, but historically, if you wanna look out into the future, we could have a situation where we have 10 or 20% interest rates again. I don't wanna go back to zero. So when I'm making decisions on finances,
Starting point is 00:24:37 it's not necessarily just optimizing the short term, it's optimizing over a wide variety of outcomes. And I think when you start to take time to think about decisions, you don't necessarily need to have more cognitive horsepower than other people to make better decisions. You just have to think through a wider variety of situations and circumstances. It's almost like you're doing a Monte Carlo simulation in your head, where you're just thinking about what are the extent of the possible outcomes? Where am I likely to end up on a probabilistic basis? And are there outcomes that are unacceptable to me, in which
Starting point is 00:25:11 case I want to avoid those outcomes and invert the problem. And then if you can avoid all the bad outcomes, you're likely to end up with good outcomes. Yes. Maybe we should just run through some of your mental models, because your blog, for those who haven't seen it, is just an absolute arsenal of short essays on what you and others have called mental models: tools for decision-making of the sort that Danny Kahneman has spoken about, but also just ideas and memes that you think everyone should have in their cognitive toolkit, whether they relate to biology or finance or probability or many other topics. So I've listed a few here that we could touch on. The map is not the territory. The best example of that is online dating, right? So you get a profile of a person, that is the map, and then you meet the person, and they're often two completely different things.
Starting point is 00:26:12 And we use maps all the time, right? We use maps in business, like strategic plans. Balance sheets and income statements are maps of what's happening in the business. They're an abstraction of it, but they don't represent every nuance and detail in the business. And we need maps to operate because our brains can't handle that amount of detail.
Starting point is 00:26:32 We have to have a map. And we can't have a map with perfect fidelity of the thing that it's representing. But territories change. And if the map becomes the goal in and of itself, you lose track of what's actually going on in the territory. So online dating is the quickest way to conceptually recognize this, right? You have a profile; a person is presenting a view of themselves.
Starting point is 00:26:55 It could be a tailored view. It's definitely a curated view. And then you go meet them and you talk with them, and they're nothing like their profile, or their interests don't line up with their profile. So you based your decision to meet them on a map. And then when you met them, you're dealing with the territory, and it's a different proposition. And I think that we just need to be aware of when we're dealing with a map. And if you're running a business or a team, you want to be touching the territory, right? You want to have a feel for what's going on. Are things changing? How is the morale of the team? And Enron would
Starting point is 00:27:27 be another example of sort of a map-territory problem before they went bankrupt. Everybody was reading the map, and the map was saying... Well, they were lying about the map. The map was lying to you. But it is a map. That's the thing, right? So maps can deceive you, and they can lie to you. And your job, to the extent that you're an investor, is to understand the territory and understand what's going on at a different level. Yeah, I ran into this recently with somebody who was urging me to make a few business projections, a projection being a map of the future, you know, like growth targets with respect to a business. And maybe there's some context where this makes sense for people to do, but it just seems so obviously just made up. Right. And I was just thinking of, what are the consequences of making this up?
Starting point is 00:28:18 So you posit whatever it is, you know, 20 percent growth over some period of time. And that is being put forward as some criterion of success. And yet you don't, in fact, know what's possible, right? So it made no sense to me to be anchored to that number. It made no sense to imagine that we should be happy with that number or depressed not to have reached it, because it's just plucked out of thin air. If you could have 10x'd something, why would you be happy with 5x? And if 5x-ing something is in fact impossible, why would you be disappointed with 4x, right? So all of this is made up; you're basically creating a psychological experiment for yourself where you're either going to feel good or bad based on this confabulation that you
Starting point is 00:29:05 did some months prior. Maybe there's more to it than I understand, but it just seemed like a crazy use of intelligence. On a one-off basis, projections are, as you said, dangerous, right? You can also start working towards the projection and not do the obvious best thing, because you want to hit your projection. And then on a recurring basis, where you work for an organization or body or entity that is consistently making projections, very few of those organizations go back and calibrate the individuals making those projections. I mean, we used to have people who would make projections in a very rote fashion. They knew which projections would get accepted.
Starting point is 00:29:52 And they also knew that there were no consequences to pulling those projections out of their ass. Right. And so if there are no consequences, and you're not held to account for your projections, you also have no way to calibrate the person making the projections. Is this person more accurate than another person at these projections? And then an interesting question would be, what makes them more accurate than other people? And can we use that information to make better decisions? And it's also, you're aiming at an arbitrary target, right? So if the projection is 20% growth, and that's what's going to satisfy you because you put that target on the wall, my question is, why not just do the best things you should be doing?
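The calibration Parrish says organizations skip can be as simple as scoring each forecaster's past projections against what actually happened. A minimal sketch; the analyst names and the growth numbers are made up for illustration:

```python
def mean_abs_pct_error(projected, actual):
    """Average absolute percentage miss across a forecaster's past projections."""
    misses = [abs(p - a) / abs(a) for p, a in zip(projected, actual)]
    return sum(misses) / len(misses)

# Hypothetical track records: (projected growth rates, realized growth rates).
track_records = {
    "analyst_a": ([0.20, 0.15, 0.10], [0.12, 0.14, 0.08]),
    "analyst_b": ([0.20, 0.15, 0.10], [0.21, 0.13, 0.11]),
}

# Rank forecasters by historical accuracy instead of taking projections at face value.
ranked = sorted(track_records, key=lambda name: mean_abs_pct_error(*track_records[name]))
```

With even this crude record you can at least ask Parrish's question, whether this person is more accurate than that one, rather than letting unaccountable projections pile up.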
Starting point is 00:30:30 In this case, we're talking about a business. Do those best things and see what happens, right? So why aim at an arbitrary target that doesn't take into account the higher-level thinking of just what are the best things you should be doing for this business? We don't make projections on our happiness, right? Yeah, it's not gonna be, like, 15 percent more happiness here. We do it with finances and numbers because it tends to be a little easier, but I think it causes a lot more harm. Okay, another mental model here: first principles thinking. Yeah, I mean, Elon Musk is sort of the recent example of that, but it's breaking things down. And one of the things that we had to do a lot of at the intelligence agency was solve problems that are sort of ungoogleable,
Starting point is 00:31:16 where people haven't really solved them before or dealt with that particular problem. And you get constrained into thinking about things through your particular lens, your discipline, whether you went through computer science or engineering or arts or HR. And we were so fortunate to have a wide variety of people there. But one of the things that got us out of that... the other constraint is what you've done before, right? So you're beholden to improve upon what already exists versus, I wouldn't say reinvent the wheel, but rethink the problem, right? It's like legacy code for the mind, right? You bring all this baggage with you. But if you actually stop and pause on the problem for a second and think about, well, what are the actual physical constraints of the world?
Starting point is 00:32:01 What are the building blocks that I'm dealing with? What are the limitations, like the actual limitations, not just what exists today? Then you can rethink the problem in terms of how you want to solve it, and you at least know what's possible. It might be more expensive. It might be cost-prohibitive, so the organization can't do it. But it gets you out of this incremental improvement state and into seeing the problem more fundamentally. And I think that's where we see a lot of disruption in the world. You know, I think it was Peter Thiel who had the concept of zero to one. And if you think of innovation as having two different types, one being incremental improvement and one being a fundamental change, I think the fundamental
Starting point is 00:32:44 change comes when we think through problems on a first principles basis and take a different approach to them, within the boundaries of what is possible. Whereas with incremental improvement, we look at something and we just move the widget faster. And they're both valuable, and they're both valuable in an organization. I think it's just a lot easier to do the incremental improvement. And so if you think of optics and promotions and how the internal dynamics of an organization work, it becomes a lot less risky to do the incremental improvement than to think about things on a first principles basis and what's possible. Yeah, I guess that's somewhat in tension with another mental model you have here, or at
Starting point is 00:33:23 least it's possibly so: doing no harm. It's often the... well, first, let's explore what that means. What do you mean by doing no harm? On your blog, you call this the via negativa. Yeah. So, harm avoidance. We're prone to wanting to demonstrate value in an organization, right? We're prone to this bias towards action, this bias towards doing something and being seen as doing something. And often when we do that, we have a knee-jerk reaction. We solve the most visible problem that exists. We don't necessarily solve the fundamental problem.
Starting point is 00:34:07 And a great example of this is software. Say you have a problem with a piece of software; hypothetically, you're using an HR software at work, and the problem is that people can't take vacation leave through that software, they have to manage and track it through an Excel spreadsheet. And so you're put in charge of solving this problem. And you go out in the world and look for software that can solve this particular problem, where you can track vacation, and you implement this new software. But you don't realize that the new software has created other problems. You don't realize that you've just exchanged one problem for another, and the problems that you're getting now could be a lot worse than
Starting point is 00:34:50 the ones that you were dealing with. The tension I saw there is that the via negativa model would counsel a kind of conservatism or incrementalism, where rather than tear up the whole approach by the roots and reinvent it, you just want to shave off inefficiencies or find other ways of optimizing what has worked in the past, rather than completely rethink it. Yesterday he successfully launched his Falcon Heavy rocket and landed all the booster stages, right? So this fundamental change of, you know, thinking of rocket launches as something that should be totally reusable, and you've got to figure out how to land these things, land the first stage. On its face, it sounds like a crazy idea. But once you set that goal, based on rethinking the first principles of the whole enterprise, now we've discovered there's a solution. But that requires such a vast use of resources, to rethink something so fundamental in an area that's so expensive already. I mean,
Starting point is 00:35:59 obviously, the goal here is to cut the costs and to make it a bigger industry. But it's easy to see that you could have gone down that path. And for a very long time for Elon, it looked like he was going down this path to a waiting cliff, right? There was no guarantee of success. What an amazing time to be alive. Yeah, it's really nuts. I just want to say that, right? Like, watching rockets launch and re-land and then redeploy is... Well, that footage... There are a few things which, every time you see them, you don't really habituate to how weird and futuristic
Starting point is 00:36:35 they seem. I mean, this is footage that I'm sure at some point we'll become jaded enough about to say, well, of course, that's the way it's supposed to work. But watching those boosters land perfectly in unison, it just looks like a science fiction movie from the 80s that, you know, was just preposterous. And you sort of alluded to why that happened, right? When he's being interviewed, I remember him talking about it in the sense of: I just thought about what was possible. And I thought it was physically possible to reuse rockets. And so he thought about the problem in a different way. And he has a great ability to attract not only capital, but people, to work on those problems. And the result can be amazing. But it's also important to note that not all of those results are amazing. I mean, we see the SpaceXs of the world, and we probably don't see the hundreds or thousands
Starting point is 00:37:31 of companies that rethink the problem as well and fail. But I mean, that's how we make incremental progress as a society. But that is, I guess, probably another mental model you have written about. There's a survivorship bias: we're constantly being advertised the evidence of only those success stories, and we're not given any true indication of the ocean of failures behind many of them. Maybe we should talk about that. I mean, I guess this also connects to another model, which is just understanding base rates. I mean, just how many new businesses actually succeed, for instance. This is not something that you necessarily take into account when you calculate the probability that any new venture is going to work out for you.
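The survivorship-bias point can be made concrete with a toy simulation: if you average returns only over the ventures that survived, you badly overstate how ventures do on average. The distribution below is entirely made up for illustration:

```python
import random

# Toy model: most ventures fail (return 0), one in five succeeds (return 5).
random.seed(0)  # fixed seed so the run is reproducible
ventures = [random.choice([0, 0, 0, 0, 5]) for _ in range(10_000)]

overall = sum(ventures) / len(ventures)          # average over everyone
survivors = [v for v in ventures if v > 0]       # the ones you hear about
survivor_avg = sum(survivors) / len(survivors)   # exactly 5 in this toy model

print(f"average over all ventures:   {overall:.2f}")   # close to 1.0
print(f"average over survivors only: {survivor_avg:.2f}")
```

The survivors' average looks five times better than the true average, for no reason other than that the failures dropped out of the sample before anyone looked.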
Starting point is 00:38:19 I mean, our view is based on ego, right? So we think, you know, the restaurant we're opening or the podcast we're launching or the app we're building or the new business that we're endeavoring to undertake is going to be successful because we're involved in it. But everybody has that view, and the success rates are, you know, abysmal, especially after a five-year period. Same as marriage, right? If you ask people whether their marriage is going to be successful, when they're on day one and embarking on it, they're of course going to say, we're not going to fall victim to this 50%-of-marriages-dissolve sort of base rate.
Starting point is 00:38:54 But you don't have that guarantee. You need to factor in that outside view in terms of making decisions. And you don't need to do it all the time. Maybe it's best not to do it in matters of love, right? And maybe it's best to make a more emotional decision there, I think. Well, in that having a positive bias or an optimism bias could actually be a self-fulfilling prophecy to some degree in many endeavors. I mean, the positive attitude has to count for something in various contexts. I agree.
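The "factor in the outside view" advice can be sketched as a simple blend of your inside-view estimate with the base rate. The linear weighting and all the numbers below are illustrative assumptions, not a method from the conversation:

```python
def outside_view_estimate(inside_view, base_rate, evidence_weight):
    """Blend a case-specific estimate with the base rate.

    evidence_weight in [0, 1]: how much your case-specific evidence
    should override the base rate (0 = pure outside view, 1 = pure
    inside view).
    """
    if not 0.0 <= evidence_weight <= 1.0:
        raise ValueError("evidence_weight must be between 0 and 1")
    return evidence_weight * inside_view + (1 - evidence_weight) * base_rate

# A founder who feels 90% sure of success, facing a ~20% five-year
# survival base rate, with only modest case-specific evidence:
print(outside_view_estimate(inside_view=0.9, base_rate=0.2, evidence_weight=0.25))
```

The hard part in practice is choosing the weight, which is a judgment call about how much your situation really differs from the reference class.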
Starting point is 00:39:26 I think this desire to be purely rational all of the time, in every decision that we make, might actually be a disservice, because it would lead people like Elon to think: why would I try to reuse a rocket? It's never been done before. And it would dissuade us from doing that. We need some sort of emotional component to our decision-making. It's just a matter of determining when it's serving us and when it's hurting us. And I think that would be the more accurate view of
Starting point is 00:39:54 how you think about that. So, thought experiments. How do you think about thought experiments? The phrase now for me is fairly charged, because I am the victim of having used thought experiments on controversial topics that did not get received as though they were thought experiments. Oh, which ones? This is something that I got from being a student of philosophy, where, just to look for any kind of ground truth, especially morally, you want to think of the corner cases. You want to think of conditions where you've simplified a real-world scenario so that you can discover whether or not you actually have an argument against or for the thing you think should be clear-cut. So probably the clearest case for me is thinking about the ethics of torture.
Starting point is 00:40:47 There's a fascinating and consequential argument to be had about whether torture is ever ethical. And it's by no means straightforward when you line it up against the other things we accept without blinking our eyes, which on paper seem worse than torture as you line them up. And the example I used was collateral damage. But in order to have that conversation, you talk about ticking bomb scenarios, right,
Starting point is 00:41:17 which in the real world don't happen very often, and in the purest cases, don't happen at all. But the issue is, if you actually want to get down to bedrock, if you want to understand whether you can make an ethical argument against the use of torture in all cases, you need the clearest case. You need to say, okay, let's take out all the variables. Let's take out the uncertainty, for instance, of a person's guilt, right? So we know the person we have is guilty. We caught him red-handed. He even claims to be guilty, right? And we caught him with his computer, and we can see, you know, the kinds of nefarious things he's been planning.
Starting point is 00:41:55 And, you know, we see the plans for the nuclear device that he claims is hidden in the middle of a city, right? And he won't give us the information. So you need the purified case, not because that's the likely case, but just to figure out if we actually have an argument against the use of torture in all cases, because that would be immensely clarifying. Because if we solve that, then we know, okay, we're never tempted to make an exception to this rule, right? Because we've thought it through in the clearest case, where we know the person is guilty, we know they've got a nuclear bomb in the middle of a city, we know we have a shortage of time, and there are no other methods we can use to get the intelligence.
Starting point is 00:42:34 You distill it down to the case where even good people would be the most tempted to resort to torture, and then see if you have an argument against it. But what happens when you have conversations like that is that people, rather than receive them in the spirit of ethical inquiry, for the purpose of charting a course in the future, put a journalistic or political lens on it from the start, right? And so, I mean, an even clearer case, and this is a case I haven't actually used, but this is the kind of thing that one would routinely do in a philosophy seminar: you say, okay, well, why can't we eat babies, right? Like, you know, there are unwanted children in the world, they're full of protein, what's wrong with eating babies? Now, it's not that the person who's raising that example
Starting point is 00:43:26 has an interest in eating babies. It's just a laser focus on moral bedrock, to go that far to the edge case. And it's instructive that some people will find it difficult to even argue that case, right? I mean, some people will feel like they need to resort to a holy book revealed by an invisible God in order to get some bedrock where you can stand so as to not eat babies. And so it is an engine of interesting and morally rich conversation. Now, obviously not all thought experiments deal with ethically fraught territory. But I do find that the concept of a thought experiment has been stigmatized, because it is synonymous with, or thought to be synonymous with, not making contact with the real world. You're basically creating the straw
Starting point is 00:44:18 man case that you're then going to use to guide you in the future, with predictably bad results. A couple of comments, just as you were talking there. One of the things that I found myself thinking as you were talking is: how do we find out what we think on an issue? How do we find out where we land on a particular issue? We're expected to have these fully formed, fully thought-out opinions. And I would argue
Starting point is 00:44:53 it's sort of increasingly difficult to have conversations about these things. And that in itself is a problem. Can you imagine the outrage that would ensue from having this debate on Twitter, or just trying to figure out where you land? You put this out there, and the feedback... the media would be all over you. People would be jumping on you. I don't have to imagine it. This is my life on Twitter. This is why I'm tempted to delete my Twitter account on a monthly basis. Aren't we better off having this safe space, almost like a sandbox, where we can
Starting point is 00:45:31 play with ideas, where we can explore things, where they don't have to infect us? We don't have to believe it. For me, this podcast has become that sandbox. I have taken great pains to insulate it against the normal commercial pressures, as you know; maybe we'll talk about that at some point. But another example occurs to me that a guest brought up, who I believe you've also had on your podcast: Will MacAskill, the ethicist, who's just fantastic. He was talking about the ethics of, you know, running into a burning building to save a child. You could do that. But if you run into that burning building, and on your way to the child's bedroom you discover that
Starting point is 00:46:14 there's a Picasso on the wall, you could also save that, liquidate it, and use the $75 million or whatever you get from that sale to save many more children than one. And if there were really a zero-sum contest between the money and the child, at minimum that's an interesting, apparent ethical dilemma to sort through. Now, it seems we have a very strong intuition that you would be a psychopath to grab a painting rather than a child from a burning house. But of course, the choice is never really presented to us in that form. But there are many analogous choices. Just look at the decision for a news organization to spend 24 hours covering a story about a single suffering person, as opposed to a genocide that is raging in some distant country. It's just the way we marshal our resources, you know, the single
Starting point is 00:47:12 compelling case that causes the massive judgment, as opposed to the statistics of vast human suffering that don't move the needle at all. This is how we can discover and correct for moral bugs that are actually of great consequence. We need a mechanism to have these conversations, and I think it's going away. And as of right now, I mean, the only guaranteed safe space you ever have is just inside your own brain. But in the future, we might even see that go away
Starting point is 00:47:45 as technology increasingly permeates us, and maybe our skin. And then what happens is, like, Minority Report might become real, right? Where somebody cuts you off and you think, I want to kill them, and all of a sudden you're arrested because you had that thought.
Starting point is 00:48:01 And I think we're in a very interesting time for thinking, where that sandbox doesn't exist. Like, you can't go out being Sam Harris and say something... I mean, you can, because you're you, but a lot of people with such a public profile can't come out with a controversial idea, because the backlash on them is going to be so huge. And I think, as a society, we need a way to sort of maybe preface it. Maybe there needs to be a standard way to preface it.

along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
Starting point is 00:48:50 and relies entirely on listener support, and you can subscribe now at SamHarris.org.
