The Knowledge Project with Shane Parrish - TKP Insights: Making (Even) Better Decisions
Episode Date: October 25, 2022. This episode is packed with wisdom from previous guests on the art of making better decisions. We discuss the three types of decision-makers, how to control your emotions when making decisions, why... it’s crucial to look at every decision differently, the processes for coming to the right decision, and how to learn from your mistakes when you get it wrong. The guests on this episode are author Venkatesh Rao (Episode 7), psychologist, author and professional poker player Maria Konnikova (Episode 89), Stripe co-founder Patrick Collison (Episode 32), cognitive-behavioral decision science author and professional poker player Annie Duke (Episode 37), and Shopify co-founder Tobi Lutke (Episode 41). Transcript: https://fs.blog/knowledge-project-podcast-transcripts/tkp-insights-decision-making/ -- Want even more? Members get early access, hand-edited transcripts, member-only episodes, and so much more. Learn more here: https://fs.blog/membership/ Every Sunday, our Brain Food newsletter shares timeless insights and ideas you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/ Follow Shane on Twitter at: https://twitter.com/ShaneAParrish Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Welcome to the Knowledge Project. I'm your host, Shane Parrish.
The goal of this show is to master the best of what other people have already figured out so you can use their insights in your life.
What you're about to hear is a special episode of the Knowledge Project.
Over the last five years, I've been lucky enough to speak with some of the most incredible people in the world,
and when I re-listen to these episodes, they feel like they were recorded yesterday.
Except I sound like even more of a clumsy dork than I am now.
What really stands out is how the insights in these conversations stand the test of time.
They're as applicable today as they were when they were recorded.
So a few times a year, we'll go back to earlier episodes, some of which you may have missed, some of which you may have forgotten, and we'll pull out some gems on a single
theme weaving together multiple guests. This episode is on making better decisions. As you can
imagine, this topic comes up quite often on the show, so it wasn't easy to pick some of our
favorite moments for this episode. This episode includes some timeless insights from Stripe co-founder
and CEO Patrick Collison, psychologist Maria Konnikova, consultant Venkatesh Rao, poker player and author Annie Duke, and Shopify co-founder and CEO Tobi Lütke. Let's start with Venkatesh.
Venkatesh Rao is a consultant and author of Tempo, a book that examines decision-making by
weaving together concepts and principles from math, cognitive psychology, and philosophy.
Venkatesh appeared on episode seven of the show back in 2018, and he starts this portion of our
interview by discussing the three types of decision-makers, as well as an analogy of how world-building
in fantasy books like Lord of the Rings
can demonstrate how mental models work.
It's time to listen and learn.
Right after I wrote the book,
some of the more interesting criticisms that I encountered
actually helped me see what the book didn't cover well
and exploring that ended up being a very fruitful thing for me.
So I'll just point out a couple of those things. One was the idea that there are a couple of major categories of decision-making
styles, so to speak, and the book really is strongly focused on one of them. And I was
sort of unaware of the sort of structure of the other approaches to decision-making that were
equally big. So the big one that I've concluded in my mind is, there's this approach to decision-making that you could say is based not on reasoning per se, but it's a very conceptual approach where you think in terms of, say, mental models: what frames am I looking through? What metaphors am I using? What is the sort of significance of my decisive actions versus my random actions? What narrative is sort of framing the unfolding of events? So that's an approach that's very natural to me. It's why I think I ended up becoming an engineer. It's also an approach that's very natural to you.
Farnam Street, I think, explores that in a great deal of detail from a variety of
different sources.
So that's, I don't know what to call it, but let's call it sort of conceptual reasoning
as a framework for decision making.
And I would say maybe a third of humanity operates that way.
It's their operating system for life.
But the other two thirds does not operate that way.
And the two categories that I have realized I'm very, very unlike are: one is what I would call ethical reasoners. And ethical reasoners very sincerely and honestly start with a very deep and intuitive sense of right and wrong. And for these people, in my opinion, right and wrong are in the sense of good and evil, not in the sense of true or false. So these people who start
with the framework of good and evil, not only do I not resonate with them, I often really
struggle to understand why they're thinking the way they're thinking. And invariably, when I
disagree with people very strongly about something, it's usually the fact that they're starting
from good and evil premises. And it's not as unsophisticated as we tend to think, that these
are just, you know, like not very sophisticated religious thinkers, for example. That's a subcategory
of people who use ethical reasoning as a framework. But it's much broader and it can get much
more sophisticated. So I think that's a big blind spot in my own thinking that I've only slowly
become aware of and explored a lot more. And the other, which I would say the second third of
the category of people I don't truly get is people whose entire decision making process and
framing is based not on something that goes on in their own heads, but in the sort of
collective consciousness of the group they belong to. So these are what a friend of mine, Greg Rader, phrased as affiliational thinkers, people for whom every decision basically boils
down to, which group do I want to be like? Which group do I want to belong to? And they do that
by saying, all right, let's take an issue like abortion or whether Trump is good for president.
And I'm not going to process that through examination of the issue itself. But which group
can I belong to whose views on the topic are comfortable for me socially? Does that make
sense? It's your social skin. That's your first consideration. So those are my three big buckets
of types of human decision making. And you and I, I think, represent the first kind, which is explored quite a lot in Tempo and your blog. The second is these people who start
with good and evil, who I understand a little bit now, a little better four years down the line.
And the third is affiliational thinkers or tribal thinkers, whatever you want to call them.
These are the people I understand the least because in a way, understanding these people individually is the wrong thing to even attempt.
You have to understand sort of how their groups or tribes think and think of these individuals only in terms of which tribes they choose to join.
So that's the only decision that ever matters in their life, which is which tribe should I join.
Every other decision or thought process they go through is really somewhere in the collective consciousness.
Do you think those tribe decisions are based on the particular decision that you're making or are they based on the tribe at large?
So if I am I gravitating towards a tribe on particular issues or is it that I want to be like that tribe in all issues?
I think it's the latter because if you're talking about being like the tribe on particular issues,
you're being too individualistic.
You're making decisions based on the merits of a particular case.
You might say that on capital punishment, I'm with liberals and I'm against it.
I'm referring to a U.S. context, of course, here.
So you might say on capital punishment, I've thought through the issues and I'm against it,
therefore I'm with the Democrats, but on gun control, I've thought through it,
and I'm with the Republicans on that.
That's too much thinking.
It shows that you're not a tribal thinker.
For a tribal thinker, it would be who do I want to be with as sort of the operating system of my life?
Who do I want to have barbecues with?
Who do I want sort of as my friends in my bowling league?
That sort of issue.
It's not explicit.
They don't sit through and say, all right, these are the 50 activities and habits that define my life.
Therefore, I'm going to pick the optimal tribe to join.
No, it's not like that.
It's more a process of emotional resonance.
And after that, you basically are partisan in a predictable way on all issues.
And to people who are more individualistic and discriminating thinkers, this seems kind of stupid.
It's like, how could you possibly take this huge set of like 50 different issues with very different contexts and considerations
and basically agree with one tribe on all 50 of those issues?
But if you look at just how tribal reasoning works, it is possible, and that's kind of how it does in fact work. And I would say that this is my total, I-know-nothing kind of guess, that
each group is a third, a third, a third. It could be that tribal thinkers are actually much more
numerous than the other two kinds. And do you think that, I mean, between the framing of those,
I mean, it leads us to the obvious conclusion that the first one is better, you know, not putting
it in a good versus evil context. Do you think people approach the good versus evil in terms of
I'm the hero and I'm trying to right this evil, or do you think it's more nuanced than that?
Well, first I would resist the temptation to conclude that the first approach to thinking and
decision making is the best. It's the one most suited to certain personalities, certainly,
and in certain situations, it makes for like much higher probability of survival and success
and thriving, right? Like in, say, the American context, because of whatever the social
operating system for the country at large is, for better or worse, Americans tend to believe
in individualism, the myth of individualism, even though Americans are not super individualistic,
but if you believe in the myth of individualism, the first approach to decision-making
where you kind of maintain this fiction of processing everything on your own and staying away
from tribal pandering and so forth, that tends to work very well. Whereas if you go over
to like strongly traditional Asian cultures,
the reverse might be true where everything is framed
with respect to the context of the social environment
and that might be a much better survival strategy
if you want to actually succeed in that environment.
So I'd say whether or not one is better than the other
is a question of context and what you mean by better.
But to your other question of good and evil types,
I don't know, I've been thinking about it for quite a while now, and at a philosophical and a practical and a reasoning kind of level
and epistemological level of are they actually exploring the truth about the way the world
works, I think they're kind of full of shit on all those fronts.
But there's something hardwired deep in human nature
that seems to work very well with good versus evil reasoning frames.
And I think here's my hypothesis on why that is the case.
Why is it that this is buried so deep in our firmware?
If you think about most species of animals,
their survival concerns all have to do with their material environment,
which is, can I get enough water, can I get food,
can I hunt my prey, right?
Whereas humans, a great deal of their survival depends on other humans.
It's how do I get along with the group?
Does the leader of the troop of monkeys like me or not?
What will happen to me in a tropical environment if I'm kicked out of the monkey troop
versus a temperate environment versus an Arctic environment, right?
So 90% of our consequential survival behaviors as a human social species depends on
things having to do with other people.
And good and evil, if you think about it,
is a very, very good way of simplifying
that whole area of decision-making,
where if you simply decide that a certain group is good
and other groups are evil,
everything else gets massively simplified.
So that's how you get, I think, the abstract,
good and evil approach to thinking.
It's a refined form of tribal affiliational thinking.
So if you want to stack them
in sort of evolutionary primacy order,
I think tribal affiliational is the sort of most ancient of our decision-making frameworks.
The good and evil framework is slightly more recent in evolutionary history because you need
a certain capacity for abstract thought before you can frame good and evil as categories.
And then the kind that you and I try to promote in our writing and thinking is the most recent
of all.
It might be, I don't know, no more than 500 years old.
I was just thinking that they were almost inverse from the way that you had mentioned them, from an evolutionary perspective. And two and three, so the good versus evil and the tribal, tend to blend, right? More so than the other one. It was mental models that first drove me to your book. I mean, one of my friends read it, and they pointed out that you were talking about mental models in your book, and at the time, and I mean to a large extent today, so few people are talking about that. What's your definition of a mental model? Well, I have a sort of
technically inspired definition in the book, as you may recall. So I used something called the
belief, desire, intention model of Michael Bratman, who's a philosopher at Stanford, and it's been
the basis of a lot of artificial intelligence research. So that's one sort of effective way to
get at defining what a mental model is. It's a set of beliefs, desires, and intentions. But I think
that sort of definition is useful for certain narrow technical needs. UI people have
similar sort of technical definitions, politics people have similar definitions like you've mentioned
George Lakoff a couple of times in your writing, I think, and Lakoff has one based on
conceptual metaphor. All these narrower technical definitions of mental models, they're useful
for certain questions that are honestly a little, I don't know, too detailed and deep for a general
mass audience. They're not interesting. So for a mass audience, I would say the best definition of
a mental model is a world in the sense of science fiction or fantasy, right? So you've got, say, a
universe like the Harry Potter universe or the Lord of the Rings universe. And so there's the world
and then there's the story that's told in the world. And look at the way the most popular
science fiction and fantasy is written. You're told the story, but through learning the story,
you also learn about the world. And stories differ in their ability to do that elegantly.
So Lord of the Rings sort of has a poetic elegance to it, in that you don't feel like you're learning about the world,
but by the time you're done with the trilogy, you actually know a lot about it.
Whereas Harry Potter is a little bit more heavy-handed where a lot of it is very clearly world building.
And you kind of get the tedious sense of reading a geography book or being asked to memorize a list of countries.
That's sort of explicitly learning a world in which a story is set.
But if you look at the movie versions, you realize that in a way,
J.K. Rowling is a product of her time where she's not really the author of a book so much as a media property
that she was at least at some level aware would turn into a movie, an online world, a game, and so forth, right?
So it could be that she's just a product of our times and they're two very different works.
But that's basically my idea of what a mental model is.
It's your sort of implicit understanding of what the world is, and it's very easy to see in the case of, like, fictional universes with a few rules that are different from our own. But this same thing is true for much more realistic worlds as well. So, like, take the Law & Order franchise. You've got, I don't know if it is as popular around the world as it is here, but you've got this franchise of TV shows: Criminal Intent, Special Victims Unit, and so forth.
And this sort of gives you a sense
of an entire universe of
police work and
crime and a sense of the world
as a very dangerous place where you've got these
brave defenders protecting you.
And that's a mental model, right? So while you're watching
Law and Order episodes, that mental
model is active in your head and it allows you to make sense of the
stories you're being told very efficiently.
So mental models allow you
to very efficiently make sense of stories.
And if you are not familiar with the mental model,
then the author must build the mental model.
And that's what happens with science fiction and fantasy.
But the interesting thing is,
when you read, say, extremely foreign fiction.
So fiction that's very alien to you in terms of mental models,
the author may assume you understand the mental models,
but you may not.
So, for example, Japanese comic books,
the couple of times I've tried to read them,
they just feel so bizarre to me. There are sort of conventions for indicating, you know, emotions and actions and so forth, and they're just so unintuitive to me that the world, which should be in the background and implicit and which I should just be able to reference like an operating system, sort of becomes a little too visible for me to read the fiction seamlessly.
So it's like I'm trying to run a Windows program on a Mac computer without realizing it.
So I think that sort of is the best way to understand mental models for, I don't know,
people who don't need to deal with them in any technical way.
That's Venkatesh Rao from episode seven, The Three Types of Decision Makers.
Check out the show notes for a link to the full episode.
Next up is Maria Konnikova, a professional poker player, psychologist, and author of three New York Times bestsellers, including The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win.
Maria holds a PhD in psychology from Columbia University
and took a unique route to becoming a professional poker player.
But as you'll hear in this segment,
she mastered the game by closely observing her own emotions
when it came to making high-stakes decisions at the poker table.
And I started playing online first.
So even though I was going to be playing live poker,
that's what I wanted to do, online is an easy way to learn because of the sheer volume of hands. You have a computer dealing hands, so it's
really quick. So whereas in live poker, you know, in an hour you might play, I don't know, 30 hands,
maybe even fewer, depending on the table you're sitting at, you know, one slow player and suddenly
you're playing 10 hands an hour. But online, it's hundreds. And so you just get a lot of
experience quickly because you see all of these situations unfolding before you. I did not enjoy playing
online. It did not make me want to become an online player, but it did allow me to get more
experience quickly. And it also allowed me a really important tool that Eric and I could use because
I could record myself playing. So we actually were able to go through my games and he could explain
what things I was doing wrong, what things I was doing right, what I could change about my
thinking. Because the way that he approached it is he never said this is a mistake. He would
say, what are you thinking here? Why did you do this? And that's how he taught me. It was not a
prescriptive, this is what you do here. It was a, why did you do this? What motivated you? What were you
thinking? That's fascinating because you're getting a lot of iterations and you're getting feedback,
but what you're really getting is reflection. Like by making you walk through your thought
process, you're reflecting on what you were thinking given the information at hand. Exactly.
Exactly. And just you can't underestimate how important that is. One of the things that I suddenly realized after we did this a few times was that I started thinking better because I just started thinking in advance, what am I going to tell Eric? And that made me actually stop and reflect for that extra second. Whereas before I would have just acted because, well, you know, let's do this. And it seems right. Now I'd say, okay, if I say that
to Eric, he's going to say, well, that's not good enough. That's not a decision process. That means
that this is a mistake. Even if you did the right thing, it's still a mistake. So one of the things
that he really taught me is that you have to distinguish the action and the outcome from the thought
process. As long as your thought process is solid, as long as you're thinking through things
correctly, then you did well, even if you ended up coming to the wrong conclusion, because
that means that eventually with better inputs, you'll come to the right conclusion. And even if you
came to the right conclusion, but then the cards went against you, and you were, you had a bad
outcome. In both of those cases, as long as your thought process is solid, then you've won
basically, then that's all you need. And so forcing me to actually think through,
what am I going to tell Eric forced me to start thinking and to start thinking better and to start
reflecting on elements of my thought process that I didn't even realize were there, because I
would just kind of do things. And it really, let me tell you, it transferred out of poker very
quickly. And I found myself doing this in all sorts of situations. But at the time, it was just
helping me become a more thoughtful player and through being more thoughtful, a better player,
not necessarily a player who was winning right away because I didn't know enough. I didn't have
enough experience. All of those input variables weren't there. But the player who was building,
kind of creating the building blocks that would enable me to be a successful player down the line.
I want to double click on two things you said there. First, the thought process. Like, how do we know
we have a good thought process? Especially if you're a novice in it. Do you need a mentor or a coach
or can you, like, ascertain that for yourself? And then how did you apply that to life? Like, outside of that? Well, yeah, I do think you need someone to tell you if you're a complete novice.
I mean, I would have had no way of knowing what I should be paying attention to, what I should
be thinking through, what I should be doing if I didn't have Eric to direct me. I mean,
I could have gotten some level of that from online tutorials, online videos, but it just would
not have been the same. The fact that it was personalized, the fact that it was directed at me,
at how I was playing, at how I was thinking, and that it was kind of this direct feedback.
I think that was crucial in enabling me to improve as rapidly as I did and to actually
appreciate what was going on. So sure, you can get some of those tools elsewhere, but I do
think it's very important to have someone else. And I actually think this is true of everything,
not just poker. I mean, the way that I became a better psychologist is because I had great
mentors who were able to talk me through my thought process along the way. The way I improved as a
writer was obviously doing it myself, and you have to with all of these things; nothing will supplant or completely override the need to just do it and do it and do it and practice
and kind of get better that way. But I also had amazing writing mentors. I had people who really
helped me learn how do I think through pieces. How do I think through writing? How do I think through all of
this, people who taught me to take apart other people's books so that I could figure out how
they did what they were doing and how I could apply that to my own writing. So I think in
everything in my life, that was incredibly important. And I think that, you know, my first book
was about Sherlock Holmes. And I think that one of the really interesting things, people talk about
the Sherlock Watson relationship as, you know, Watson being this foil for Holmes. And that's true.
But Watson also plays an incredibly important role in forcing Holmes to think better, to be actually a better detective, to be better at his job to have a clear thought process because he asks questions.
And Holmes needs to respond to him and needs to bring him through his thought process and tell him how he arrived at a certain conclusion.
And doing that helps Holmes see flaws in his logic that he didn't see before.
And I think he actually becomes a much better detective through their friendship and through
kind of that evolution that happens only because he had to talk things through with Watson.
So I'm, I mean, I'm someone who works by myself. I'm a writer. And in poker, I, you know,
I play by myself. No one else is playing my cards for me. But I do think that forming those
relationships, having someone to talk to, being able to verbalize your thought process, I just think
it's so crucial in becoming a good thinker, no matter what you're doing.
I think that's really insightful.
And poker gives you sort of like the added component of you have something at stake, right?
Didn't Emmanuel Kant say something about the role of betting in his critique of pure reason?
Yes, he did.
He did.
And I was so excited when I found that section of Critique of Pure Reason and was able to apply it to poker.
Kant says that betting actually forms an integral part to improving your decision process and
improving your level of certainty in something because people can say all sorts of stuff
and just make all sorts of pronouncements if they're not held accountable for it.
However, if you're forced to put a monetary value on your opinion on how certain you are
of something you're saying on how certain you are about, you know, if you're a doctor,
how certain you are about your diagnosis, if you have to bet on it, all of a sudden it forces
you to stop for a second and think, okay, am I really this sure? How much do I value it?
And Kant even proposes a thought experiment to start upping the values. You know, would I,
would I say this and stand by it for $10? Yeah, sure. What about for $100? Okay, yeah, that seemed
reasonable, a thousand? Okay, well, maybe I need to look through some of my assumptions. Give
me a second. A million? Okay, okay, right. I'm not, I'm not that sure anymore. I really need to
figure it out. Or, you know, a million, yes, I will bet even a million. I'll bet everything
because that is how certain I am that I'm right. It really forces you to take a step back and
reflect because all of a sudden you have something very, very tangible on the line. And Kant uses
this as an experiment for, you know, how to how to make people reason better, how to make people
better thinkers, how to make people actually stop to consider what they're saying rather than just
spewing stuff and being experts. And when I was reading this and working on rereading Kant
and thinking through all of this, it just made me realize how relevant he is to the modern world and to social media and to the internet, where suddenly, you know, everyone's an expert on everything.
And everyone is so certain about everything. And you have all of these amplified opinions online.
And I just wish all of them would just do that thought exercise and be forced to, you know,
actually forced to go through with it, that we had some mechanism for saying, okay, before you tweet
something out, you know, how much money are you willing to bet on the fact that what you're saying
is actually correct. I think that could lead to some pretty interesting results.
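Kant's escalating-stakes exercise can be sketched in code. This is only an illustration of the idea Konnikova describes, not anything from the episode; the stake levels, the example claim, and the toy betting policy are all assumptions for illustration.

```python
# A minimal sketch of the escalating-stakes exercise described above (via Kant).
# The stake levels, the claim, and the toy betting policy are illustrative
# assumptions, not from the episode.

STAKES = [10, 100, 1_000, 1_000_000]  # dollars you would have to put behind the claim


def escalating_bet(claim: str, would_bet) -> int | None:
    """Walk up the stakes and return the first stake the person refuses.

    `would_bet(claim, stake)` returns True if they would still bet that amount
    on the claim being correct. Refusing at a low stake is a cue to go back
    and re-examine the assumptions behind the claim.
    """
    for stake in STAKES:
        if not would_bet(claim, stake):
            return stake  # the point at which the stated confidence runs out
    return None  # willing to bet even the largest amount listed


# Example: a toy policy that refuses anything above $100.
refused_at = escalating_bet("my diagnosis is correct",
                            lambda claim, stake: stake <= 100)
print(refused_at)  # 1000 -> time to re-check the assumptions
```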
It definitely forces you to think about not only your thought process, but sort of, like, how confident am I that this is so. So that thought process, like, explore this with me a little bit in terms of how that changed outside of poker. Did you find yourself thinking out loud and, like, self-critiquing your thought process? Or did you go to somebody and say, here, I want you to hear my thought process on this and then point it out? So I started
critiquing myself a lot more, and I started being my own fact-checker a lot more than I had
been used to doing in the past. So, you know, I'm someone who...
What does that mean your own fact-checker?
Yeah. So I'm someone who is a fairly opinionated person, and I'm also prone to exaggeration,
especially when I speak. And it's just kind of for rhetorical effect. And I never really,
I've never really thought about it. It's just something that I do. But,
all of a sudden, I actually started saying, okay, well, if this isn't accurate, maybe I
shouldn't say it, and maybe I should verify the accuracy before saying it.
You know, let me start fact-checking myself a little bit.
And if I'm offering an opinion, there was a really, really funny thing that happened when
I was pretty young, maybe 15 years old or so.
My mom and I were taking a walk, and I grew up in the Boston suburbs, and we were in the
city visiting and walking around and we got a little bit lost. And I just so confidently said I knew
where to go and really thought I did and just got us further and further lost. And at some point,
this was obviously before smartphones. This was before everyone had a GPS on them. All of a sudden
we just, we were deep in Chinatown and had no idea where we were or how to get anywhere.
And my mom had actually kind of known where we were before, but I was so confident that she just deferred to me.
And I've done things like that a few times, not a few,
a number of times in my life in the past where I think I know something
and so I just confidently do it.
And poker and thinking about things in this more metacognitive way
where you have to think about your thinking and not just think,
oh, I think I know what this is and I'm pretty confident in it.
It actually, I think, made examples like that much less likely to occur. These days, I'm actually much less confident in any of my
opinions and in any, even my opinion of which way we should turn in order to get back to where
we were trying to go, you know, something as simple as that. I've learned to really tone it down
and to really question myself every single time and say, okay, wait, why am I sure? Why am I so
confident in this? Is the data reliable? Is it something that I can trust or not? You know,
where is that confidence coming from? Because one of the things I learned in psychology is that, you know,
we have intuitions all the time and we are actually horrible, horrible, horrible at being able to
tell the correct intuitions from the wrong intuitions. We're about 50-50. And that's a really bad
track record. And sometimes our intuitions are spot on and sometimes they're completely
wrong, but we can't tell the difference. And poker really, I think, forced me to go deeper into that
and to figure out why can't we tell the difference and how can you become more confident in your
intuitions? Well, you have to go back and say, do I have a basis for this? Do I have the experience in
this? Do I have, you know, is there a reason my intuition should be right? Because what correct
intuition is, is all of this experience that we've accumulated in something that we don't necessarily
have conscious access to the process of acquiring. And so your intuition should only be trusted
if you're an expert in this area, if there's a reason why you should be confident here. And
learning to kind of disentangle that and learning to spot false confidence for what it is, I think,
is a crucial skill just in absolutely anything.
How do, like, if you factor in a continuum between rational and emotional, how does that
factor into your thought process and how you're thinking about something?
I think that we are all necessarily emotional if we're human.
I mean, even Sherlock Holmes was someone who experienced emotions, even though people think
of him as this kind of cold, almost computer-like person. What he did, and this is, it's interesting,
there are actually a lot of ways that writing about Sherlock Holmes helped me become a better
poker player because it's a lot of things, a lot of things overlap there. And this rational,
emotional thing is one of them. So what Sherlock Holmes does is he acknowledges his emotions.
He recognizes that they exist and then he dismisses them if they're irrelevant to the decisions.
So he says, you know, sure, you know, I feel sympathy for this woman.
Sure, I see that she's, you know, a really good person.
But now I'm going to dismiss all of that because it's irrelevant to the decision.
So I feel the emotion.
I experience it.
And then I put it away and do not use it when I'm coming to a decision because it's not relevant.
In poker, there's this idea called tilt, which Eric introduced me to, and which, it's just, it became one of my favorite words in the world. I use it all the time now.
Another way that poker has filtered its way into my life is that now I say that I'm tilted
and things put me on tilt and that something is tilting all the time without even thinking about
it because it's such a, such a convenient word. What tilt means is that you've let emotion
into your thought process, irrelevant emotion into your thought process. Usually it's negative.
So, and by negative, I mean a negative emotion.
So for instance, you lose a really big pot.
So you lose a lot of money and you get angry or you get, you know, really frustrated.
And so that affects how you're playing because you start making decisions, not because
that's the right decision to make, but because you want to get those chips back or you
want to punish the person who took them from you because you think that he's being a jerk
or whatever it is.
But it can also be positive.
So sometimes there are positive emotions that also are irrelevant to your decision process.
So in poker, the example would be you've won a lot of money.
You've won a number of pots, and all of a sudden you think you're invincible.
And so you have kind of this very, you're feeling great, you're feeling so confident,
and then you start maybe taking risks you shouldn't take and making decisions in a way
you shouldn't be making decisions because you're letting that emotion seep into your decision
process. So when you say something is tilting, it means it's something that's getting under
your skin, making you kind of emotional. When you say that you're on tilt, it means, oh, you're
emotional. You're not thinking as rationally. You're not thinking as clearly anymore. What the goal should
be, and this actually also relates directly to what I did my PhD on, because Walter Mischel, what
he studied with the marshmallow kids was self-control and hot and cool decision-making. How do we, in the face of a marshmallow, which we can use just as a symbol for anything that is
enticing, anything that we want, kind of this craving, this hot condition. In the face of that,
how do we resist? How do we cool it down? How do we let rational thought processes prevail when we're
four years old and we just want to eat the marshmallow? And all of his work was basically centered on how do you teach self-control? How do you teach people to cool down hot processes? How do you go off tilt? How do you actually learn to
cool down your emotions so that you can make the correct decision? In the case of the
marshmallow, don't eat the thing, wait for your second marshmallow. Or in the case of poker,
you know, maybe I should fold this hand or whatever it is. You can make the analogy to basically
any decision that you want to make where you're being emotional. So all of these things kind of
came together and helped me understand that you can't ever be purely rational because
you're always going to experience emotion. And yet, what you can do is learn, this goes back to
the being kind of more self-reflective and going through your thought process, learning to
be more mindful about the emotions you experience, to say, okay, I am experiencing this emotion
right now. Why? What made me experience it? Because sometimes emotions are actually incredibly
relevant to a decision. It's the irrelevant emotions that you need to get rid of. There's
psychological work that shows that people who can't experience certain emotions like fear, because they have neural damage, actually make really bad gambling decisions, and they go broke because they just don't care. They have no risk aversion whatsoever. Well, that's also not good. They're
completely not emotional, but in that particular case, the fear was important. It was integral to
the decision, and they no longer had that emotional feedback. So what you need to learn is to identify
the emotion, figure out that you're experiencing it, because a lot of times you don't even
realize, you know, you don't realize you're angry, you don't realize you're frustrated, you don't
realize you're hungry, you don't realize that all these things are going on. So first you identify
it, then you try to identify the root cause. Is this something that's totally incidental or is this something that's actually integral? More often than not,
it's incidental. And then you say, okay, now I need to dismiss it, figure out, how does this
normally affect my decision process? And how do I correct for that? How do I actually take it out
of the decision process? And the funny thing is, just the very process of identifying and kind of
going through that thought process tends to cool you down because all of a sudden you've kind
of distanced yourself from the immediacy of the event, all of a sudden you're actually
thinking about it in a different way. And so by the time you even get there, you're already
calmer. You're already more rational. So it helps in more ways than one.
That's Maria Konnikova from episode 89, Less Certainty, More Inquiry. Check out the show
notes for a link to the full episode. Next up, we hear from Patrick Collison, co-founder and
CEO of the leading online payment processing company Stripe.
If you've purchased anything online over the past few years, there's a good chance that Stripe facilitated that transaction.
We'll start this portion of our interview with Patrick's thoughts on the biggest differences in his decision-making practices from five years ago up until now.
I think there are four big differences.
The first is, and I just place more value on decision speed, in that if you can make twice as many decisions,
at half the precision, that's actually often better.
And then given the fact that sort of the rate of improvement of decision making with additional
time almost necessarily tends to kind of flatten out, I think that most people, certainly
the Patrick of five years ago, and potentially even the Patrick of today included, should be sort
of earlier, should be operating earlier in that curve, make more
decisions with less confidence, but in significantly less time, right?
And just recognize that in most cases, you can course correct and treat fast decisions
as a kind of asset and capability in their own right.
And it's quite striking to me how some of the organizations that I hold in the highest regard
tend to do this.
The second thing is not treating all decisions kind of uniformly.
I think the most obvious kind of axes to break them down on are degree of reversibility
and magnitude.
And things with low reversibility and great impact and magnitude, those ones you do want
to really deliberate over and try to get right.
But I think it's very easy sort of absent care to have maybe this mechanism you put in place
for those decisions to seep into decision-making for the other categories, and really in the
other three quadrants, you can afford to be sort of much more flexible and much more fluid
and, again, really just to prioritize speed, because obviously if it's very reversible, then
by definition, you can always correct it later. And if it's, you know, of low import, then
who cares, right? And so that's kind of the second one. And just being kind of cognizant of that
and before making the decision, trying to categorize, well, what kind of decision is it?
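Collison's two axes, reversibility and magnitude, lend themselves to a small sketch. This is only an illustration of the triage he describes, not Stripe's actual process; the labels, thresholds, and example decisions are assumptions for illustration.

```python
# A minimal sketch of triaging decisions on the two axes mentioned above:
# reversibility and magnitude. The labels and example decisions are
# illustrative assumptions, not Stripe's actual process.

from dataclasses import dataclass


@dataclass
class Decision:
    name: str
    reversible: bool   # can we cheaply undo or course-correct later?
    high_impact: bool  # is the magnitude of the consequences large?


def triage(d: Decision) -> str:
    # Only the irreversible, high-impact quadrant earns slow deliberation;
    # the other three quadrants default to speed and later course correction.
    if not d.reversible and d.high_impact:
        return "deliberate: slow down, gather input, try hard to get it right"
    return "move fast: decide quickly and course-correct if needed"


print(triage(Decision("rename an internal dashboard", reversible=True, high_impact=False)))
print(triage(Decision("choose the core data model", reversible=False, high_impact=True)))
```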
The third thing is I now try to fairly deliberately just make fewer decisions, in that, why am I making the decision?
And for some kinds of decisions, there are some good reasons for that.
I mean, there are some decisions the CEO ought to make and is kind of fundamentally on the hook for,
but there are some decisions where if I'm making it or if I have to make it, that probably suggests that something else
organizationally or institutionally has broken.
And I think the need for a decision from anyone, not just from me, is often like only a sort of an epiphenomenon.
And there's really some other underlying issue that's causing you to have to make it in the first place.
And so thinking about that and concretely doing more to push others to make decisions and sort of pushing them back sort of to people who ought to be the domain experts.
And then fourth, when I realize that I would make a decision differently to how someone else is making it, not even really discussing the decision itself, but trying to dig into what is the difference in our models such that you want to make decision A and I want to make decision B.
And one thing we're currently spending a bunch of time on here at Stripe is having different parts of the organization write down what they're optimizing for, essentially: what their mission is, what the long-term key metrics are for kind of their part of the organization, who their customers are, either internally or externally.
And so things of this kind of persistent ongoing underlying nature such that hopefully
once there's agreement on those longer-term things, then maybe a difference on any particular
decision might just be, well, we differ on what the most instrumentally effective way to achieve
this outcome is, but we're both really unified on what the desired end state is. And there, I think,
disagreement over sort of instrumental efficacy, you know, well, that's not really that problematic a disagreement because, well, if you're right, then we'll soon learn that. If you're wrong,
reality will probably sort of make that pretty clear in short order. I think the more troubling
ones and the ones that tend to cause more kind of persistent friction in an organization are where
sort of there is latent disagreement on what you're actually optimizing for, but that's kind of
never explicitly surfaced and uncovered.
And so now, I guess, again, in decision-making, I place kind of more importance on making
sure that we have the right sort of foundational agreement, such that the kinds of disagreement that tend to arise are of the essentially more superficial sort, where agreement is actually less important.
Part of culture is learning from the decisions the organization makes.
What do you do at Stripe to make sure that people are learning, and what do you do personally to make sure that you're learning from the decisions that you've made, both positive and perhaps ones that you, in retrospect, would have wished you could make differently?
I'm inclined to say, I don't know if I actually believe this,
but I'm inclined to say in response to that question that decision-making and organizations
is slightly overrated in that organizations are not like investment entities or funds or managers,
in that organizations, well, with investing, it's fundamentally very binary.
There is a moment at which you either buy or don't or sell or don't or whatever.
And maybe it's somewhat more continuous in the case of say public market investing and so on,
but given sort of constraints on just decision-making time, I think you have to treat it as a bit more binary.
You assess this stock and you make a buy or not decision.
Whereas in organizations, everything is much more fluid and continuous.
It's much more about, I think, designing the feedback mechanisms.
Or biological.
Yeah, exactly.
And there's the famous sort of water model of the economy, you know, with the sort of circulating
fluids and you can vary the interest rate or the inflation rate or whatever, but just kind
of try to get a sense for the overall kind of biological apparatus.
And I think an organization is much more like that.
And so I think the things to optimize are the incentive structures and the mindsets
and the definitions of the goals and the feedback mechanisms from the outcomes to the inputs
and the work and the operations themselves and all of those things and less the binary decisions.
And I haven't kind of completely dismissed, obviously, the importance of decision making, in that there are times where you decide, well, are we going to launch this product or not, are we going to start this project or not, or are we going to replace this system or not, and so on.
So there are, of course, real decisions, but I think it tends to be much more, well, I guess maybe
it doesn't feel like the right unit of analysis to me.
I think the right unit of analysis is that of the cell.
And the question is, well, in an organization, what are the cells and what are the organs
and how do they interact?
What are the feedback mechanisms between them?
Let's geek out a little bit on the feedback mechanisms here.
What sort of feedback mechanisms do you try to make sure are in place?
What point in the process do you try to acknowledge what they are?
I really think that, and this is not to evade the question,
but I really think it's too early to answer that, in the sense that, I mean, I can kind of tell you what I think today and the sort of changes we've made over the last year
and things like that.
But, like, Stripe has been a thousand-person organization for,
or has been a more than 500-person organization for just over a year, right?
We're beginners at this.
And, you know, three years ago, Stripe was under 100 people.
And I think either to opine as if, or to even more problematically believe, that we kind of have it figured out would be real hubris. And so in what we've been talking about, I think that's maybe some of where our and my thinking comes from. But I don't know what the right answers are yet.
And we spend a lot of our time sort of scrutinizing other organizations trying to find
out and kind of reverse engineer what works for them and why.
And I think that part of what's interesting about the tech industry is that it's a kind of pure knowledge work that we're still, I think, quite early in sort of figuring out in terms
of how to optimally coordinate it and collaborate on it, in that you can sort of draw a lineage
of HP and Intel and Microsoft and Google and Facebook and so on, WhatsApp.
And there are all these sort of suggestive examples that I think at least, again, suggest
that we may not have it all figured out.
I mean, the fact that WhatsApp was such a minuscule team and Instagram too, of course, despite
operating at such a scale, or the fact that the way of a new paradigm.
Yeah, yeah.
And the way kind of Facebook operates is very different to the way, you know, HP operated.
That's Patrick Collison from episode 32, Earning Your Stripes.
Check out the show notes for a link to the entire episode.
Next up, we'll revisit our conversation with Annie Duke, who not only killed it in poker,
but has since become a celebrated author
focused on cognitive behavioral decision science
and decision education.
She's the author of Thinking in Bets, How to Decide, and Quit.
In this portion of our interview,
she's discussing best practices
for decision making in business
and your everyday life.
I think there's two pieces to it.
Piece number one is that you have to really get people
to feel like they shouldn't be afraid of the outcome
on any given try, because on a single try, we don't have enough data to know whether it's any good.
You flipped one coin, right?
Like one time.
What does that tell you?
You don't have 10,000 coin flips.
So you have to get people to feel like it's okay if the one try doesn't work out.
And I think that that's as a leader, you just have to culturally, you really have to be good at
communicating that.
And then you have to live it.
You have to act it.
So how do you do that?
Well, work the decision process through with the group and actually memorialize the tree that you create. So how do you do that? You have decisions under consideration
and what you want to do is try to figure out as best as you can, what are the scenarios
that you think are going to result from decision A versus decision B versus decision C,
and then try to assign probabilities to those scenarios. And don't be afraid of it. Most people are afraid to assign probabilities because they think there's a right answer. And it's true, there is some objectively right answer, and it's likely that the guess that you give is not going to be, if we had some mirror into the objective reality, you know, it's probably not going to be perfect. When I said that I was, whatever, 73 percent to get my food back correctly, I don't know if I was exactly 73 percent. But the fact that I was making a try at it is better than not trying at all, because if you don't try at all, you're either saying that, well, you're certain it's going to turn out in a certain way. Or you're saying something like, well, it either will or it won't. I mean, the fact is
that it is probabilistic and you're probably more of an expert than anybody else because you have
your own experience in making these kinds of choices. You can certainly go and look at, you know,
there's all sorts of business cases and there's data that you can go look at to try to refine
that guess. And the desire to make the guess makes you very information hungry. So it actually
makes you more open-minded because now your goal is not like I have to have the right answer.
It's I want to try to get as close as I possibly can to what the objective probability is.
And I can only do that by getting lots of information or listening to people who have different
points of view so that I can start asking myself, like, why might this be wrong?
But why might I be off on this?
And going back and just constantly trying to refine and get that feedback.
So it causes you to be really information hungry.
It causes you to view other people's perspectives as incredibly helpful.
Again, it's that sort of identity shift around it.
And so, if you can do that, if you can sort of get, okay, well, here's what I think the outcomes of this decision might be, and here are the probabilities that I think those might occur with.
And then in that process, make sure that you're really wrapping in the dissenting voices.
And this is very important.
And you can do that in a variety of ways.
One is that you can do, you know, you can create red team, blue team.
So you just create a team of people who are supposed to argue against, you know, what the
prevailing opinion is. If you have people in the room who really disagree on what the probability
of a particular outcome is, which will happen, force them to change sides. Say we disagree: you're at 30 percent and I'm at 70 percent. Wow, we're way off. First of all, cool. We're probably
going to learn something through this process. That's actually much more exciting than us agreeing.
If we all agree. Right. Now, this is really exciting because you and I are pretty well informed.
We're on the same team. We're in the same company. We're kind of working with the same information.
and yet we really disagree about what the probability is, that's exciting and that's good for us.
Somebody has an opportunity to update their view. We might just not know who that person is
at this point. Or we might all update our view because we might all come now to something
that's more in between. Yeah. Right. But now make us switch sides. You now have to argue
my side that I'm 70 percent and I have to argue your side that you're 30 percent and the people
in the room are going to judge who wins the debate. In other words, not who's right, but who's the better
arguer of the other side. Notice that makes me construct the best version of your argument.
My goal is to be able to argue it better than you could because I want to win.
Right.
I want the people in the room to judge me.
And what's really wonderful then is that now I'm going to start thinking about, well, why are we
so far apart?
Why does he think it's 30 percent?
I'm likely to probably moderate my view.
You're likely to probably moderate your view.
And everybody in the room is now going to learn because they're exposed to what our process is.
And that's wonderful. And now, let's say most of the room agreed with me and you were this outlier at 30 percent, you get a chance to have your voice heard and your voice is now represented in that scenario plan that we've now created. So that's a great way to do it. Another way you can do it in order to really make sure that you're getting those, you know, why might it go wrong and those dissenting voices is to do what's called a premortem, which is examining the patient before it's dead instead of after. So imagine.
Okay, we have this goal as a company. We want to increase sales by X amount. Let's imagine,
and that's say a goal that we want to have, it's a two-year goal. Let's imagine at the end of two
years that there's a newspaper that says we failed. We did not reach our goal. Let's all,
everybody in the room now write down five reasons why we failed. And now what happens is that
very often in these rooms, you end up with this team player problem. Everybody wants to be
seen as a team player. Everybody wants to be seen as agreeable. Everybody wants to be seen as sort of like, we're winning the game because we're all so smart, and like cheerleaders. When you do a premortem, now you've changed what it means to be a team player, right?
Like the people who are going to get pats on the back are the ones who come up with the most
creative reasons for failure.
Right.
And that's going to allow you to now anticipate sort of where things might go wrong.
It's going to allow you to wrap that into the scenarios that you think might occur.
It's going to allow you to adjust the probabilities of success or failure.
Okay.
So you go through those kinds of exercises.
You create now, okay, we're going to go with Decision A. Here are the scenarios we think might
occur. Here are stabs at the probabilities. And you memorialize it. So now when the thing happens that didn't work out, it's right up there on the whiteboard, you took a picture of it, and it's got that percentage by it. You know, okay, well, we thought 33% of the time this was going to happen and it was going to be bad. It makes it very hard to result, because how do you go back and say our process was really bad? It's like, well, here it was. Right. We kind of knew it was
going to happen. So I think that that really allows you to now focus much more on what's the
process for getting us to understand what the decision is. And obviously, as the future unfolds
as it does, you can go back in and examine and say, you know, did we miss something in that
process? Or was it just that it was the 27 percent? You know, were our probabilities right? Like, you can ask that, but you've done it all together. Right. I like that a lot.
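Duke's idea of memorializing the scenario tree can also be sketched. This is only an illustration of the exercise she describes, not something from the episode; the decision, the scenario labels, and the probabilities are assumptions for illustration (the 33 and 27 percent simply echo the figures she mentions).

```python
# A minimal sketch of memorializing a scenario plan at decision time so the
# decision can later be judged by its process, not by a single outcome.
# The decision, scenarios, and probabilities are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ScenarioPlan:
    decision: str
    # scenario description -> probability assigned before the outcome is known
    scenarios: dict[str, float] = field(default_factory=dict)

    def check(self) -> None:
        total = sum(self.scenarios.values())
        assert abs(total - 1.0) < 1e-6, f"probabilities sum to {total}, not 1"

    def review(self, what_happened: str) -> str:
        """After the fact: did we anticipate this outcome, and at what probability?"""
        p = self.scenarios.get(what_happened)
        if p is None:
            return "We never wrote this scenario down; revisit the process."
        return f"We gave this outcome {p:.0%} at decision time; the process anticipated it."


plan = ScenarioPlan("Decision A: go with the two-year sales push", {
    "hit the sales goal": 0.40,
    "miss the goal by a little": 0.33,
    "miss the goal badly": 0.27,
})
plan.check()
print(plan.review("miss the goal badly"))
```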
One of the differences that I see between poker and maybe organizations is in poker,
you're making very individual decisions. You're the person making the decision. You're responsible
and accountable for those outcomes. Those lines in organizations tend to blur in the sense of
sometimes you're responsible for implementing somebody else's decisions. Sometimes they're
responsible for implementing yours. And sometimes you come to decisions as a group instead of
an individual. How does that affect how you think about this?
Well, first of all, I think that the process for coming to what the right decision is can be similar regardless, because what I was doing was certainly in the moment I was making my own decisions, but it was my responsibility to go talk to the people who were my peers that I was trying to learn with and work through and deconstruct those decisions and talk about strategically what I might be doing in the future.
So in some sense, that's a similar problem, right?
I have to go execute on my own.
but I'm interacting with a group in order to try to figure out what my best strategic plan is.
And I think that that's actually in large sense very similar.
That being said, obviously it would be really nice if I could go through a group decision-making
exercise at the table, but that would be bad because it's one player to a hand and you need
to play on your own.
But I actually don't think that it's that dissimilar.
And I think that you can sort of view as a leader the same thing, that yes, the person has to go
execute on their own. But it's my job as a leader to have worked through what the process is
and what the strategy is with that person before they go off and execute so that we're all bought
in. And they, and I understand what their reasoning is. They know that I've bought into the
process. They understand that I know that there's some probability that it doesn't work out,
but we've agreed that as they go and execute, that they're going to, you know, they're going to
execute on this plan and then they're going to come back and we're going to discuss later to see
how we might refine in the future, but they're not going to be held responsible for the bad
outcome. They're going to be held responsible for having worked through the process together.
And that's actually very similar, actually, to poker. So I'm executing on my own, but then I'm
going off to my group and saying, hey, can we deconstruct this session that I played so that I can
start thinking about what are different lines of play? And if I make these kinds of decisions in
particular situations, what are the different ways that I think that could turn out? And let's start
thinking about strategically what I should be doing so that when I go back and
execute on my own, I'm sort of executing what I've learned and what I'm sort of bought into
in terms of process. Do you keep a decision journal?
I actually don't. I'm not a journaler. And I wish that I were because I think it would be
way better. I'm much more of a... Yeah, I would love to go back and read your journals from that meteoric rise. Right? The highs and lows associated with that.
So I think, I think that people learn in different ways. And for me, the way that I really learned
was through conversation with others. The way that I found that I was best held accountable
to my own ideas was to expose those ideas to other people and have them challenge them. And that's just my learning style. When I imagine a decision journal, the kind of decision journal that I imagine would be, you know, go talk to people about what you've
changed your mind about today, right? It would all involve, I think, go talk to other people,
go find other people to speak to, go, you know, and then I also, because I had that group
when I was on my own and I was kind of thinking about those things on my own, I was always
imagining it as if I were talking to somebody else. You know, so for me, that internalized
conversation of the group became my own journal, you know, as opposed to physically writing it
down.
It's interesting, though, because, you know, even in the writing process of writing
the book, I don't write a whole lot of drafts because most of my drafting is in my head.
Like, you know, in yoga, and I guess this is bad because you're
supposed to only be on your mat, but in yoga I might be, you know, working through some ideas
or, you know, walking around or I'm driving or whatever
it might be. I'm actually composing in my head and I'm working through the structure and the
arguments and then I'm sort of imagining talking about those with other people and trying
to refine them. And then I do talk, well, here's what I'm thinking about. This is what I'm
thinking about writing in this section. And I'm bouncing it off other people and trying to hear
what they have to say and how it's playing with them. And I actually do a lot of keynoting, and that's a lot of what that is: you're putting these ideas out to the audience and
then sort of seeing how they respond back and getting that feedback. And that's allowing you
to figure out your narratives and the way that you're expressing it and whether you think the ideas
have fidelity. How are you being challenged when the people come up to you afterwards? So by the time
I ever get it down on paper, it's been really well journaled. It's just been journaled in this way
that my mind happens to work really well. What information would you require from somebody else,
or would you want from somebody else, to judge the quality of their decision,
not knowing the outcome?
Well, I think it depends on what the decision is.
What I think is really important that you bring up
is that there are details that I need to have.
And I think that you have to think for yourself
in whatever you're doing.
And you can work this through with a group of people
who are also experts in whatever you're doing
or trying to become experts in what you're doing,
on what the details are that you always need
in order to construct a good decision,
because you know that for yourself better than I do.
I can tell you from poker, for example, there's all sorts of details that I need.
What would they be in poker?
Yeah.
Sure.
I need to know how many people were at the table.
Were you in a cash game or a tournament?
If you are in a tournament, what point in the tournament are you?
How close are you to the money, for example?
Have you already made the money?
These kinds of things.
In a cash game, have you been winning or losing recently?
What does your stack size look like?
What's the form of poker that you're playing?
Are you in early position, middle position, late position?
In other words, where exactly are you sitting at the table?
What was the action prior to the hand, prior to the point that you have a question for me?
Were you the pre-flop raiser?
So this is all jargon.
Oh, keep going.
This is awesome.
Were you the pre-flop raiser?
Did you call? How many people were in between you when you called?
All of, you know, so because I can't, the problem is that if I don't know some very simple facts,
like, were you first to bet or second to bet?
Or were you the raiser or not the raiser?
Or how had the people been playing around you?
Like, was the person that you were playing against, did they tend to be looser or tighter?
Any of those questions, what did the board look like?
Then there's literally no point in me giving you any advice because I'm not going to give you
advice of any fidelity.
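A minimal sketch of turning that list of required details into a reusable checklist; the field names are drawn from the examples above, but the structure itself is just an illustration, not anything Annie describes:

```python
# Illustrative sketch: the hand context you would want to capture before asking
# for advice, kept as a checklist so nothing important gets skipped.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class HandContext:
    players_at_table: Optional[int] = None
    game_type: Optional[str] = None          # "cash" or "tournament"
    tournament_stage: Optional[str] = None   # e.g. near the money / in the money
    recent_results: Optional[str] = None     # winning or losing lately (cash game)
    stack_size: Optional[str] = None
    poker_variant: Optional[str] = None
    position: Optional[str] = None           # early / middle / late
    action_before_you: Optional[str] = None
    preflop_raiser: Optional[bool] = None
    opponent_tendencies: Optional[str] = None # looser or tighter
    board: Optional[str] = None

def missing_details(ctx: HandContext) -> list[str]:
    """Names of the details still missing before advice is worth anything."""
    return [f.name for f in fields(ctx) if getattr(ctx, f.name) is None]

ctx = HandContext(players_at_table=9, game_type="cash", position="late")
print(missing_details(ctx))  # everything you still need to ask about
```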
I was actually talking to a friend of mine, Liv Boeree, who very early on in her career I mentored and worked with.
She's a very, very good player now, but way back when, when she was first learning, I was working with her.
And I was talking to her about it the other day.
I said, do you remember when you first started playing with me and you come and describe
a hand to me?
And I was a little bit Erik Seidel-ish here.
And I was like, well, there's no point because you're not, I have no idea.
Like, where were you in the action?
Like, you can't tell me all three cards that came down on the flop.
I don't know who raised or who didn't.
Like, you just described a hand where you were the pre-flop raiser and somebody
called and then somehow they checked in front of you but they weren't in a blind. It doesn't
even make any sense. So why should I give you advice? It won't help you. It was vague and lacking
information or context. Right. So if I had given her advice in this particular case, it wouldn't have helped her, because I would have just been making things up, because I didn't have the details. So you can imagine fitting this to whatever you're doing: figure out what the details are for you. If you're in sales, figure out what you need to know.
Like, I mean, I can imagine that there are certain things you need to know. Like, where in the sales
cycle are you? Where in the fiscal year are you? What was your relationship with the person in the other
company? Did you have other relationships with people in the other company? Or was it just you just
had one point person and how had things gone in the past? You know, what kind of, during the
conversations, what were they telling you about where they were?
Like, how were they describing how they wanted to continue to have interactions with you? Was it vague? Was it, we're going to set a date to be able to do this? What are the strategies that you were using? How did you figure out when you were interacting with them and when you weren't? You know, I mean, I'm making this up, but I assume that these are... maybe salespeople can write you and tell me that was the most ridiculous thing that anybody ever said, but the point is, you should try to do that for yourself, I think.
Like, understanding the situation is great, and knowing what information the person had at the time of making a decision is super important. Is there anything else that comes to mind for you in terms of how you might
structure what you would want to know from people that may be less situation dependent, more
like what range of outcomes did you think was possible? So when you're actually working through
a decision with someone, you want context, for one thing. So I would want to know how things had
gone in the past for them with other decisions.
So in other words, first of all, I want to know how things have been working out for them
sort of over the long run.
So, like, I'll give you a really simple example.
If someone's coming to me and complaining about a problem that they're having in a
relationship, it's good if I know how things went in all other relationships.
So if they're telling me this person I've been dating is such a jerk and I know that the last
10 people have all been jerks, that changes the way that I'm thinking about,
the, what, you know, what kind of advice I might be giving them or how we might work together,
more importantly, through the decision than if the last nine people that they dated were all
amazing. And now they happen to be with a jerk. You can see how that really matters, that
context. Yes, I want to know when you were working through the decision, what information
was informing the decision that you were making. What were the possibilities that you thought
could happen? Because maybe I disagree with you. Maybe you say to me, I was 70% that this was going
to work out really well. And I'm like, ugh, that seems high. So now I can quiz you on it. Well,
why did you think it was 70%? Because gosh, it really feels lower to me. So let's explore that
together and see why that is. I want to know the, you know, particularly I want to know the
information that you were pulling mainly in order to drive the decision. Why did you make this
prediction in this particular way? Because maybe I have information that you missed that I can now
provide to you. Maybe I can start quizzing you. Well, do you think that there, can you think of any
information that you could have brought into it that maybe would have made you make a different
decision? What is it that you, do you think you could have found other information? In retrospect,
you know, did you go, you wanted to go toward a certain conclusion and so you drew this
particular information that you had or this detail or whatever it might be? Well, what about all
this other stuff that existed out in the world? Like, why weren't you considering that? Did you ask
yourself why it might not be true? Did you seek out the advice of other people? Did they tend to
be people who agreed with you? Were you asking people who disagreed with you? Did you ask
anybody why they thought you might be wrong? Like you know, these are all the kinds of questions
that you're always asking yourself if you really want to be a good decision maker. And hopefully
if we're in a decision pod, part of what I'm doing is asking you those questions. Because I know
that because of the way that your mind works, you're just less likely to ask them of yourself.
So am I. I'm less likely to ask them of myself. But it's easier to ask other people when you're
part of that group or tribe, as you mentioned earlier.
Because we have an agreement to it.
And I think that what's really cool about that kind of agreement is that going back to
the idea of how do you disagree without being disagreeable, again, I think it depends on
how you view the disagreement.
Because if we have an agreement that you're going to help me
with my decision making, you're going to help me construct an accurate view of the world,
If you withhold information from me, now that's hurting me.
You're violating our agreement by not giving me a perspective that disagrees with me, if you have one.
You're violating our agreement by not sharing information that you have that might moderate my opinion downward.
That's Annie Duke from episode 37, Getting Better by Being Wrong.
Check out the show notes for a link to the entire episode.
Last, we look back at a conversation with my friend Tobi Lutke, the co-founder and CEO of the e-commerce giant Shopify.
Toby is back on the show again in a few weeks with a brand new three-hour sit-down conversation with me in episode 152,
but the clip you're about to hear is from episode 41.
Toby is one of the most thoughtful and intentional business leaders I've ever had on the show,
and his thoughtful process when it comes to decision-making and how he learns from some of the mistakes
he's made is a fitting way to end this themed episode.
Here's Toby discussing one of those times he made a mistake in decision-making
and how he learned from the process, taken from episode 41, The Trust Battery.
I don't know what the hardest decision I ever made is. I can tell you the one I did the worst on. It was the most important decision, which I took too long to make.

So again, Shopify's story is a little bit different from most venture-backed, then public, companies, in the way that it started with snowboard selling, so it was actually profitable there. And then I stopped selling snowboards to focus completely on building Shopify, and then, through a lot of work and many years, eventually Shopify became a profitable company itself.

But my goal was, I wanted to build the world's best 20-person lifestyle business. That was really my goal with Shopify. Like, I just didn't love the idea of venture capital. I'm European, so I tend to think that companies exist to make money at a certain point. You know, using other people's money to just chase growth over everything seemed wrong to me.

But I had lots of evidence that Shopify really was a growth company. The venture capital model is for a certain kind of business, and it's a really good fit for that kind of business, and I think I knew that Shopify was one of those companies, and then I kind of artificially constrained it. So the decision I didn't make was: can I, and should I, transition Shopify from being a lifestyle business to a growth business?

And the reason why... so I feel now that I was the bottleneck on potential for Shopify for like a good year-and-a-half period, in which I just dragged my feet making this call.
And I'm so traumatized from that.
I never want to be a bottleneck of a company again.
And this was another one of those things that just pushed me into like I need to look after my own personal growth.
I need to be ahead of where the company needs me to be at and so on.
And, you know, eventually I made the decision in a very sort of data-driven way.
I saved up some money instead of investing it immediately into hiring someone new.
And once I had like $50,000 saved, I took five ideas we had, you know, like marketing ideas or ideas for how to grow Shopify, and just funded all of them at the same time and said, if two of them work, we are really a growth company that's being held back by its resources. And all five of them worked. So it became super bloody obvious. Yes. It became
very, very, very obvious. You mentioned you took too long to make the decision. How do you think
about speed when it comes to decision-making internally? Yeah, the most important thing, I tend to
talk with people about this a lot. I think the most important thing that people have to understand is
how undoable a decision is. If an idea is fully undoable, I want people to almost, you know, make it as quickly as they can. The problem is that, you know, you can never un-VC-fund yourself. So when a decision is something that you can't take back, that's when it's worth really, really understanding it. So in terms of decision-making, I don't think I can teach terribly new things. The most important thing is: get all the context and then make a decision. If you just do that, you're already doing a better job than the vast majority of people in business,
because almost everyone makes a decision and then gets data to support that decision.
So you're already out ahead if you do that.
And then your skill in decision making is directly proportional to your quality of information acquisition.
So how good are you at making decisions?
Like how good are you at acquiring information?
How far can you go?
How many resources do you have?
Do you have the ability to go directly to a database and ask it a question?
Do you have the ability to call the right people up to ask them about their experience?
Have you read the books already, which allow you to sort of identify a situation as something that's like something else,
where you can go and reread it to figure out, are you considering the same facts?
Those are the things that you need to cultivate as a skill.
And then lastly, one thing I started really early, which has been exceptionally useful: ever since this decision of turning Shopify into a growth company occurred, when I have to make a major decision, I have a small log file where I just put in one paragraph about the decision I made and what information I considered to be the most important, which pushed me in that direction.
And then I just sort of revisit that every half year and ask, was I right about this, given the benefit of hindsight?
Because eventually you know if your decision was right.
And so it's actually, if your job is to make decisions, it's worth treating it like any other kind of thing to get better at.
And so this allows you to do it.
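A minimal sketch of what such a decision log could look like in practice; the fields, review cadence helper, and example entry are illustrative assumptions, not Toby's actual format:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch of a decision log: one short entry per major decision,
# revisited roughly every half year to compare the reasoning with hindsight.

@dataclass
class DecisionEntry:
    made_on: date
    decision: str             # one paragraph: what was decided
    key_information: str      # the information that most pushed the decision
    expected_outcome: str     # what you thought would happen
    hindsight_note: str = ""  # filled in at review time

    def due_for_review(self, today: date) -> bool:
        # Roughly every six months, and only if no hindsight note exists yet.
        return today - self.made_on >= timedelta(days=182) and not self.hindsight_note


log = [
    DecisionEntry(
        made_on=date(2024, 1, 15),
        decision="Fund five growth experiments at once instead of hiring.",
        key_information="$50k saved up; repeated signals of unmet demand.",
        expected_outcome="If two of five experiments work, we are a growth company.",
    )
]

for entry in (e for e in log if e.due_for_review(date.today())):
    print(f"Review due: {entry.decision}")
```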
What have you learned from going back and reading that, not about the outcomes of decisions, but maybe more about the process by which you reached a decision?
Yeah, I think Kahneman calls it hindsight bias, right?
Like we have a very, very strong bias to underestimate how difficult it was to make a decision
and really just treat difficult decisions that were made as if they were obvious all along
because you now have obvious additional information afterwards, right?
So it cures you of that to a degree, which is really, really, really helpful for anyone who leads people.
It's also, like, I've just learned, you know, every single time I got a decision wrong, which just happens.
Like, I found that the piece of information I was missing was actually totally available to me.
And I just, you know, I just didn't go get it.
Is it because you didn't get it or you didn't realize it was going to be a relevant or a salient piece of
information? Usually I just didn't pull it in. It's like, you thought about it. Yeah.
Yeah. Like, you realize like, hey, this was a thing that would have actually made me change my mind and
that person knew it already. And so I didn't go to ask that person. So I didn't. Do you think subconsciously
that's you going like, I know this bit of information might make me change my mind, but I've already
made my mind up and I don't want to have to do the mental labor of going back and then... And that happens.
And then you have to be honest with yourself, saying, hey, you know, do you want to make decisions based on what the best ideas are, or based on being right, in a way, or getting your way, and so on?
Again, this is why it's a good practice, because it just forces you to recognize when you're making mistakes, right?
What's the most common mistake that you see people make over and over again?
It's, uh, people are terrible at deciphering cause and effect, or even correlation versus causation, right?
Like, again, I said this earlier, systems thinking is the best cure for this kind of thing, but there isn't always a cause for things. Much more often, there's a system that just reinforces something.
Like, everyone's complaining: why is the world of business so short-term focused? Well, it's because Wall Street wants quarterly reports, right? So, you know, it's a system reinforcing the thing that you want to fix. And then people love putting hacks on easily identifiable problems, and then think the problem goes away, even though the full thing that's reinforcing it is not being addressed. So that's what I see a lot.
You keep bringing up systems thinking. What does that mean to you in terms of how you want people to apply it here at Shopify?
Yeah, I mean, systems thinking teaches you to draw diagrams of a certain kind, right?
Which really are, like, hey, let's zoom out.
Let's declare the boundaries of our system and leave out all the stuff that doesn't matter.
But within it, let's really figure out what forces exist and how they balance the loops, how they reinforce the loops.
And once you do that, part of what is so great about just this exercise is that it's almost impossible for a room of people... like, everyone in a room can talk about the same thing and mean completely different things.
But if you're writing a systems diagram on a whiteboard afterwards, there is sync, like there's a sync.
Like people will, like if someone has an assumption about that system working differently, that will come up.
And so I think that's why it's so powerful.
So that's the actual way of how I wanted to...
By exposing, it forces people to expose how they're thinking about something in terms of interactions,
which allows people to kind of challenge, oh, I don't think it's that way.
And then you get to a deeper version of reality or understanding through that.
But there's also this entire other thing that's also acting on this.
Is that relevant?
And then everyone's like, oh, my God, you're right.
And then suddenly you made progress against coming up with a solution.
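A minimal sketch of the reinforcing-versus-balancing distinction behind those diagrams; the node names and the example loop are invented for illustration and are not from the conversation:

```python
# Minimal sketch: in a causal-loop diagram, a loop whose signed influences
# multiply to a positive number is reinforcing; a negative product means balancing.
from math import prod

# Each edge: (cause, effect) -> sign. +1 means "more of A pushes B up",
# -1 means "more of A pushes B down". Example nodes are illustrative only.
edges = {
    ("quarterly_pressure", "short_term_focus"): +1,
    ("short_term_focus", "quick_wins_shipped"): +1,
    ("quick_wins_shipped", "quarterly_pressure"): +1,
}

def loop_type(loop: list[str]) -> str:
    """Classify a closed loop of node names as reinforcing or balancing."""
    signs = [edges[(loop[i], loop[(i + 1) % len(loop)])] for i in range(len(loop))]
    return "reinforcing" if prod(signs) > 0 else "balancing"

print(loop_type(["quarterly_pressure", "short_term_focus", "quick_wins_shipped"]))
# -> reinforcing
```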
I think that's a great point to leave this on, Toby.
This has been a fascinating conversation, and maybe we'll continue for part two next year.
Awesome.
Let's do it.
Take care.
Yeah.
Well, I'm happy that we made it work out and that you got to be in my hometown.
In Philadelphia, yes.
Philadelphia, yes.
Thanks so much, Venkatesh.
This has been great fun.
Thank you for taking the time. The conversation was amazing.
Yeah, it was a lot of fun to be here. Thanks, yeah. Thank you so much. Thank you.
Maria, this has been a phenomenal conversation. I want to thank you so much for your time.
Thank you. It has been really interesting and thought-provoking. You've asked questions that nobody has ever asked me.
Thanks for listening and learning with us. For a complete list of episodes, show notes,
transcripts, and more, go to fs.blog/podcast or just Google The Knowledge Project. Until next time.