a16z Podcast - How to Decide, Convey vs. Convince, & More
Episode Date: October 8, 2020

It seems like investors are especially obsessed with the psychology of decision making -- high stakes, after all -- but all kinds of decisions, whether in life or business -- like dating, product management, what to eat or watch on Netflix -- are an "investment portfolio" of decisions... even if you sometimes feel like you're making one big decision at a time (like, say, marriage or what product to develop next or who to hire).

Obviously, not all decisions are equal; in fact, sometimes we don't even have to spend any time deciding. So how do we know which decisions to apply a robust decision process to, and which ones not to? What are the strategies, mindsets, and tools to help us decide? How can we operationalize a good decision process and decision hygiene into our teams and organizations? After all, we're tribal creatures -- our opinions are infectious (for better and for worse) -- so how do we convey vs. convince, and not necessarily agree but inform to decide? Especially given common pitfalls (resulting, hindsight bias, etc.), and "the paradox of experience", including even (and more so) winning vs. losing.

Decision expert (and leading poker player) Annie Duke comes back on the a16z Podcast -- after our first conversation with her for Thinking in Bets, which focused mainly on WHY our decision making gets so frustrated -- to talk about her new book, which picks up where the last left off, on HOW to Decide: Simple Tools for Better Choices. In conversation with a16z managing partner Jeff Jordan (and former CEO of OpenTable and former GM of eBay among other things) -- so, from all sides of investing, operating, life -- Annie shares tips for decision makers of all kinds making decisions under uncertainty... really, all of us.

The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.)
Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
Transcript
Hi everyone. Welcome to the a16z Podcast. I'm Sonal. Today we have another one of our early book launch episodes for a new book coming out next week by expert decision strategist and leading World Series of Poker player Annie Duke. You can catch the podcast we did with her a couple years ago for the paperback release of her first book, Thinking in Bets. That episode was me and Marc interviewing Annie and was titled Innovating in Bets, which is perhaps also one of the signature themes of this podcast. But in this episode, we talk about
her new book, How to Decide, which picks up where the last book left off, and the discussion that
follows covers lots of useful strategies, tools, and mindsets for helping all kinds of people and
organizations decide under conditions of uncertainty. Annie is interviewed by a16z managing
partner Jeff Jordan, who was previously CEO and then executive chairman of OpenTable,
former GM of eBay North America, and much more. They begin by quickly covering common
pitfalls in decision making, then share specific tools, what not to do and what to do, including how to
operationalize good decision hygiene into teams and when to spend time deciding or not, especially
when not all decisions are equal and some may seem bigger and more impactful, whether it's
investing in life decisions like getting married or business decisions, such as what product to
invest in or what strategy to pursue or what market or what investment. As a reminder, none of the
following is investment advice, nor is it a solicitation for investment in any of our funds.
Please be sure to read a16z.com/disclosures for more important information.
So, Annie, as the author of one of my favorite books, what motivated you to do a sequel,
your new book, How to Decide: Simple Tools for Making Better Choices? How do you decide?
So when I think about what Thinking in Bets was about, it was really the way that our decision
making gets frustrated by this kind of decorrelation between decision
quality and outcome quality. And then toward the end of that book, there was kind of a little bit of
an exploration about how you might become a better decision maker given the uncertainty, but it was
mostly a "why" book. And so this is really trying to lay out for people: how do you actually
create a really solid and high quality decision process that's going to do two things?
One is to get a pretty good view on the luck, which you need to do. You need to be able to see it for
what it is. Obviously, you can't control it, but you can see it. But then the other thing, and I think
this was something that was really fun. I got to really dig deep into this problem of hidden information
that when we're making decisions, we just don't know a lot because we're not omniscient and we also
aren't time travelers. And so I got to actually do this really deep exploration into how you
might actually really improve the quality of the beliefs that you have that are going to inform your
decisions, which was a topic I covered a tiny bit in Thinking in Bets, but here we do like a super deep dive.
It is a super deep dive.
And why I love your books is it's so germane to what we do in our day job, which
is make decisions under extreme uncertainty.
So to recap why trying to learn from experience can go sideways.
Sure.
So, you know, both of my books kind of start a little bit in the same place and then they
diverge from each other.
But I think that's because it's the most important place to start.
What I talk about at the beginning of this book is what I call the paradox of experience,
which is: obviously, experience is necessary in order to become a better decision-maker.
You do stuff, the world gives you feedback, you do more stuff, the world gives you feedback.
Hopefully, along the way, you're becoming a better decision-maker given that feedback.
The problem is that any individual experience that we might have can actually frustrate that
process. We can learn some pretty bad lessons when we take any individual piece of
feedback that we might get. So experience while necessary for learning also is one of the
main ingredients that makes us worse decision makers. And it really just kind of comes from this
problem that in the aggregate, if you get 10,000 coin flips, we can say something spectacular
about the quality of our decisions and what we should learn or not learn from them. But that's not
the way that our brains process information; our brains process information sequentially, one
at a time. And because we're sort of getting these outcomes one at a time, we're just taking
really big lessons from something that's really just one data point. And the two main ways that
that frustration occurs is because of resulting, which obviously I cover quite a bit in Thinking
in Bets, where we use the quality of an outcome to derive the quality of the underlying
decision. You can run red lights and get through just fine. And you can run green lights and get in
an accident. So these things actually aren't correlated at one, but with resulting, we act like they
are. And then the other problem is hindsight bias. We aren't really good at sort of reconstructing
our state of knowledge at the time that we made a decision. And so once we know the outcome,
we not only kind of view that as inevitable, but we'll also sort of think we knew that that outcome
was going to occur, none of which are true. So those two things combined are really problematic.
You had this beautiful imagery of decision forestry, which resonated with me.
I sort of think of them as cognitive illusions. What those illusions are creating for us is the idea
that it's the only outcome that could have occurred. In reality, though, what we know
is that at the moment that we make a decision,
there's all sorts of different ways
that the future could turn out.
When we're at the moment of a decision,
we can see all those branches of the tree
where I become a fireperson
or I become a poker player
or an academic or whatever.
You know, we're sort of imagining
all the different ways that the future could unfold.
But then after the future unfolds as it does,
we take a cognitive chainsaw to that whole tree
and we just start to lop off the branches that we happen not to observe.
In other words, we sort of forget about all the counterfactual worlds.
And we end up thinking that there was this only this one branch that could have happened
because we sort of chainsawed everything else away.
We sort of forget that there were other paths that could have occurred.
How do you keep the forestry from lopping off the branches?
As you start turning to how, you started with some really useful tools.
So there are two tools that you can use when you're thinking about that, actually.
The first has to do with trying to reconstruct the actual state of knowledge that you were in.
When you think about what did I know beforehand and what did I know afterwards,
you can now start to sort of reasonably see what was the information that was informing the decision at the time.
When you actually go through this process, you'll spot, no, wait a minute, that was something that revealed itself after the fact.
That's one thing that can be very helpful.
Another thing that can be very helpful is to actually go through this process of thinking about this two-by-two matrix of the
relationship between decision quality and outcome quality. So there's a quadrant, which is good
decision, good outcome, which you can think of as like an earned reward, good decision, bad
outcome, that would be bad luck. Bad decision, good outcome, dumb luck. Bad decision, bad outcome,
I guess would be like your just deserts. When you're thinking about any outcome that you've had in
your life, if you do that over time, what you're going to see is that you're going to have certain
patterns about which quadrant you're really filling in a lot. So if you're seeing that you're
really only putting things into, like, good-good and bad-bad, you need to start seeing how luck
is influencing you. And then the other thing you want to do is to start thinking about particularly
the good, good quadrant because we are asymmetrically willing to go and try to find some luck in there.
Let me explain what I mean. So if you have a bad outcome, you already feel bad. You're sad because
you lost. And it's kind of nice to go in and deconstruct that and analyze process and really
look at the quality of the decision that led to that outcome. Because if you find some bad luck in there,
you get a little relief. You kind of get off the hook. Right. It's like a door out of the room.
Luck is giving me a way out of this. So we're actually pretty eager to go around and explore
those bad outcomes. What we're not so eager to do, though, is when we have a good outcome to apply
the exact same process, to actually spend some time in there thinking about, well, you know,
what was luck? Or was there a better way? And the reason why we don't want to look at that is because
we feel pretty good. If you find out that you won because of luck, that's a door that you actually
don't want to have open for you. So I actually put a lot of focus when I'm thinking about using
this tool of really digging into that one quadrant. And what you can see is in order to actually
be thinking about which quadrant it fits in, you have to actually apply this other tool,
which you can do in retrospect, which is actually to do some exploration of like, what are the
other things that could have happened? Because if you don't understand those counterfactuals,
it's very hard to actually appropriately place any outcome into the right quadrant. So I have tools
in the book, which will help you sort of reconstruct these things retroactively. It's kind of
interesting. The investment community often tries to capture the thinking at the time through the
investment memo, which then, you know, records, okay, these are the potential outcomes that
we can envision. Here are the probabilities of the different outcomes. And in total, we're willing
to make this bet, even though there are some outcomes that are pretty unattractive, to say the
least. And that absolutely, if you think about a knowledge tracker, that's what you're doing.
It's like you're trying to reconstruct an investment memo. It's better than nothing. But you really
kind of want to be doing this stuff prospectively. You want to have some sort of record of not only what
you thought at the time, but also exactly what you said, like, what are the ways that we think
this could turn out? Like, what are the payoffs to each of those possibilities? How probable do we
think those are? So that you can actually look at, generally, two things: What's the expected value?
What's my downside risk? And then you can obviously compare options to each other.
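As a minimal sketch of that kind of prospective record -- for each option, write down the ways it could turn out with a payoff and a probability, then compare expected value and downside risk -- something like the following works. The option names and numbers here are invented for illustration, not anything from the episode.

```python
# Hypothetical prospective record: each option lists (payoff, probability) pairs
# for the ways we think it could turn out. All values are made up for the example.
options = {
    "Option A": [
        (100, 0.30),
        (20, 0.50),
        (-50, 0.20),
    ],
    "Option B": [
        (60, 0.40),
        (10, 0.40),
        (-10, 0.20),
    ],
}

for name, outcomes in options.items():
    total_prob = sum(p for _, p in outcomes)
    assert abs(total_prob - 1.0) < 1e-6, f"{name}: probabilities must sum to 1"
    expected_value = sum(payoff * p for payoff, p in outcomes)   # weighted average payoff
    worst_case = min(payoff for payoff, _ in outcomes)           # downside risk
    print(f"{name}: expected value = {expected_value:.1f}, downside = {worst_case}")
```

The point of writing it down at decision time, rather than reconstructing it later, is exactly what the conversation turns to next: the record falls out of the process for free.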
What I think is actually really important, though, about thinking about this like evidentiary record
that you'd like to create at the time of the decision as opposed to try to reconstruct is that
it's not actually an extra step.
Like people talk about decision journals, which feels like work because it feels like an extra step
where you've made the decision and now you're trying to record everything.
The fact is that a really great decision process is going to produce this evidentiary record
naturally.
And obviously we'd prefer to have that because what the evidentiary record is giving you,
what that investment memo is supposed to give you is sort of what your expectations
of the world are.
Not just like, do I think I'm going to win or lose at this probability, but also like what
do we think is going to be true of the world in general.
What I find in my work is that when people
lose, they'll do these process dives. The problem is when there's a big win, they're like,
we won. Yep, exactly. When an investment goes bad, you do spend time trying to say, okay,
what can I learn? What can I do differently? And then when it goes well, you just spike the ball in the end
zone and do a dance. And we really are just like spiking the ball. But there's so much to be learned
from the wins as well. And I would argue actually more, particularly, by the way,
when the power law is in play. In a lot of ways, there's more to be learned from the wins than the
losses, right? Because the thing is, like, you know, you can win for all sorts of reasons that you
didn't expect. And yet we spend a lot more time in our decision process exploring the losses
that were for reasons that we expected than the wins that might have been reasons that were
unexpected. Maybe we could have cleaned up the process or there was information that we were missing
that we could have applied, so on and so forth. We're kind of losing a lot
of the learning time. We're not being very efficient when we do that. And the other problem with
that is actually that that has downstream effects that are quite bad: I'm going to do things
that are very consensus-driven. So I'm going to want everybody to agree with me. Yeah, that resonates a lot.
So you take on using a pro-con sheet. And it was funny, I was cleaning up offices a couple years
ago, and I found sheets in different places, aggregated by career decisions. And, you know,
I came to the conclusion that they were pretty much worthless. And so you come to the same
conclusion in the book. Why are pro-con sheets worthless? So let me just say, a pro-con list is
actually a decision tool. And if you have a choice between that and nothing, I think a pro-con list
is very slightly better than nothing. But here are the problems with a pro-con list. The first is
that it's flat. It lacks any dimension.
It's like a side-by-side list. Here are the pros. Here are the cons. And I don't really understand
how you would weigh one side against the other without adding some dimension to that list.
And that dimension would be two things. One is, how bad? What's the magnitude? The other
dimension that's missing, which is terrible, is probability. So in that sense, I'd rather just use
the decision tree. And for an option that I'm considering, I want to just think about what are the
reasonable possibilities, what are the paths for those, and what are the probabilities of those
things occurring? And then I can add that dimension back in. Without that dimension, it's not a great
tool for comparing one option to another, because again, I can't calculate like any kind of
weighted average here. If, like, I'm choosing between two colleges, is the one with more pros
the one I should go to? I really kind of don't know, because I don't have this dimension.
And then the third problem, which I think is actually the most dire, is that what we're really
trying to do is to reduce the effect of cognitive bias. Pros and cons lists actually amplify all
of that stuff. It's a kind of a tool of the inside view. And let me just say for people listening,
I imagine some people saying, no, when I go to make a pros and cons list, I haven't decided yet.
I have news for you. The minute you start thinking about a problem, you've already started deciding,
you know, regardless of whether you've made that explicit or not, you've already started to get yourself
to a conclusion. And now when you go to do a pros and cons list, this is going to amplify the conclusion
that you already want to get to. So I think it's just not a very good tool.
My worst career decision by a mile was joining a company called reel.com right at the beginning
of the internet era. It was being purchased by Hollywood Entertainment, which ran the Hollywood
Video stores. And it was a bad decision. I unwound it in a year, I got scars. But when I went back
and looked at the pros and cons, the pros were aspirational and the cons were delusional. I clearly had
decided before I started the list. Yes, exactly. When we start to use something that feels
objective, like a pros and cons list, we get that feeling of like, well, now I can have confidence
that it's a really good decision. So one of the things that I'm very wary of is that I think that
there's certain things that can come into a decision process that feel like it's certifying
the process. So we end up with this combo of a decision that isn't really better, but that we feel
is much more certified. I love the tools you describe using: the decision trees, the prospective
gathering of information. Then you took your "how" in an interesting direction. I really enjoyed
the part on spending your decision time wisely. So it's a book, it's a book about, you know,
making great decisions. And then you start talking about all the decisions that you shouldn't
apply it to. So I spend the first six chapters really kind of laying out what a pretty
robust decision process would look like. And then I sort of take a hard left and I say,
okay, so now that you know, mostly you shouldn't be doing that, which I know sounds a little bit odd,
but it's this meta skill of understanding that obviously you can't take infinite time to make
decisions because opportunities expire and you're losing the ability to do stuff in between.
And so we want to really think about what types of decisions merit taking time and what types of
decisions merit going fast. And it just turns out that most of the decisions that you're going
to make on a daily basis are ones that you should be going fast on, much faster than you actually
do. And in some ways, I think that people sort of have it reversed. Throw out a couple examples,
because that's where it really came alive to me. Okay, so let me ask you this. What's your guess,
obviously pre-pandemic? What's your guess on the average amount of time that an adult in America
takes on what to watch on Netflix, what to wear each day.
I mean, at the moment, it's sweatpants, but we'll ignore that and what to eat.
If you're my mother-in-law, she spent a half hour every time we went to a restaurant.
So, like, she's not even that much of an outlier.
If you add it all up over the course of a year, the average adult is spending between six
and seven work weeks, like literally on just those three decisions.
I'm sure she's looking at the menu and then quizzing, like, all the waitstaff
and asking everybody else at the table what they're going to order.
trying to go back to the chef.
Looking on Yelp.
So here's my question for you.
Let's say that we ate a meal together.
And you were trying to decide between two dishes.
Like, what are two dishes that you would have a hard time deciding between?
Fish and a good veggie stew.
Okay.
Okay.
So you're trying to decide between those two things.
If you're your mother-in-law, you're quizzing everybody.
So let's imagine that you ordered the veggie stew.
And it came back.
And let's imagine you got this bad outcome where the food was really yucky.
And you didn't even finish it because it was so gross.
So now let's imagine it's a year later and I say, hey, Jeff, how are you feeling right now?
Happy or sad?
So you remember that horrible veggie stew you had a year ago?
How much of an impact does it have on your happiness today?
Zero.
Zero.
Okay, so let's imagine I catch up with you in a month and I say, hey, Jeff, feeling happy or sad right now.
Do you remember that horrible veggie stew you had like a month ago?
How much of an effect does it have on your happiness today?
None.
None.
What if I ask you a week later, by the way?
None.
None.
Now, if it had been the fish and it had been bad, the week maybe.
Maybe, but not the veggie stew. But not the veggie stew.
Okay.
So what I just walked through with you is something I call the happiness test.
I use happiness generally as just a proxy for, are you reaching your goals?
Because we're generally happier when we're reaching our goals.
So you can substitute any goal that you have in there.
And this is a way for us to figure out how fast we can go.
Because basically, the shorter the amount of time in which your answer to the question is,
did it affect your happiness at all, is no, the faster you can go. Why? Because there's a
trade-off between time and accuracy. So in general, not always, but in general, the more time
we take with the decision, and there's more time for us to like map these things out and actually
calculate like expected values and figure out what the volatility might be or gather information,
get more data, all of those things. Generally with time, we should be increasing our accuracy.
So that's why we can speed up, I'm assuming no food poisoning here, that when we look at the worst of those outcomes, it has no effect.
It's neither here nor there, which means that we can take on the risk of saying, I'm going to spend less time because I'm willing to risk the fact that I might increase the probability of the worst outcomes because it doesn't really matter to me.
And then you make another point: you can repeat the decision the next day at the restaurant and order the fish instead of their tasteless stew.
that's the other thing that you can look at, which is when you have these low-impact decisions
that are quickly cycling and they repeat very quickly.
So that's like what to watch on Netflix, what to wear, what to order in a restaurant.
We should go really fast for two reasons.
One is you're going to get another crack at it in like four hours.
And then the other is that one of the things that we actually don't know well, although we
think we do, is like our own preferences.
We've all had that experience of having a goal, achieving it and realizing that wasn't really
what we wanted in the first place.
And then there are certain types of decisions where it's just really helpful to sort of get some feedback from the world.
So when we can actually cycle these decisions really quickly, I'm not really too worried about like making sure I'm making the best possible decision in terms of accuracy.
What I'd rather do is get a lot of cracks, get a lot of at-bats, so that the world can start giving me information back more quickly and I can start cycling that feedback a lot faster.
Then I'm going to build much better models of the world and what my own preferences are and what my own goals
are and what my own values are and what works and doesn't work, such that when I do actually
make a decision that really matters, my models of the world are going to be more accurate
by having just sort of like done a whole bunch of stuff really fast and not really cared
whether I won or lost. That makes perfect sense. Now, one of the chapters that I loved was
decision hygiene. I found this book fascinating from the perspective as both an investor and a former
operator. I mean, yeah, investors, it's obvious. You're making two or three investments a year. You're
seeing hundreds of companies. How do you decide? But as an operator, there are a few decisions
you make each year that are super, super important. In particular, the ones that I used to labor over
was, okay, you have to commit, you have to invest your product resources, your most valuable
asset, your engineers into specific deliverables. Is it going to be A, B, or C, and that's the most
important decision I made all year other than possibly people decisions. Explain a little bit
how you can maintain great decision hygiene.
It resonated in both my professional experiences
in a really significant way.
I have to say like the decision hygiene stuff
and the ideas of predicting these intermediating states of the world
apply so much in a startup environment
because obviously kind of the nature of a startup
is that you do have very little information
and you're making pretty big bets on a future
that by definition is going to be somewhat contrarian.
So making sure that you
don't get into this kind of groupthink. Like, I'm not saying don't believe in yourself,
of course, but this is actually a way to have more belief in yourself, because the quality of
the decisions that are going to come out of a good decision hygiene process are going to be so
much better. And that becomes much more important in a situation where we have a paucity of
information. And then it starts to actually close feedback loops more quickly for you, which also
increases the quality of your models and information. So I actually can't think of a place where
this is more important than in a startup environment.
So let me just start kind of the premise why you need some decision hygiene.
I don't have control over luck.
What I can do is I can make decisions that reduce the probability of a bad outcome.
You know, even if I make a decision that's only going to have a bad outcome 5% of
the time, I will still observe it 5% of the time.
And luck is what is determining when I observe that bad outcome.
So that's kind of one side of the puzzle.
The other side of the puzzle has to do with how you construct your decision process.
What do you think your goals are?
What do you think your options are? What do you think your resources are? What do you think those
possibilities are for any given option you're considering? What do you think the probabilities of those
things occurring are? Basically, your whole process is built on this foundation, like that whole house
is sitting on top of a foundation, which is your beliefs. And by beliefs, I don't mean things like
religious beliefs. I mean just like, what are your models of the world? How do you think the world
operates? What are the facts that you have? What's the knowledge that you have? And that foundation
that that whole process is sitting on has two problems. One is that a bunch of the things
we believe are inaccurate. So it's like cracks in the foundation. And the other is that we don't know
very much. So it's like a flimsy foundation. The solutions to both problems are the same,
which is that we need to start to explore that universe of stuff that we don't know. That's where
we run into new information. That helps us beef up our foundation. And it's also where we happen
to run into corrective information, things that can correct the inaccuracies in the things that we
believe. The other thing that helps us, and we were talking before about how the pros and cons
list gets you kind of caught in your own cognitive bias, is to realize that a lot of the cure to
those kinds of problems is to get other people's perspectives. So two people can be looking at the
exact same data, and they can come to very different conclusions about the data. That's what
a market is. It's different perspectives colliding. So having set that stage, one of the best things
you can do for your decision making is finding out what other people know and what their
perspectives are on the problems that you're considering. The problem is that without really good
decision hygiene, you're not actually going to be able to execute on that properly.
So let's figure out how do we get this into a team setting.
Basically, human beings are very tribal and we like to sort of agree with each other
more than we actually do.
And our opinions are really actually infectious.
So in order for you to know that you disagree with me, what is the thing that you need to know
from me first?
What do you think?
Right, exactly.
And this is where we get into this huge problem in interpersonal
communication. When people ask for feedback, pretty much 100% of the time, they tell the person
what they believe first. I'm thinking about a particular sales strategy or whatever, and I will lay
out for you, not just the information that you need, but I also tell you my opinions on that.
Your unbiased opinion, right? Now that I've biased you hugely, right? Right, exactly.
So the reason your decision hygiene point was so interesting to me is you called out one of my
tools that I used as an operator, which was quarantining in group settings. I found at OpenTable that
if I walked in and had a, you know, put a strategic, you know, choice on the table, there was one
and a half people in there who would drive the discussion and their opinion would always carry the
day. Yeah. So I developed a tool where, for very important, big-time strategic decisions,
I would ask everyone to send me their list of prioritizations, and then I would aggregate them,
and then feed that back to the group to heighten the contradictions, essentially.
The quiet person who didn't really want to put out a contrary point of view and spar with the other person:
all of a sudden, the data's on the table because you quarantined the gathering of it.
And then I found the conversation was so much better than just throwing it open and having, you know, the charismatic, loquacious, opinionated person carry the day every time.
I could quarantine my opinion, but as soon as someone else talks, as you just so nicely put it, it's like everybody else is infected anyway.
I'm just a really huge fan of pre-work.
Figure out what it is that you're trying to get feedback about.
Give everybody the same information and then actually elicit those responses.
Now, the more specific, the better.
So I like them to rate it, right?
Give it to me on a scale of zero to five.
Because then I can find out like Jeff is a four and, you know, Annie's a two.
And maybe Jeff and Annie need to have a conversation because it turns out that there's quite a bit of dispersion of opinion there.
What this allows you to do is, first of all, it actually disciplines your decision process because you have to think about what are the things that matter to this decision that I'm trying to elicit opinions about.
And let me be clear, it's not that I don't think people should provide rationales.
I think those are actually quite important.
It's just that they need to have something that's much more precise.
It's like a point estimate, right?
Because I need to see where the dispersion is and then let them give the rationale.
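As a toy illustration of that pre-work elicitation -- everyone independently scores each option on a zero-to-five scale before the meeting, and the aggregator looks for dispersion to decide what deserves discussion time -- here is a small sketch. The names, options, scores, and the dispersion threshold are all invented for the example.

```python
from statistics import mean, stdev

# Hypothetical pre-work: each person independently rates each option 0-5
# before the meeting. Names, options, and scores are made up.
ratings = {
    "Option A": {"Jeff": 4, "Annie": 2, "Sonal": 3},
    "Option B": {"Jeff": 5, "Annie": 5, "Sonal": 4},
    "Option C": {"Jeff": 1, "Annie": 4, "Sonal": 2},
}

DISPERSION_THRESHOLD = 0.9  # how much disagreement warrants meeting time (arbitrary)

for option, scores in ratings.items():
    values = list(scores.values())
    spread = stdev(values)  # dispersion of opinion on this option
    status = "discuss in the meeting" if spread > DISPERSION_THRESHOLD else "general agreement"
    print(f"{option}: mean={mean(values):.1f}, spread={spread:.2f} -> {status}")
```

The same idea works for the hypothetical-budget version Jeff describes next, with dollar allocations in place of ratings.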
I used to give a hypothetical budget.
You have a million dollars.
Here are the 12 ideas you can invest behind.
Deploy your budget.
And each person would deploy.
And then all of a sudden you got something that's really powerful.
And you got, oh, you love this idea and you hated this idea.
Let's discuss the idea.
Right, which I love.
So, exactly.
So you can actually see that they disagree with each other or see that they do agree with each other.
It also makes you actually think about what are the component parts of this decision that really matter.
You can start to actually create for yourself almost like a little bit like a checklist,
but here are the things that we need to pay attention to and that I actually need to get the feedback on.
So these are what Kahneman would call mediating judgments.
You're thinking about what are the mediating judgments for any broader category that you might be judging on.
and that helps you to really discipline the decision process.
You then bring that together in one doc
and you sort it into here are areas of agreement,
here are areas where there's some dispersion.
People get to read that prior to coming into the meeting.
So they've actually seen sort of the full slate now
of what the different opinions are in the group.
This does really great things for your meetings.
It makes them much more efficient, much more productive.
You're not surfacing all that stuff in the room,
which just takes a long time.
Absolutely.
And by the way, you're not going to surface all of it anyway.
So that's bad.
But the other thing is that now you can come in and you can say,
here are areas where we generally agree, yay, us,
but let's not talk so much about the fact that we agree,
which is what happens in a lot of meetings where you'll say something, Jeff,
and then I'll go, I agree with Jeff, and let me tell you why.
And then somebody else is like, yes, and I have more color to add to that
because everybody sort of wants credit for that idea.
But we don't care now because we already found out we all agree, yeah, yes.
Right? The earth is round.
Cool.
Right?
But now it turns out that Annie thinks the Earth is flat over here.
Okay, so now what are we going to do?
Jeff thinks the Earth's round, Annie thinks the Earth is flat.
And that's where you really want to be focusing your time on places where there's dispersion.
And you want to focus that time in a way where it's not about convincing anybody of your opinion.
It's about just informing the group.
And then if anybody sort of agrees with you, I'll say, hey, you know, Sonal, you also agree that the Earth is round.
Is there anything you want to add to that?
So you'll get to say your piece.
And then, Annie, you believe the earth is flat.
Is there something you didn't understand?
Now notice, in no way is anybody saying you're wrong or you haven't thought about it this way or whatever.
It's I get to tell you here's something I don't understand.
And then we sort of get to the point where I say, okay, explain your position.
There's really amazing things that come out of that process.
Thing number one is you get much more comfortable with the idea that everybody doesn't have to agree.
Number two is people have different mental models.
and so you get to expose everybody to those different perspectives
and the different facts people are bringing to the table.
So the whole group becomes more informed, which is awesome.
The third thing is that the person who is conveying their position becomes more informed.
Why?
Because in the process of having to defend why I believe the earth is round,
I discovered that I actually can't explain that very well.
So maybe I have to go Google some stuff or look it up.
And there's going to be good stuff that comes out of that
because I'm going to be more likely to actually moderate
because I'm sort of poking around in my knowledge a little bit.
And then the last thing I think that comes out of this that's really good is that once you get
into this idea of convey versus convince, you realize that you don't need to agree to decide.
You need to inform to decide.
And that the idea that all of you would be in total agreement about whether you should do something
or not is completely absurd, because we don't have to be. That's the whole point.
If you thought that that was the goal, why do you have more than one person on the team?
Yeah. You want a diversity of opinion. And if you don't tease out the different opinions, you make an inferior decision. I actually thought this was one of my management secrets, and you just outed it in your soon-to-be-bestselling book.
Yeah, so actually, what's interesting about that problem, I think that teams often act like a pros and cons list, where we have the intuition that more heads are better than one.
So when we bring more heads into a decision, you have this decision that feels much more certified.
But what we know is that when you allow people to make these decisions in sort of committee style, like in a team meeting, that the decision quality often isn't better.
And there's lots and lots of science that shows this.
So one of the things in venture that is often cited as a challenge in decision making is the feedback loops can be forever.
What's your take on that feedback loop and decision making?
Yeah, so basically my take is that there's actually no such thing as a long feedback loop, which I know sounds weird, right?
Because obviously you're saying like we invest in a company, we find out how it exits like 10 years from now.
Isn't that a really long feedback loop?
But the thing is, I mean, going back to this idea that when you make a decision,
it's a prediction of the future, it's not like you're just predicting what the exit is going
to be. You're predicting a whole bunch of intermediating states of the world. And that might be
just like, for example, like what is the arc of the ability to attract talent for this particular
founder? Just like as an example, right? You know, obviously he's going to fund it the next round.
It's a good example, the funding management team. Right. If you knew for a fact that they weren't
going to be able to hire a good team, you wouldn't invest in them. So it's really good to sort of make
predictions about those things and make them probabilistically because as you're making these types
of forecasts, and now over the course of a much faster time period, you're starting to see, when we say
that there's a 60% chance that this intermediating state of the world is going to exist, does it
absolutely exist 60% of the time? Because in the end, the thing that's so important to understand is that
you are saying that you're an expert at the market that you're investing in.
So you want to be explicit about the things, those predictions that you're making about that
market, both near term and far term, so that you don't have to wait around 10 years.
Because the thing is, you're going to have to make another investment in between.
You can't just make the investment, wait 10 years, get your feedback and then make another investment.
And now if you're actually being explicit in the way that you're thinking about those things,
you can actually create much tighter feedback loops.
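One way to read that: treat each explicit probability you attach to an intermediate state of the world as a forecast, and periodically check calibration -- of the things you said were about 60% likely, did roughly 60% actually happen? A minimal sketch, with entirely invented forecasts, might look like this:

```python
from collections import defaultdict

# Hypothetical record of explicit predictions about intermediate states of the
# world: (stated probability, whether the state actually came true).
forecasts = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),
    (0.3, False), (0.3, False), (0.3, True),
]

buckets = defaultdict(list)
for stated_p, happened in forecasts:
    buckets[stated_p].append(happened)

# For each stated probability, compare it with the observed frequency.
for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    observed = sum(outcomes) / len(outcomes)
    print(f"said {stated_p:.0%} -> happened {observed:.0%} (n={len(outcomes)})")
```

In practice you would bucket by probability range and need a reasonable sample size per bucket before the comparison means much, but the point is that these intermediate forecasts give you feedback long before the ten-year exit.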
So you disaggregate it into a set of milestones?
Right.
There's no reason you can't do that out in the world.
One of the knocks that people will say about poker is, oh, but you get really fast feedback,
and so poo on you.
And I'm like, well, yeah, except that it's really just a compressed version.
There's the end of the hand, which is what you're thinking about.
But in between the start of the hand and the end of the hand, there's all sorts of predictions
that I'm making in between.
I've been an investor for nine years.
The feedback loop is 10.
I've made 35, 40 decisions.
If I deferred any learning to the end, it would be pretty wasteful.
And there's another psychological thing we fight.
The phrase is, your lemons ripen first.
So if your company goes for 10 years, there's a pretty good probability
it's going to have a good outcome.
But the ones that die early, you know, can't raise the next round, can't build the management team.
That's when your negative outcomes manifest, before your positive outcomes.
And psychologically you have to manage through that.
Yeah.
So this actually, I think, gives you a tool to be able to do that, because you now have a secondary
way to be right.
Like, how am I doing in terms of like calibrating around how likely I think this company is
to fail?
You know, in what ways is it going to fail?
What does that actually look like?
The other thing that comes from that is that when you make yourself sort of break this
into its component parts, when you actually force yourself to do that, I think it actually
improves the knowledge that goes into
it, because you have to start thinking about what are the things that I know, what are the
things that I could find out, what are the perspectives that I could consider, what are the
mental models that I could apply that will help me with this prediction because it is now
recorded. It is part of that evidentiary record, which we have already said, is incredibly
important that allows you to have that look back. And because you know that you're accountable
to it, I think it actually improves the accuracy of the original decision because it makes you
be more fox-like rather than hedgehog-like because you know that there's going to be a look-back.
Basically, fox-like thinking is looking at the world from all sorts of different perspectives,
applying lots and lots of different mental models to the same problem to try to get to your
answer. And hedgehog is like you approach the world through your one big idea. So you can think about
like an investing, you have like one big thesis instead of looking at it from all sorts of different
angles. Generally, what you find is that fox-like thinking is generally going to win the day.
And this is something that Phil Tetlock, I'm sure a lot of people are familiar with Superforecasting,
talks a lot about, so apart from the fact that you can speed up the learning cycle,
I think it actually improves the decision in the moment, the knowledge that at some point someone's going to look back at it.
I think that's absolutely true, and it's a good tool, and we may start implementing that at the firm really soon.
You know, as investors, we get the benefit of being able to make a basket of decisions, you know, diversification.
A lot of the people making decisions are making, like, one decision.
What is the impact to optionality and how do you deal with that one decision?
So first of all, here's a secret.
Your decisions are a portfolio because you make many of them in your life.
And I understand one decision like this particular product decision.
But that's actually kind of like a false segregation because you're kind of working across different decisions.
But I do understand that some decisions you're making feel like they're much higher impact.
Like when we go back to the happiness test, obviously, like when you're sort of putting your eggs in one product basket,
this is something that if it goes wrong is going to have a very big effect on your ability to achieve your long-term goals.
But that doesn't mean that you can't think about how can we just sort of move fast.
And then how would we then apply this to making a higher quality decision about something like that?
So one of the things that we want to think about besides impact, when we're considering how fast we can go, is optionality,
which is really just, if we're on a particular route,
how easy is it for us to exit?
Can we get off the route?
Because obviously, when we choose a particular option,
we're foregoing other options,
and there's obviously opportunity costs to not choosing those options.
And what we're doing is we're saying this option compared to others
is going to work out better for me a higher percentage of the time
than other options that I might choose.
But we know that after you choose something,
sometimes new stuff reveals itself
or the world tells you some things, that maybe this isn't
a road that you want to be on.
So then the question just is, how easy is it for me to get off the road?
So one of the things that we want to look at is what people call type one or type two decisions
or, as Jeff Bezos says, two-way-door or one-way-door decisions, that when you have a two-way
door, when it's easy for you to quit and either go back and choose an option that you previously
rejected or choose a new option that you hadn't previously considered, that we can go
faster.
because really it's a way to mitigate the downside, right?
If I'm kind of on a bad route, I can at least get off
and try to figure out how to get onto another route.
So that would be like going on a date, super quittable.
I can leave in the middle if I want.
Getting married a little harder, less quittable.
Yep.
Right.
So, you know, taking a few classes online,
much easier to sort of quit than like actually committing to a particular college
or renting, more quittable than buying.
By the way, it turns out doing online classes
and going to college is now the same thing.
It is.
My children will tell you that.
That is so true.
But the more quittable something is, the faster we can go,
because when we can quit,
obviously that mitigates the effect of observing the downside outcomes.
The other thing we can do is actually think about portfolio theory,
but for decisions that we don't think of as investments,
even though all decisions are investments,
which is sometimes we don't need to choose among them.
So you can date more than one person at once, right?
I actually don't need to choose between these two options.
I could actually do both at once.
And then I can kind of figure out which one's working better.
And we do this with, like, A/B testing in marketing.
That happens in software development,
where you're sort of trying to decide between two features
and you develop them in parallel
and you test them with one set of users
and another set of users are seeing different features.
A number of the businesses do business locally.
You'll add restaurants in San Francisco and L.A.,
deliver groceries in San Antonio.
You can have different pricing approaches in the different markets and just learn.
I mean, no one in San Antonio is going to know what you did in Montpelier, Vermont.
You know, so just try it out.
And you learn and learn and learn and learn and learn.
Then you go national.
Exactly.
When we can do things in parallel, obviously we're also better off.
And then the other thing is sometimes you have an option that isn't quittable, but you can still quit it because you can negate it.
So that would be like, let's say that I'm invested in a stock and it's totally illiquid, I
have no ability to sell it.
If I could find a stock that's perfectly negatively correlated with the stock that I own
and I buy that in an equal amount, I've now solved my problem.
So I've quit it even though it wasn't liquid.
That's just hedging.
So if you can find something that's kind of negatively correlated with the first thing,
then you can actually go faster.
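As a stylized illustration of the negation idea -- holding an equal amount of a perfectly negatively correlated asset zeroes out your exposure even though you never sold the original position -- here is a small sketch. The price moves and the perfect minus-one correlation are idealized assumptions made up for the example; real hedges are rarely this clean.

```python
# Stylized illustration of "quitting by negating": if an illiquid position can
# be paired with an equal-sized position whose returns are perfectly negatively
# correlated, the combined P&L is flat even though nothing was sold.
# Numbers are invented and idealized.

illiquid_returns = [0.05, -0.03, 0.02, -0.04]   # per-period returns on the position you can't exit
hedge_returns = [-r for r in illiquid_returns]  # the idealized, perfectly negatively correlated asset

position = 100.0  # equal dollar amounts in each leg
for period, (r_a, r_h) in enumerate(zip(illiquid_returns, hedge_returns), start=1):
    combined_pnl = position * r_a + position * r_h  # nets to zero under the idealization
    print(f"period {period}: illiquid {position * r_a:+.2f}, hedge {position * r_h:+.2f}, net {combined_pnl:+.2f}")
```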
So that's something you have to think about in advance, right?
This thing is pretty illiquid. It's going to be hard for me to exit. Is there something where if new
information reveals itself, I can kind of just negate that decision? And if you can do that,
then you can also go faster. So now that we've sort of understood, like there's the impact of the
decision. And then we have this optionality thing, like can you quit? Can you hedge? We can now
get to this idea of decision stacking, which helps us when we have to make this big bet,
is to say, what are the things that I can do before that are going to help me to gather information,
so that when I do have to make that big bet, which is going to be hard to reverse, my model
of the world is going to be better. So how can I start to use this idea of making some little
low-impact decisions just to kind of see what's going on, to do some things in parallel?
I can blunt it in order to start building better models of the world so that when I do actually
put this out into the world, then I know something more about the market. So when you know
that you're going to have one of those on the horizon, I mean, they normally don't
just like hit you by surprise.
So like, oh, crap, I've got this decision to make.
It's just really good to try to stack these other types of decisions in front of it.
Because when you do actually have to make that decision tree,
when you are actually trying to figure out like what the user uptake of something is going to be
or, you know, whatever, what people are willing to pay for something,
your model is just going to be so much stronger for having thought about what are the things
that I could do in front of that really big decision.
De-risking.
You know, trying to get all these little nuggets of directional information to give you
higher confidence in the really big decision.
Yeah.
And so you can apply this in like all sorts of different places.
But, you know, the classic thing is dating before you marry.
One of the things that I find is that when people aren't like 90% sure that it's the
right path, that they're pretty reticent to actually execute on it.
But, you know, we have to make lots of decisions where we're 60%.
And by the way, when we estimate ourselves to be 60% on something, we're overestimating
that.
Because we're just deciding under uncertainty.
It's just kind of how it is.
We don't have a lot of information.
So once you have an option that appears to be significantly better than the other ones,
you just have to do a final step, which is to say to yourself,
is there some information that I could find out that would cause me to flip this option
in relation to the other options that I have under consideration?
And now it just becomes really simple.
If the answer is yes, you can just say, can I afford to go get it?
You might not be able to because of time or money.
And if the answer is yes, I can afford to go get it, go get it.
If the answer is no, look, this is the
state that we're always making decisions under. I don't have a time machine. My decision
making would be much better if I had a time machine. Sadly, I have none. That's the next book,
The Time Machine. The Time Machine, right. I know, right, exactly. This has been a fascinating
session. Thank you for spending the time with us on the a16z podcast, to paraphrase.
I am so grateful to have gotten to come on and to get to discuss this stuff with you. I had so much
fun. I've been looking forward to this conversation for quite a while. Well, I'm so excited because
it did get delayed a little bit due to a small misprint. That wasn't a small misprint.
That was a big misprint. And now I have an eBay collector's item, which I'm the perfect person to
know how to monetize. Yeah, right. So for people who don't know, books get printed in sort
of 20-page sections that get bound together. And, really a lot to do with COVID, one section
got printed twice, and one was totally missing.
I was just questioning my mental faculties while reading.
But no, don't worry.
It's been repaired.
Excellent.
October 13th, when the book is out, you will get an appropriate copy.
That'll be awesome.
I can't wait.