The Knowledge Project with Shane Parrish - #7 Venkatesh Rao: The Three Types of Decision Makers
Episode Date: January 28, 2016. In this episode, Venkatesh Rao, founder of Ribbonfarm and author of the book Tempo, discusses the 3 types of decision-makers and shares how to adopt useful mental models. *** Go Premium: Members get early access, ad-free episodes, hand-edited transcripts, searchable transcripts, member-only episodes, and more. Sign up at: https://fs.blog/membership/ Every Sunday our newsletter shares timeless insights and ideas that you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/ Follow Shane on Twitter at: https://twitter.com/ShaneAParrish Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Welcome to the Knowledge Project.
I'm your host, Shane Parrish, editor and chief curator of the Farnam Street blog,
a website with over 70,000 readers dedicated to mastering the best of what other people have already figured out.
The Knowledge Project allows me to interview amazing people from around the world to deconstruct why they're good at what they do.
It's more conversation than prescription.
On this episode, I have Venkatesh Rao.
He's a writer, independent researcher, and consultant.
He's the founder of the blog Ribbonfarm, the technology analysis site Breaking Smart,
and the author of a book on decision-making called Tempo.
We talk about a host of fascinating subjects, including the three types of decision-makers,
mental models, the implications of the free agent economy, and how to process information.
I hope you enjoy the conversation as much as I did.
I want to talk about your book Tempo, which is on decision-making to start with,
and that's about the narrative we frame around decision-making.
Can you walk us through that a little bit?
So Tempo was kind of an interesting project compared to a lot of my other writing projects,
because I think it was the one thing I've done
that was almost entirely for myself.
So the work it's based on
was based on things I did in grad school
and my postdoc.
And it was kind of unsatisfying to do that work
and just have it published in the sort of academic literature
without sort of exploring the parts of the work
that really interested me,
which is how does it apply to thinking?
A lot of the things I learned came along the way of applying the ideas in Tempo to things like command and control in military situations; that was the actual context where the work was done.
It was unsatisfying because what I personally enjoyed
the most from it was just a chance to sit and reflect
and think about how we actually make decisions,
how we frame questions, and so forth.
So Tempo was an effort to capture that part, which was not the sort of thing you'd publish in an academic journal, so I needed to put it out in some other form.
So that's how I ended up writing it as a book.
And I think it shows because it's not, it's very idiosyncratic.
I didn't really bother to explore how other people thought of the same topics a great deal.
I just sort of wrote down my own conclusions from that work.
Does it make sense?
Yeah, I thought it was fascinating.
Can we dive into that a little bit?
I mean, if you were to synthesize today the knowledge that you had learned,
and that was the culmination of it and how you apply it today, how would you do that?
I would say that writing that down actually helped me move beyond it,
because right after I wrote the book, some of the more interesting criticisms that I encountered
actually helped me see what the book didn't cover well, and exploring that ended up being a very fruitful thing for me.
So I'll just point out a couple of those things.
One was the idea that there are a couple of major categories of decision-making styles, so to speak,
and the book really strongly focused on one of them.
And I was sort of unaware of the sort of structure of the other approaches to decision-making
that were equally big.
So the big one that I've concluded in my mind is there's this approach to decision making that you could say is based on not reasoning per se, but it's a very conceptual approach where you think in terms of, say, mental models: what frames am I looking through? What metaphors am I using? What is the sort of significance of my decisive actions versus my random actions? What narrative is sort of framing the unfolding of events?
So that's an approach that's very natural to me.
It's why I think I ended up becoming an engineer.
It's also an approach that's very natural to you.
Farnam Street, I think, explores that in a great deal of detail from a variety of different sources.
So that's, I don't know what to call it, but let's call it sort of conceptual reasoning as a framework for decision making.
And I would say maybe a third of humanity operates that way.
It's their operating system for life.
But the other two thirds does not operate that way.
And the two categories that I've realized that I'm very, very unlike are one is what I would call ethical reasoners.
And ethical reasoners very sincerely and honestly start with a very deep and intuitive sense of right and wrong.
And for these people, in my opinion, right and wrong are in the sense of good and evil, not in the sense of true or false.
So these people who start with the framework of good and evil, not only do I not resonate with them, I often really struggle to understand why they're thinking the way they're thinking. And invariably, when I disagree with people very strongly about something, it's usually the fact that they're starting from good and evil premises. And it's not as unsophisticated as we tend to think, that these are just, you know, not very sophisticated religious thinkers, for example. That's a subcategory of people who use ethical reasoning as a framework, but it can get much more sophisticated.
So I think that's a big blind spot in my own thinking
that I've only slowly become aware of and explored a lot more.
And the other, which I would say is the second third, the category of people I don't truly get,
is people whose entire decision-making process and framing
is based not on something that goes on in their own heads,
but in the sort of collective consciousness
of the group they belong to.
So these are what a friend of mine, Greg Rader, phrased as affiliational thinkers, people for whom every decision basically boils down
to, which group do I want to be like? Which group do I want to belong to? And they do that by saying,
all right, let's take an issue like abortion, or whether Trump is good for president. And I'm not going to process that through examination of the issue itself, but by asking which group I can belong to whose views on the topic are comfortable for me socially. Does that make sense? It's your
social skin. That's your first consideration. So those are my three big buckets of types of
human decision making. And you and I, I think, represent the first kind, which is explored
quite a lot in tempo and your blog. The second is these people who start with good and evil,
who I understand a little bit now, a little better four years down the line. And the third is
affiliational thinkers, or tribal thinkers, whatever you want to call them. These are the people I understand the least, because in a way, understanding these people individually is the wrong thing to even attempt. You have to understand sort of how their groups or tribes think, and think of these individuals only in terms of which tribes they choose to join. So that's the only decision that ever matters in their life, which is: which tribe should I join? Every other decision or thought process they go through is really somewhere in the collective consciousness.
Do you think those tribe decisions are based on the particular decision that you're making
or are they based on the tribe at large?
So am I gravitating towards a tribe on particular issues, or is it that I want to be
like that tribe in all issues?
I think it's the latter because if you're talking about being like the tribe on particular
issues, you're way too individualistic.
You're making decisions based on the merits of a particular case.
You might say that on capital punishment, I'm with liberals and I'm against it.
I'm referring to a U.S. context, of course, here.
So you might say on capital punishment, I've thought through the issues and I'm against it,
therefore I'm with the Democrats, but on gun control, I've thought through it, and I'm with the Republicans on that.
That's too much thinking.
It shows that you're not a tribal thinker.
For a tribal thinker, it would be who do I want to be with as sort of the
operating system of my life. Who do I want to have barbecues with? Who do I want to hang
out with? Who do I want sort of as my friends in my bowling league? That sort of issue. It's not
explicit. They don't sit through and say, all right, these are the 50 activities and habits that
define my life. Therefore, I'm going to pick the optimal tribe to join. No, it's not like that.
It's more a process of emotional resonance. And after that, you basically are partisan
in a predictable way on all issues. And to people who are more individualistic and discriminating thinkers, this seems kind of stupid.
It's like, how could you possibly take this huge set of like 50 different issues
with very different contexts and considerations and basically agree with one tribe on all 50
of those issues?
But if you look at just how tribal reasoning works, it is possible and that's kind of how
it does, in fact, work.
And I would say, and this is my total, I-know-nothing kind of guess, that each group is a third, a third, a third. It could be that tribal thinkers are actually much more numerous
than the other two kinds.
And do you think, I mean, between the framing of those, it leads us to the obvious conclusion that the first one is better, you know, not putting it in a good versus evil context. Do you think people approach the good versus evil in terms of, I'm the hero and I'm trying to right this evil? Or do you think it's more nuanced than that?
Well, first I would resist the temptation to conclude that the first approach to thinking and
decision-making is the best. It's the one most suited to certain personalities, certainly,
and in certain situations, it makes for, like, much higher probability of survival and success
and thriving, right? Like, in, say, the American context, because of whatever the social
operating system for the country at large is, for better or worse,
Americans tend to believe in individualism, the myth of individualism, even though Americans are not super individualistic.
But if you believe in the myth of individualism, the first approach to decision making where you kind of maintain this fiction of processing everything on your own and staying away from tribal pandering and so forth, that tends to work very well.
Whereas if you go over to like strongly traditional Asian cultures, the reverse might be true, where
everything is framed with respect to the context of the social environment and that might be a
much better survival strategy if you want to actually succeed in that environment. So I'd say
whether or not one is better than the other is a question of context and what you mean by better.
But to your other question of good and evil types, I don't know. I've been thinking about it for quite a while now, at a philosophical and a practical and a reasoning kind of level, and an epistemological level of: are they actually exploring the truth about the way the world works? I think they're kind of full of shit on all those fronts. But there's something
hardwired deep in human nature that seems to work very well with good versus evil reasoning
frames. And I think here's my hypothesis on why that is the case. Why is it that this is buried
so deep in our firmware? If you think about most species of animals, their survival
concerns all have to do with their material environment, which is, can I get enough water? Can I
get food? Can I hunt my prey, right? Whereas humans, a great deal of their survival depends on other
humans. It's: how do I get along with the group? Does the leader of the troop of monkeys like me or not? What will happen to me in a tropical environment if I'm kicked out of the monkey troop, versus a temperate environment, versus an Arctic environment, right?
So 90% of our consequential survival behaviors as a human social species depends on
things having to do with other people.
And good and evil, if you think about it, is a very, very good way of simplifying that whole area of decision making, where if you simply decide that a certain group is good and other groups are evil, everything else gets massively simplified.
So that's how you get, I think, the abstract good and evil approach to thinking.
It's a rarefied form of tribal affiliational thinking.
So if you want to stack them in sort of evolutionary primacy order, I think tribal affiliational
is the sort of most ancient of our decision-making frameworks.
The good and evil framework is slightly more recent in evolutionary history because you need
a certain capacity for abstract thought before you can frame good and evil as categories.
And then the kind that you and I try to promote in our writing and thinking is the most recent
of all. It might be, I don't know, no more than 500 years old.
I was just thinking that they were almost inverse from the way that you had mentioned them
from an evolutionary perspective. And the two and three, so the good versus evil and the
tribal tend to blend, right? More so than the other one. It was mental models that first
drove me to your book.
I mean, one of my friends read it, and
they pointed out that you were talking
about mental models in your book, and
at the time, and I mean, to a large extent
today, so few people are talking
about that. What's your
definition of a mental
model? Well, I have a
sort of technically inspired definition
in the book, as you may recall.
So I used something called
the belief, desire, intention model of Michael Bratman, who's a philosopher at Stanford,
and it's been the basis of a lot of
artificial intelligence research. So that's one sort of effective way to get at defining what a
mental model is. It's a set of beliefs, desires, and intentions. But I think that sort of
definition is useful for certain narrow technical needs. UI people have similar sort of technical
definitions. Politics people have similar definitions like you've mentioned George Lakoff a couple
of times in your writing, I think, and Lakoff has one based on conceptual metaphor. All
these narrower technical definitions of mental models, they're useful for certain questions
that are honestly a little, I don't know, too detailed and deep for a general mass audience.
They're not interesting.
So for a mass audience, I would say the best definition of a mental model is a world, in the sense of science fiction or fantasy, right? So you've got, say, a universe like the Harry Potter universe or the Lord of the Rings universe.
And so there's the world, and then there's the story that's told in the world. And look at the way the most popular science fiction and fantasy is written: you're told the story, but through learning the story you also learn about the world. And stories differ in their ability to do that elegantly. So Lord of the Rings has a poetic elegance to it, in that you don't feel like you're learning about the world, but by the time you're done with the trilogy, you actually know a lot about it.
Whereas Harry Potter is a little bit more heavy-handed, where a lot of it is very clearly world-building.
And you kind of get the tedious sense of reading a geography book or being asked to memorize a list of countries.
That's sort of explicitly learning a world in which a story is set.
But if you look at the movie versions, you realize that in a way, J.K. Rowling is a product of her time,
where she's not really the author of a book
so much as a media property
that she was at least at some level aware
would turn into a movie, an online world, a game, and so forth, right?
So it could be that she's just a product of our times
and there are two very different works.
But that's basically my idea of what a mental model is.
It's your sort of implicit understanding of what the world is
and it's very easy to see in the case of like fictional universes
with a few rules that are different from our own.
But the same thing is true for a much more realistic world as well.
So like take the Law and Order franchise.
You've got, I don't know if it is popular around the world as much as it is here,
but you've got this franchise of TV shows, criminal intent, special victims unit, and so forth.
And this sort of gives you a sense of an entire universe of police work
and crime and a sense of the world as a very dangerous place where you've got these brave defenders
protecting you. And that's a mental model, right? So while you're watching a Law & Order episode, that mental model is active in your head, and it allows you to make sense of the stories you're being told very efficiently. So mental models allow you to very efficiently make sense
of stories. And if you are not familiar with the mental model, then the author must build the
mental model, and that's what happens with science fiction and fantasy. But the interesting thing
is, when you read, say, extremely foreign fiction, so fiction that's very alien to you in terms
of mental models, the author may assume you understand the mental models, but you may not. So,
for example, Japanese comic books, a couple of times I've tried to read them, they just feel so
bizarre to me. The sort of conventions for indicating, you know, emotions and actions and so forth, they're just so unintuitive to me that the world, which should be in the background and implicit, and which I should just be able to reference like an operating system, becomes a little too visible for me to read the fiction seamlessly. So it's like I'm trying to run a Windows
program on a Mac computer without realizing it. So I think that sort of is the best way to
understand mental models for, I don't know, people who don't need to deal with them in any
technical way. So when they're presented to you in these ways, how do you go about validating that
they are in fact the way that the world works? Or is it trying to hit on something almost at a
subconscious level in us that elicits a recognition? I don't think really mental models are so
much about how the world works as much as they're about internal consistency. So think of the world,
the universe you live in as an extraordinarily confusing place that's throwing huge amounts of
information at you in an extremely high-bandwidth way. I think I read somewhere, I think it was in Daniel Dennett's Consciousness Explained, where you actually look at the raw information rate coming through your eyes alone, for example. So your pair of eyes, the retinas, the amount of raw bitrate information they can take in across the frequency band in which eyes are sensitive, that's a certain amount of raw information. And it turns out, if we actually had to process that amount of raw information, it would make our heads explode, basically. There isn't enough processing power in the brain to handle that input raw. So our brain is basically layers and layers of processing that throw out most of it and map it to a sort of toy universe inside our heads. And it's this toy universe that we actually play with. And the only thing we ask of the toy universe inside our head is that it be much, much simpler than the world itself
and that it be internally consistent, which means if you close your eyes and shut off input
and sort of wind up your mental universe and run some simulations in your head, things should not
fall apart. They should be coherently put together so that you can, say, close your eyes and analyze a decision like which university should I attend and what major should I pick.
You're not processing that in the real world of information and bits about university
and majors and careers.
You're processing that in a little toy universe in your head
that's like a billion times simpler.
So that's sort of the function of mental models,
simplicity and coherence.
And I think that's the only way they can work
because the real world is,
there's just way too much information.
And the best we can hope for is that our mental models
don't become these perfect, idealized, leak-proof buckets inside which we live, like, you know, the Taliban or a religious fundamentalist, through which no reality data can leak at all. And if we sort of have slightly looser mental models and universes, there's hope that reality can occasionally seep through the cracks of your perception and disturb your mind and cause disruptions, and then you learn.
Okay, I want to come back to the information and information processing,
but before we get to that, what role do you think mental models play in decision-making? I mean, to what extent do they play a role in your categories, or just in terms of how you and I process, maybe in that type one?
My views have evolved a little bit on this since I wrote Tempo.
And I would say the easiest way to understand what mental models do in our thinking
is they act like the blinders they put on horses.
You've seen those things, right?
Little side blinders that prevent the horse from looking on the sides and getting distracted.
So your mental model's job is basically to blind you.
It's to blind you to 99.999% of all the pertinent reality data that could possibly be salient to a decision
so that you're paying attention to an extremely narrow stream of information.
That's the purpose of mental models: to blind you.
That's awesome.
I like the way of thinking of that.
And then the problem is if the world is shifted or changed, then you're blind to that change, right?
Exactly. And you have to hope that that change happens to leak through one of the cracks you've left open.
So does that go to information processing when you think about it and how we filter and how we process?
How do you leave those cracks open in your life?
I'll give you a short answer and a long one. The short answer is basically mindfulness,
just paying attention to the world itself, which is shut down the inner dialogue and look at the world.
Look outside your window. Right now there's a magnolia tree that's about to flower outside my window where I'm sitting here.
So the real world is actually, this might sound like stating the obvious, but it actually
needs to be stated and people need to repeat this to themselves frequently, that there is,
in fact, a real world out there that you can stop and pause and actually take a look at.
It's not all abstract categories inside your head.
Any time you just pay attention to what your eyes are seeing or your ears are hearing, that's how you sort of make sure the cracks stay open.
So that's the short answer, and I know it wasn't super short.
The longer answer, which is sort of perhaps more helpful, has to do with something that
took me a long time to realize, which is a lot of people think that creative and imaginative
thinking has to do with connecting ideas from different domains.
That's the function of, for example, metaphor.
That's the function of certain types of creative pattern recognition in academic research, where people say, oh, I'm going to take this idea I learned in mathematics and combine it with this other idea I learned from art history, and I'm going to come up with this new way of doing mathematical art, right?
So a lot of people fall in love with that idea of forming connections at the foundation
of thinking.
The kind of combinatorial play, right?
Yeah.
And I think that actually is a very dangerous process.
That's how mental models sort of snowball in complexity and connectedness, increasing in their ability to blind you.
Because, if you close your eyes, let's take this hypothetical person who spends 15 years in college and grad school becoming, like, the world's most erudite academic. They've read all the books. They've watched all the movies. They've viewed all the paintings and read all the criticism about everything. So their head is completely full of information from mediated sources, so to speak. Not direct reality data of looking at the world itself, but lots of this processed food for the brain. So this hypothetical person closes their eyes at age 30, say, and their head is full of, like, 15 years of such data, and then they live their life without any more data.
Now, I think two things could happen to such a person.
One is that information in their head that's already there,
it can erode or depreciate like a bank account.
It can start to lose value.
But the other thing that can happen is it can get more and more interconnected internally.
That's what happens.
When you close your eyes to reality data,
information that's already inside your head has a tendency to sort of get wired up
in more and more complex ways.
And people love that. It's an addictive process. It's like, oh, this idea from the Bible is actually very similar to quantum mechanics, and therefore quantum mechanics was predicted by the Bible. That sort of process snowballs, and your head becomes full of this richly interconnected web of ideas. But the basis for the interconnection is blindness. I mean, it's not supported by more reality data. If there's, like, three objects and you've tried to connect them up, there's three different ways to do it. If there's four objects in your head, there's six different ways to connect them up.
It's almost a brute force kind of approach to it.
But the sort of cautionary tale here is, if you don't have constant reality data coming
into your mental model, you'll make all possible connections.
And they won't actually contain any information.
There'll just be connections.
There'll just be patterns that sort of become evident like a mat.
And this person, who is just a head full of information that's being wired up, creates a set of mental models in their head that becomes increasingly leakproof over time. It becomes leakproof by becoming more interconnected internally.
You have an explanation for any possible thing you can think of.
Any way your thoughts and decisions turn,
there's like some sort of connections and ideas
that sort of frames the decision for you and makes it.
But the actual value of the information is depreciating slowly,
like a bank account that's sort of divorced from, you know,
the reality of the outside world.
So it's almost like there's the equivalent of a financial bubble going on inside your head.
So that's what a mental model that's divorced from reality data is.
It's a financial bubble inside your head where valuations are going weirdly and internal dynamics are overwhelming connection with the real economy.
So would you say to some extent that the knowledge in your head, there's almost a red queen effect going on where you have to do something to maintain and constantly update just to stay in the same relative position?
That's a very good analogy, yes.
So I think it's probably a sign of a very healthy mental process if you are able to create
and sustain a Red Queen's arms race between sort of your mindful engagement of external reality
data and sort of your eyes shut process of like making your mental models more coherent
and useful in terms of simulations and seeing ahead.
You actually mentioned something like this in one of your recent posts. The two-by-two of engaged versus thinking and good versus bad, this is roughly that distinction, and how to keep that process in balance, because if you let one overwhelm the other, it gets really unhealthy. And the kind of decision-making that you and I seem to enjoy
has a bias. It tends to preferentially sort of create momentum and inertia in the internal part,
the thinking part and not so much in the engagement part. And there's other kinds of people with
the opposite bias where their engagement is overwhelmingly strong and their internal processes
aren't keeping up. So their arms race is imbalanced in a different way.
What's your process for reading? Do you have a process? Can you walk me through it? You connect so many things.
I don't really have a process. If I'm working on a particular well-defined project, like right now I'm working on the Breaking Smart second season, there's a set of books that obviously I have to sort of get through and read and process. So that's somewhat like, you know, academic work, where you have to do a literature survey and understand, you know, what some people call the idea maze of a domain. This is a phrase in the startup world that was coined by Balaji Srinivasan; it's the idea that you need to understand the map of the area you're exploring.
So that kind of reading is relatively well-defined,
where you have a goal you want to get to
and you have a rough idea of the path,
but you have to go about assembling a map
so that you can actually navigate your way there.
So that's one kind of reading,
and I think traditional education teaches that kind of reading very well.
The other kind is probably, I don't know, 80% of my reading, which is honestly pretty much completely random. I follow trails on Twitter, on Facebook; people send me stuff. I spot things on Amazon. And I don't read as much as I used to. My sort of stamina for that is going down.
But this process is, yeah, pretty much random,
but that isn't to say that the effect of the process is random.
The effect of the process is this is how you discover new stuff,
like connecting back to our earlier conversation about blindness and so forth,
exploratory reading is what creates cracks in your sort of secure mental models
and allows you to see new things.
There's a serendipitous aspect to it.
So when you're reading, are you taking notes?
Are you highlighting?
I mean, what does that nuts and bolts kind of look like for you?
So in the first kind of reading, where I'm reading towards a very deliberate end, I might take notes or, more recently, you know, take pictures of a particular paragraph with my phone, because I might quote that a bit later and so forth. So that's, again, a more academic, citation-focused way of reading. But the rest of the time, the random part, which I think is more crucial to my way of thinking, I don't attempt to control in any way.
There's no meta process at all.
I just read.
And there's a reason for this, because if there's an actual idea that needs to pop into consciousness,
in a serendipitous way, as you point out, you can't force it.
And if you try to sort of, I don't know, encourage it in structured ways, where you say, oh, I spot this one idea in this one book and it looks interesting, therefore I'm going to clip it and put it in my Evernote file, and if something else comes along, I'll connect it and so forth, that sort of kills the golden goose of serendipity. Whereas, on the other hand, if you just read and sort of trust the universe that you're exploring to surface the connections that are interesting, that happens a lot more naturally.
And the really high value insights and connections that you can spot,
they only happen if you don't try to have too much of a meta process.
But you also have to filter quickly and rapidly in that world.
Do you not?
Not so much.
I mean, so long as I'm being entertained,
I don't ask for productivity or like a certain rate of insight.
So to me, it's actually the very natural heuristic that I used to practice as a kid, which is: just continue reading if something is interesting and keeps engaging my interest.
It's important not to overthink this stuff, where you get so obsessed with the productivity of your reading that you're not able to enjoy it, because enjoyment is actually not a nice-to-have peripheral feature of the process of reading.
Enjoyment is actually an important part of how you do the filtering that you're talking
about, which is if you are enjoying it, continue reading.
If you don't enjoy it, set it aside.
And that's your brain's natural heuristic and it does a very good job actually.
So there's not that much of a reason as you might think to add more filtering criteria.
Now, sometimes there is, because you can get into this addictive trap where your enjoyment filters are basically some sort of mind-candy filters, and you sort of filter out anything that's threatening or upsetting to you.
Now, if you notice such a bias, then you kind of have to, I don't know, rewire your habits so that you develop a tolerance or appreciation for the kind of content that used to upset you.
Like, say, taking this back to being a kid: maybe you like adventure stories a lot, but horror stories or romance stories upset you or embarrass you or something.
And you have to sort of become aware of that emotional reaction, start to manage it and learn
how to enjoy that new kind of content.
So it's like, I would reframe the problem of filtering information as learning how to
enjoy information.
I like that.
To what extent is your reading physical versus digital, and books versus articles?
It's increasingly digital and increasingly articles over books, because books are now a very big investment. I have to either enjoy a book a lot or have it be very much on the critical path of a project I'm involved in for me to finish it these days.
So let's talk about Breaking Smart season one. Do you want to maybe just give us a brief introduction to that?
Yeah, that was an interesting writing project, because it's not like the kind of writing that put me on the blogosphere map. Ribbonfarm is very much sort of my exploratory laboratory of thinking, where I basically do my own thing. Breaking Smart was much more of: all right, here's a universe of ideas that certain people in Silicon Valley understand extremely well, and other people outside it see as this alien idea space, a new way of living in the world that they don't understand at all.
So there's this sort of intellectual equivalent of a digital divide.
And so it was a very deliberate, focused process of, all right, can I explain fairly clearly to smart people what it means that software is eating the world?
And working with Andreessen Horowitz for a year sort of gave me an opportunity to
spend a lot of time interacting with native thinkers in that world,
people whose everyday work involves basically software eating the world
and what to do about it, how to take advantage of it,
how to do startups in that world, how to invest in that world.
So it was very interesting.
It was kind of like embedded anthropology.
And personally, I don't consider myself part of that world.
I don't know what world I'm part of, but I'm not part of that one directly.
So it was kind of a very interesting spectator experience for me.
And I tried to capture that, and that's where those essays came from.
And it's been interesting because they're much more accessible than most of my regular
writing and a very different kind of audience has responded in a very different way than I'm
used to.
So you spent a year doing that in partnership with A16Z.
What do you think that has changed, or what are you taking with you after this year, other than, I mean, the notoriety that it's drawn to your thinking and the workshops that you're doing? What is it that's changed your approach from what you've learned?
That's a difficult question. It's really hard to see change in yourself until it's
like 10 years down the line and then you think back and say, hey, that was a turning point and
I radically changed my personality. Like sometimes there's like cues that say that your personality
has changed suddenly and radically. Other times it's like, you know, the parable of the frog
being boiled and not realizing it.
Yeah, it's Galilean Relativity 101, right?
Yeah, I think it's more of the second case.
Being immersed in the thought space for a year and then writing about it carefully for
about four months and then spending another eight months doing workshops and explaining those
ideas to people, it's a very gradual process.
There's no sudden moment where you say, oh, I used to be this other kind of guy.
Now I'm this kind of startup culture evangelist guy.
it's not that overnight kind of transition.
It's more like a lot of ideas that might have been fringe for you suddenly becoming
more and more normalized.
So, for example, I would say one obvious effect that this work and project has had on me
is it's made me a little bit more libertarian than I used to be.
So previously I would say politically, I was pretty nonpartisan.
I would have described myself in 2013 as, say, a business conservative, where my economic and business thinking was mainly conservative.
Social thinking was liberal.
And libertarianism was this sort of a little bit of a fringe of nuts
that I used to laugh at, especially the Ayn Rand fringe of it.
But through this year and a half, I've kind of learned to separate
the interesting aspects of the growing world of libertarianism
from what I think of as the loony fringe of libertarianism,
which I associate honestly with Ayn Rand.
So that's one explicit sort of slow shift in my own thinking
that I've been able to detect.
But other stuff, yeah, I guess it'll become more visible as the years go by.
I mean, I'm only now beginning to understand transitions
that I went through when I was 25 or 15.
So I'm used to it taking a really long time for me to make sense of myself.
So with season two, what are you trying to do with Breaking Smart?
I'm going to be looking at the future of organizations, which is a topic that's been really interesting to me for almost 10 years now, since the beginning of my blogging on Ribbonfarm. In fact, what put me on the map as a blogger was the Gervais Principle
series, which I started in 2009 and finished, I think, in 2013. So a six-part series, that's now
an e-book. And that was all about organizational psychology and how organizations really work.
So that's one element of the thread that I want to develop in more careful and complete ways.
The other motivation, of course, is that there's things happening in the environment now
that are causing a big change in, how do I put it, the operating system of how organizations
are conceived and grown and run and, you know, operationalized.
And you've got everything from the extreme fringe of smart contracts on cryptocurrency
blockchains where there is no organization per se, but everybody has the sort of digitally
mediated peer-to-peer relationship with everybody else that they're economically engaged with.
So it's like the smart network of contracts that's doing economic work to the other extreme
where you might have like a really ancient organization
like the Catholic Church,
which might adopt digital realities
in a very sort of measured and slow way,
and it's probably not going to go away,
but it's also not becoming a blockchain-based organization or something.
And most of the world is somewhere in between those two extremes.
And it would be, I think, fascinating to really sit back and think about, all right, what's happening to organizations because of the impact of digital technologies on the one hand, and because of just a growing understanding of what organizations are and how
they work on the other, where we've had now a couple of centuries of experience running
corporate entities and various sorts of modern organizations, and we have like 30 to 50 years
of management science that have given us a lot of insight into how organizations work.
Can we put that together and sort of paint a portrait of the world of organizations that's
emerging now?
So that's kind of my big theme.
So what effect do you think, broadly speaking, of course, will technology have on the way that we run organizations, the way that we manage people, the way that we interact with colleagues, employees?
So I haven't yet. So I'm just getting started on framing my hypotheses and ideas here. So let me give you the starting frame and where I'm hoping to go from there. My starting frame is this idea from Alfred Chandler.
It's called structure follows strategy.
You may have heard that phrase.
Yeah, we talked about that in my MBA.
Oh, okay.
So he's written a couple of books: Strategy and Structure, The Visible Hand, and so forth.
And at the beginning of,
I think, Strategy and Structure,
there's a list of hypotheses he lists
about the new kind of organizations
that were emerging in his time.
So remember, when Chandler wrote his works, the robber baron corporations of the 1880s and 1890s had become established big companies and had created a whole way of life around them.
Populations had moved from the agrarian hinterland to the evolving new cities.
A second generation of companies had come up.
And Chandler made a whole bunch of observations.
I think there was a list of about 10 or 12 hypotheses in Chapter 1 about the nature of this
new kind of organization that had emerged.
And among them, for example, was the idea that middle managers, managers of other managers who were not CEOs or senior leaders,
were the defining feature of the new kind of organization that had emerged by the 50s and 60s.
And the culture of modernity, the structure of cities, the structure of education,
everything could be sort of inferred from the single fact of the rise of the middle manager,
professional, large organization.
And that's, of course, being reversed today.
That's the layer of the working world, at least, that's increasingly being automated.
We no longer go through like four layers of approvals to get like a travel expense form reimbursed.
We go to a piece of software, enter some details.
Maybe one person takes a look at it and clicks okay and then you get your reimbursement in your next paycheck, right?
So that entire population, which was the sort of anchor element of an entire way of life that persisted for 40, 50 years, is slowly shrinking and disappearing, and with it the middle class and so forth.
So my hypothesis is that now all of Chandler's hypotheses about the 1950s era are being slowly flipped one by one, and we've seen that for the last 20, 30 years.
But perhaps the biggest flip is that the defining archetype of the new world of organizations
is no longer the middle manager, but in fact the free agent.
So people like you and me, the people who don't actually live in organizations at all,
but live in the ecosystems of organizations
or as intermediaries between organizations.
So it's like, in the 1950s, I think it was almost 90% of humanity that lived a paycheck lifestyle.
And this was the end of like a 200 year historical process
that I've written about.
In 1780, less than 20% of the American workforce were paycheck employees.
And by 1980, it was close to 80 to 90%.
So that was a very long, two-century-long trajectory of increasing paycheck employment.
And if you flip that around, you see that what it means is the number of free agents,
the number of people living in the interstices of the economy, slowly shrank in the developed world.
Now, that process is being sort of inverted.
And now I would say, depending on whose estimates you believe, it's somewhere between 30 to 40%.
And, of course, some very naive ways of counting lead to the conclusion that it's no more than 5% or 10%,
which I think is bullshit.
So the number I tend to believe is somewhere between 30 to 40% appropriately defined
are free agents.
And these people may not work inside organizations, but they are this new emerging, growing class, just like middle managers were in the 1950s. What they do, the patterns of life they choose, where they choose to work and live, how they choose to educate themselves, how long they stay in projects, their work styles, like, you know, remote working, working from home, balancing multiple gigs at once: all these patterns of life that they're sort of improvising and establishing right now, whether it's doing the lifestyle design thing in Bali and doing internet marketing for a big Fortune 500 company here, or somebody like me, whose main way of working is writing blogs and getting consulting gigs. We are not the sort of inner core of companies, but we kind of define the economy now.
And I think that's a huge sort of framing
idea that's emerged in the last 15, 20 years. And it's interesting for one reason, which is
if you look at the structure of modern tech companies, like, you know, Facebook and Google,
their market cap is huge relative to their headcount. So one useful metric for thinking about this
is to divide the market capitalization by the number of employees. And for the fastest growing unicorns and, you know, young companies, it's very, very high. So you might have a billion-dollar company with just 200 employees and so forth.
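The metric described here is just a division, but it's easy to make concrete. A minimal sketch, using made-up figures for two hypothetical companies (none of these numbers are real company data):

```python
# Market cap per employee: a rough proxy for how much value a company
# captures relative to its headcount. All figures are invented for
# illustration, not real company data.
companies = {
    "BigSocialCo": {"market_cap": 300e9, "employees": 12_000},
    "LeanUnicorn": {"market_cap": 1e9, "employees": 200},
}

for name, c in companies.items():
    per_head = c["market_cap"] / c["employees"]
    print(f"{name}: ${per_head:,.0f} of market cap per employee")
```

By this measure, the hypothetical billion-dollar, 200-person "LeanUnicorn" carries $5 million of market cap per employee, which is the pattern Rao is pointing at: very small cores of people anchoring very large valuations.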
So it's clear that these people, well, they're very talented, very successful people who are
going to get very powerful and rich because they're doing the few remaining valuable long-term
jobs that remain in the economy.
But for the rest of us who don't have, say, ninja level coding skills, where we can be one of
the privileged few to be in one of these companies as a core employee inside, the question is,
what does it mean to survive in the ecosystems created by such companies?
What does it mean to be a citizen of, say, the Amazon ecosystem or the Google ecosystem
or an app developer for Apple or a driver for Lyft or Uber?
These people are kind of defining the structure of the world at the moment.
And you don't normally think of this part of society as even the, I don't know, raw material of organizations; these are by definition the people who are not in organizations. But I think actually they are the ones who are going to define the organizational
landscape of the future. And how do you think we're going to manage that? I mean, one of the key
things that I keep hearing over and over again, which I don't have an opinion on or a well-formed
one anyway, is like matrix management. And how do you think the structure of management plays?
And what do you think about matrix management? Well, matrix management is a very old idea, actually.
It's at least as old as, I would say, the early 80s; that's when it became popular, with the first wave of deregulation and the creation of a lot of outsourced and subcontracting kinds of economy in the manufacturing sector.
So, you know, the Reagan-Thatcher era. Matrix management came about when people realized that you needed a line management axis, which was traditional companies, along with a project management axis, because so many needs were transient.
So that is a very old idea.
And that's, I would say, actually the incumbent, traditional management now. It's not the new stuff.
Matrix management is the default in the old economy right now.
That's how projects get managed.
What is new is managing projects through multiple circles of contingent labor.
So think of a typical project.
So no good successful project in a software-eaten world can be really, really big.
Let's talk about a software project as sort of a prototype, because that's the new kind of core work.
You might have, say,
an overall extended team of, say, 150 building a product.
And you might have a structure where the core team of employees with stock options
and careers and the ability to buy houses and all these, you know,
super talented people who've kind of gotten the golden ticket,
that might be a core group of 15 to 20 inside the company.
Then you might have another ring of longer-term contract workers, of say 15 to 30, who are doing less critical tasks.
Then you've got like another layer of small boutique firms handling things like, you know,
social media marketing, doing a little bit of focus group research, things like that.
Then you've got another big layer of say a developer community that's beta testing
and you're trying to woo them to use your technology.
Beyond that, you might have a much broader ring of say 100 people who are really not even
part of your producer team. They're part of your early adopter consumer team. But because they're
like power users who understand your technology well and may do a little bit of hacking,
they're the ones who are going to discover the use cases that will actually work and establish
your product. So think of the outermost ring as sort of the prosumers, the people who are
partly producing in addition to consuming, in return for things like discounts, and the very core being the best-compensated people who have a chance of becoming, you know, stock millionaires or something. That's the structure of a team that makes something happen in today's
say, 150 people with various levels of belonging in a fuzzy set that makes something big happen.
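The concentric-ring structure described above can be sketched as data. The ring sizes and the "belonging" scores below are illustrative inventions, loosely adapted from the hypothetical numbers in the conversation, not a description of any real team:

```python
# A sketch of the "fuzzy set" team: concentric rings with decreasing
# degrees of belonging. Sizes and belonging scores are invented for
# illustration; the conversation's numbers are rough ranges, not exact.
rings = [
    ("core employees (stock options, careers)", 20, 1.0),
    ("longer-term contract workers",            30, 0.7),
    ("boutique firms (marketing, research)",    15, 0.5),
    ("beta-testing developer community",        35, 0.3),
    ("prosumer power users",                   100, 0.1),
]

total = sum(size for _, size, _ in rings)
print(f"extended 'tribe' size: {total}")
for role, size, belonging in rings:
    print(f"  {role:42s} n={size:3d}  belonging={belonging:.1f}")
```

The point of modeling it this way is that membership is a gradient rather than a boundary: there is no single line where "the team" ends, only weaker and weaker degrees of participation.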
So as you were saying that, let's make the assumption that we should pay people based on the value they bring to the table. How do we compensate people in a system like that, a system where you might be part of a team that creates a $300 billion product, but your role, your value, and your contribution differ from somebody else's value and contribution? How do you think about that?
I don't think there's a single universal answer.
It's a very sort of contentious debate right now, and people are sort of exploring different answers.
So it's useful to think not at the general principle level,
but at the example level.
So the two examples I think that are driving the conversation forward the most
are Kickstarter and the ride-share economy.
So Kickstarter, if you think about it,
the early group of backers that gets a project off the ground,
they're actually not just paying with their money to support a project,
they're paying with their intelligence.
They're doing work.
They're reviewing lots and lots of projects that they might be scanning on Kickstarter and sort of making reasoned decisions about, hey, this is an interesting new innovation that deserves support.
Now, they might be using any of the three decision-making processes we thought about before.
They might be conceptual thinkers.
They might be good versus evil thinkers.
Or they might be, I need to be part of this tribe that makes this happen kind of thinkers.
Whatever that approach, they're contributing intelligence in the form of information in addition to money.
And this group typically gets compensated with a bunch of, I don't know, gift economy type artifacts. You might get a t-shirt, you might get a shout-out on Twitter, you might get an early advance instance of whatever it is that's being produced; if it's a book or a little manufactured widget, you might be one of the first to get one of those things.
works in that particular example. And of course, it leaves a lot of people very unhappy because
they want more compensation for what they see as more valuable input. And that's one of the reasons
we have this legislative process right now that's about opening up crowdfunding to equity ownership.
So very soon we might see in very limited and regulated ways the ability of crowdfunding backers
to own equity in the things they back early on.
So that's one example of how people might get compensated.
Another is Uber.
So there's a dull conversation about Uber and an interesting conversation.
The dull conversation is simply traditional 1920s labor thinking of: oh, these people are simply not being paid enough, and effectively they might even be paid less than minimum wage if you account for their hours properly, and they have a precarious income, they need a safety net. I think that's an uninteresting conversation that will go nowhere; it's applying 1920s lenses to 2015. But the interesting conversation is: these people are actually participating in innovation, where obviously what everybody sees coming down the road is automation, driverless
cars and how are these driverless cars being trained?
Well, all the data that's coming from thousands of rides being taken by people, and drivers navigating different routes, is having two effects.
So when an Uber driver picks a passenger from point A and drops him off at point B, that
passenger gets a ride and the driver gets paid some money.
But the data that's generated goes and feeds machine learning algorithms that improve everything from our understanding of safety to navigation to following traffic rules. That stuff is really them contributing to the R&D of the next generation of product, in which they have no producer role at all. They're almost unknowingly sowing the seeds of their own destruction.
Exactly. So one sort of interesting argument I've heard is these people
are the equivalent of laboratory researchers and they should be paid for their research function as
well. And this sort of ties into the larger argument that anybody who's involved in producing
large amounts of data that go into these big platform-type industries, they really should
be compensated for the value of the intelligence they're pouring into the platform.
And this is why people, of course, are talking a lot about data monopolies and the new
algorithmic monopolies, because that's what these companies are doing.
They're generating a huge amount of data through their operations that is feeding the next
generation of machine learning based technology and automation, and they're in the position
to, of course, reap the benefits of that. And to some people, it seems fair that the people
involved in generating the data should be compensated for it. And you're seeing limited
versions of that, where you can now buy a little device that you plug into your car that
your insurance company will use to give you lower premiums. So you'll get a discount on your
premiums based on, like, transmitting data back to them. In exchange for your information, yeah.
So the beginnings of such an information-for-money economy are starting to emerge.
But it's going to take hundreds of examples, probably a dozen or more court cases and lots of regulation
before we sort of figure this thing out.
Isn't Google kind of an example of that?
Like I go on for searching.
I'm not paying for it.
They're providing me a service and I'm giving them information that they can then use to sell.
Yeah, definitely.
It's a case of that.
The thing that trips people up about this conversation is there are too many zeros in the equation, where you are getting information that, in terms of substitute products, would cost you thousands of dollars if you had to do it through a traditional network of paper libraries. Right? Like, I can just type in a search term and get the answer I want. If I didn't have Google, I'd have to go to my local library, look for the reference there, go through the interlibrary loan system to get a book if they don't have it.
Instead of 15 minutes of work processing search results, it would be 20 hours of work. The value to me is 20 hours of my own time.
But it's clearly ridiculous to value things that way, because the economy is not a closed system where you can value the cost of a substitute in a vacuum.
Because for a lot of these things, when you actually let the market decide what the cost is, the cost is so close to zero that we round it down to zero, and that makes it very hard to actually do meaningful computations here. And that's one of the things that, you know, people think
cryptocurrencies might be helpful there, where it becomes possible to meter even the tiniest of cash flows, where each individual transaction might be worth a fraction of a penny, but if you put enough automated circuitry in all your transactions, it might build up to something more meaningful. So that's a vision people have.
What do you think the future holds that no one's talking about?
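The micropayment vision mentioned above comes down to simple arithmetic: individually worthless flows become meaningful in aggregate. A toy sketch, using assumed values rather than any real cryptocurrency or payment API (exact fractions avoid the float rounding that sub-penny amounts would otherwise suffer):

```python
from fractions import Fraction

# Toy model of metering sub-penny cash flows. The per-event value and
# event count are assumptions for illustration, not real payment data.
per_event_cents = Fraction(1, 50)   # each metered event worth 1/50 of a cent
events = 100_000                    # e.g. data points contributed in a month

# Exact rational arithmetic: 100,000 * 1/50 = 2,000 cents.
total_cents = per_event_cents * events
print(f"accumulated value: ${float(total_cents) / 100:.2f}")
```

Each transaction here rounds to zero in ordinary accounting, but a hundred thousand of them accumulate to a non-trivial sum, which is exactly the case for metering them at all.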
If I knew that, I'd be out there making money off it, wouldn't I?
Any guesses? No, I don't play that game. This is something that's almost become, I don't know, a philosophy of mine. I say this in Breaking Smart: I'm not going to attempt to predict the what and when of the future, I'm only going to try and predict the how of the future. What ways of working are going to be more effective in the future as opposed to in the present? Because the moment you get sucked
into this game of trying to predict what's in the future, and you're not doing it in a systematic way, like being part of a hedge fund that's making very reasoned, calculated bets on its predictions, it sort of sucks you into a utopianism of the new.
And I talk about this in Breaking Smart as well, where you get attached to a particular vision
when you say, oh, the thing that's going to happen and must happen is flying cars with
bio-nanotech circuitry, and then that doesn't happen, and then you go through some
pathological thought process of mourning for that lost utopia. A lot of people are going through
that right now, where they're upset that we didn't get our 1950s flying car. So that's the
reason I don't play the game of trying to predict the future. Okay, so a different version of a similar
kind of question is: what do you think people are focusing on today that's a waste of time?
That's a dangerous question because the moment you answer 10 years later,
it turns out to have been a very productive thing to have been doing.
What are they wasting time on?
Honestly, I don't know.
Okay, that's a tough one.
Yeah, that's a tough one.
What book would you say has had the greatest influence on your life?
That's another game I don't play, I think.
Okay.
Yeah.
If you had to pick a few, like what would you say?
Well, I have a page on my blog.
It's ribbonfarm.com slash now reading, so now dash reading,
where I maintain kind of an active radar of things in my pipeline, so to speak.
And I do have a list there of top books, and they're top in the sense that I reference them a lot.
They're like foundational mental models.
So there's Lakoff's Metaphors We Live By.
We've got Gareth Morgan's Images of Organization,
James Carse's Finite and Infinite Games.
So there's a bunch of books that I reference a lot
and sort of bread and butter frameworks that I use a lot,
but I wouldn't say that they are the books
that have influenced me the most
because that's actually a question that's kind of problematic, because influence is a very hard quantity to measure. You might say that when you were going through a very angsty teen crisis at the age of 17, you read The Little Prince or Catcher in the Rye, and that totally, I don't know, saved your sanity back when you were 17 and, you know, kicked your life onto a 90-degree different course.
That's very influential.
It might have steered your life that way.
Whereas another book might be the sort of little trickle, a drop at a time seeping into your brain, because you read it every three years, like, say, Lord of the Rings. For a lot of people, Lord of the Rings is something that they reread every three years. I'm not one of them, but there's a lot of people who do that.
So there's that kind of book.
So for me, an example of both kinds: Catch-22 is a book that I read as a teenager and have never read since. And that was kind of a short-term influence.
I don't know what influence it's had on my life, but it has had an influence.
Whereas Douglas Adams's The Hitchhiker's Guide to the Galaxy is more of the trickle influence, where I definitely reread it every three years or so,
and each time I unpack a new layer of sort of philosophical cleverness and humor in the book
and it sort of reshapes my thinking all over again.
So there's that kind of influence.
And then there's other stuff like, I mean, school is underrated.
The things we learn in school, like I spent two years in high school becoming very good
at solving trigonometry and calculus problems.
And that sort of really shaped and forged my brain.
And these are not books that you would typically put on a list of the sort you're suggesting, where the books would be, let me try and see if I can even remember.
There's a series of books by a British mathematician called S.L. Loney, and these are books written in the 30s.
And all they are is huge books of trigonometric identities that you sit and solve, like several hundred problems.
And this is like, you know, mental weight training.
And I have worked through like several such books.
Another is a book by a Soviet scientist called Irodov, I-R-O-D-O-V.
It's an obscure little book of math and physics problems that was very popular in Soviet Russia
and was very popular in India when I was studying for exams to get into university.
And Irodov is a book that has been massively influential on my thinking because I spent
two years of my life, probably some of my smartest years, hours and hours a day,
simply sitting and solving calculus and physics problems from the book and beating my head
against the wall with that book. And that's obviously been a huge influence. But I barely ever
think of it. I don't have a copy now. I don't go back and reference it. It comes up maybe in conversation
once every 10 years where I might be reminiscing with an old friend and we say, oh, remember when we
beat our heads against that difficult book. So influence is a very hard thing to quantify. And I think
what we end up doing when we talk about influential books is almost a social signaling game
where you're trying to like advertise the identity you most want to inhabit right now to
others. So right now I might be thinking of myself as, I don't know, a mid-career blogger slash management consultant, and I want to come across as, say, wise, somebody who has everything together, and then I might list three books that sort of reflect that perception I want to project. And that would be kind of a bad-faith exercise in hypocrisy, which is why I don't like
this question of what books have influenced you, because, you know, influence, like we discussed, is a very complicated phenomenon. Are there any other books that you've reread that you want to
mention? Books that I've re-read. Douglas Adams is probably the most consistent kind of rereading.
Oddly enough, TV has had a very weird effect on my rereading. Like Agatha Christie: I love her mystery novels, and I used to reread at least a handful of them every few years. But once we got streaming Netflix, and all the television versions of the Poirot mysteries became available, I just rewatch them instead.
Yeah.
Thanks so much, Venkatesh.
This has been great fun.
I really appreciate you taking the time.
The conversation was amazing.
Yeah, it was a lot of fun to be here.
Thanks, Shane.
Thank you.
Hey guys, this is Shane again, just a few more things before we wrap up.
You can find show notes at farnamstreetblog.com slash podcast.
That's F-A-R-N-A-M-S-T-R-E-E-T-B-L-O-G dot com slash podcast.
You can also find information there on how to get a transcript.
And if you'd like to receive a weekly email from me filled with all sorts of brain food,
go to farnamstreetblog.com slash newsletter.
This is all the good stuff I've found on the web that week
that I've read and shared with close friends, books I'm reading,
and so much more.
Thank you for listening.
Thank you.