The Knowledge Project with Shane Parrish - #55 Scott Page: Becoming a Model Thinker
Episode Date: April 2, 2019. On this episode, Scott Page, 5x Author and Professor of Complex Systems at the University of Michigan, explains the power mental models have in how we view the world, discover creative solutions and solve complex problems. Go Premium: Members get early access, ad-free episodes, hand-edited transcripts, searchable transcripts, member-only episodes, and more. Sign up at: https://fs.blog/membership/ Every Sunday our newsletter shares timeless insights and ideas that you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/ Follow Shane on Twitter at: https://twitter.com/ShaneAParrish Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You yourself are not going to sort of solve the obesity epidemic.
You yourself are not going to sort of create world peace.
You yourself are not going to sort of solve climate issues.
Your brain just isn't going to be big enough.
Collections of people, creating a larger ensemble model,
actually have a hope of addressing these problems.
Hello and welcome.
I'm Shane Parrish, and this is another episode of the
Knowledge Project, which is a podcast exploring the ideas, methods, and mental models that
help you learn from the best of what other people have already figured out. You can learn more
and stay up to date at fs.blog slash podcast. Before we get to today's guest, I get emails
all the time from people saying, I never knew you had a newsletter. We do. It's called Brain Food,
and it comes out every Sunday morning, usually 5.30 a.m. Eastern time. It's short,
contains our recommendations for articles we found online, books, quotes,
and more. It's become one of the most popular things we've ever done. There's hundreds of thousands
of subscribers, and it's free, and you can learn more at fs.blog slash newsletter. That's fs.blog slash
newsletter. Most of the guests on this podcast, The Knowledge Project, are subscribers to the
weekly email, so make sure you check it out. On today's show is Scott Page, Professor of Complex
Systems, Political Science, and Economics at the University of Michigan. I reached out to Scott
because over Christmas, I read a book that he wrote called The Model Thinker,
which is all about how mental models can help you think better.
And as you can imagine, this podcast is a deep dive into mental models,
thinking tools, and developing your cognitive ability.
It's time to listen and learn.
You just wrote a book called The Model Thinker,
and I want to explore that with you.
What are mental models?
So what a mental model is, is just a framework that you use to make sense of the world.
So what The Model Thinker is, the book, it's a book that contains really three things.
One is sort of a general philosophy of models.
One is a collection of models that you can sort of play with and understand.
And then a third thing is sort of this, some examples of how one in practice would apply a variety of models to a problem.
So when I think about a mental model as opposed to maybe a standard mathematical model is in a mental model, what you have to do is you have to sort of map reality to the mathematics, right?
So I may say, it would be one thing if someone were to say, well, you should use a linear model here to decide who to hire, right?
Take your data and just put it on a linear model.
But the thing is, you have to decide what are the variables, right?
So, you know, because a linear model contains things like maybe grade point average,
maybe work experience, maybe personality tests.
So you have to think about what are the things, the variables that I use to sort of attach reality to,
you know, sort of connect reality to the sort of mathematical framework that exists out there.
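That mapping step can be sketched in a few lines. A minimal illustration, where the variables (GPA, work experience, a personality score) and the weights are purely hypothetical choices a modeler might make, not anything prescribed here:

```python
# Illustrative linear model for hiring. The modeling work is choosing which
# variables connect messy reality to the clean mathematical framework; the
# weights below are hypothetical, not estimated from any data.

def linear_score(candidate, weights):
    """The linear model itself: a weighted sum of the chosen variables."""
    return sum(weights[name] * candidate[name] for name in weights)

# Hypothetical weights a modeler might choose.
weights = {"gpa": 0.5, "years_experience": 0.3, "personality": 0.2}

alice = {"gpa": 3.8, "years_experience": 2.0, "personality": 0.9}
bob = {"gpa": 3.2, "years_experience": 6.0, "personality": 0.7}

print(round(linear_score(alice, weights), 2))  # 2.68
print(round(linear_score(bob, weights), 2))    # 3.54
```

The mathematics is trivial; as the passage notes, the hard part is deciding that these variables, and not others, are how reality gets attached to the model.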
So what I try and do in the book, but also in my work, is think about how the mathematics
is beautiful because it's logical, it's right, right?
But reality is kind of messy and confusing and complex.
And so what I see mental models as doing, in some sense, is mapping reality to the sort
of clean logical structures of mathematics.
And we all have mental models, whether we're conscious about it or not.
How did you land on this approach?
So the approach is this. You know, when I was trained in school,
I mean, even starting in sixth, seventh, eighth grade, you learn a bunch of very simple
models, like force equals mass times acceleration, or PV equals k, you know, in physics. And in economics,
you learn things like S equals D, supply equals demand. And these models are very simple, and sort of
the whole idea was, I can explain patterns in the real world, or I can sort of make sense of the
variation we see in the real world, using a single simple equation. Then what happened is, sometime in
the early 1990s, I went and visited the Santa Fe Institute, which is a
think tank on complexity. And this is a place where they'd been trying to encourage my
advisors at the time, who were very good game theorists. Roger Myerson won the Nobel Prize in
game theory, along with Leo Hurwicz, who's another of my advisors, and Stan Reiter was in that group
as well. And they were sort of these people who studied rational choice and how do people
sort of optimize in social situations. And the Santa Fe Institute was all about the fact that the
world was so complex, it was going to be hard to optimize. And so I wouldn't say
that I had some sort of intellectual crisis. It was more the case that I found this intellectually
fascinating. There was this disconnect. And the disconnect is that I'm trying to make sense of an
extremely complex world using very simple models. And so what social science has done, I think,
typically sort of said, okay, the world's really complex. Here's my model. And I can explain 30%
of the variation. Or I can explain 10% of the variation. Or I can explain why these stocks went up in
value or something. But that means you're missing the other 70% or 90%. And so what not me alone,
but a bunch of us have kind of happened on is this notion of collective intelligence, the
idea that one way you can make sense of complexity is by throwing ensembles of models at a
problem. So one of them may explain 20%, another 15%. And it's not that they add up to 100%, that
they're explaining everything. In fact, if there's overlap, there's even sometimes contradictions
and what they might explain and what they might predict.
But by looking at the world through an ensemble of logically coherent lenses,
you can actually make sense of that complex world.
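A toy numerical sketch of this ensemble idea, with entirely synthetic data: the "world" depends on two factors plus noise, and two deliberately simple models each see only one factor. Each explains part of the variation; together they explain far more. The variables and noise level are illustrative assumptions:

```python
# Synthetic sketch of the ensemble idea: y depends on two factors plus noise.
# Model A only knows x1, model B only knows x2. Each explains a fraction of
# the variance; combined, they explain much more. All data is made up.
import random

random.seed(0)
n = 10_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

def r_squared(actual, predicted):
    """Fraction of variance explained by a set of predictions."""
    mean = sum(actual) / len(actual)
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sst = sum((a - mean) ** 2 for a in actual)
    return 1 - sse / sst

r2_a = r_squared(y, x1)                                  # model A alone
r2_b = r_squared(y, x2)                                  # model B alone
r2_both = r_squared(y, [a + b for a, b in zip(x1, x2)])  # ensemble

print(round(r2_a, 2), round(r2_b, 2), round(r2_both, 2))
```

Each single-factor model explains very roughly half the variation here; the ensemble gets most of it, which is the "throw ensembles of models at a problem" point in miniature.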
And what's fascinating about this to me is there's a group of people who are, you know,
some philosophers, some economists, some statisticians, some biologists,
kind of playing in this space of collective intelligence, right?
You might think biologists, what are biologists doing in this space?
But if you think of ants, each individual ant has a mental model,
sort of a map of the terrain,
of where the food sources are, and they can sort of aggregate that collectively within the
nest, and bees can do the same thing within the hive by doing these things called waggle
dances, which sort of explain where the food is, right? So, one bee will come back and dance
and say, look, I think there's food here, and another bee will come back and dance, I think there's food
here, and they can kind of aggregate their sort of crude maps of the world. At the same time
that people were thinking about collective intelligence from a purely theoretical perspective,
there was a set of people in computer science who were creating things like random
forest algorithms and these giant sort of artificial intelligence algorithms that also
were sort of constructing or creating collective intelligence by combining all sorts of very
simple filters.
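A rough sketch of that combining-simple-filters idea, using made-up data: three crude one-variable classifiers are each mediocre, but a majority vote over them is noticeably better. This is the same spirit as, though far simpler than, a real random forest, which also randomizes training data and uses many more trees:

```python
# Collective intelligence from simple filters: classify a 3D point as
# positive if its coordinates sum to more than zero. Each "filter" (decision
# stump) looks at one coordinate only; a majority vote over the three stumps
# beats any single stump. All data here is synthetic.
import random

random.seed(1)
points = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
          for _ in range(20_000)]
truth = [1 if x + y + z > 0 else -1 for x, y, z in points]

def stump(i):
    """Crude filter: predict from one coordinate alone."""
    return [1 if p[i] > 0 else -1 for p in points]

def accuracy(pred):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

votes = [stump(0), stump(1), stump(2)]
majority = [1 if a + b + c > 0 else -1 for a, b, c in zip(*votes)]

print([round(accuracy(s), 2) for s in votes], round(accuracy(majority), 2))
```

Each stump is wrong fairly often; the vote is wrong noticeably less often, which is the intuition behind ensembles of weak learners.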
And so, I think there's sort of a growing consensus that
our heads aren't big enough.
No individual's head is big enough to sort of make sense of the complexity of the world.
So you're going to have a set of models of how you think the world works.
I'm going to have a set of models how I think the world works.
Any one of us, right, is just too
small to make sense of the craziness, the complexity, the just sheer dimensionality of the world
that sits in front of us. But collectively, we can kind of make sense of it. So let's take
something outside of finance for a second. Let's look at the obesity epidemic, right? You could
blame that on infrastructure. You could blame it on food. You could blame it on the bacteria
in our gut. You could blame it on, you know, changes in work-life balance, the lack of physical work,
all sorts of things. And to understand any one of the dimensions that contributes to
obesity, you'd probably need, you know, not necessarily graduate study, but you might take
five, ten years of study to just understand one piece of it. But if you tried to fix the obesity epidemic
by just changing that one piece, by just sort of climbing that one little hill, you're not going to
get very far because there's going to be probably systems level feedbacks. And so there's going to be
no silver bullet that's going to fix something like that. What you can do is by having a collection
of people who each know kind of different parts, whose knowledge overlaps, and who have different kinds of models
of how things work, you can get a much deeper understanding, and you might be able to get
just sort of, I think, a more holistic approach. We can talk about this later. I think it leads to
sort of a different way of thinking about policy when you think about going at these problems
from a multiple model perspective. So to what extent is it fair to say that cognitive diversity
is then a group of people who have different models in their head about how the world works?
It is. I think that, you know, this is where I occupy kind of this strange space, because the book I wrote
before this was called The Diversity Bonus. And that book talks about the value of having
diverse people in the room. And the reason you want diverse people in the room is because
different people bring different basic assumptions about how the world works. They construct
different basic mental models of how the world works. And they're going to see different
parts of a problem. So if you want to, like just look at this, you know, if you look at sort
of fluctuations in the stock market, or if you look at the valuation of any particular company,
there's so many dimensions to a company like Amazon or Disney, right, that there's no way any one person can understand it.
And so what you want is you want cognitive diversity and what that cognitive diversity means is people who have different, you know, literally different sets of models or different information.
And so one of the things that sort of leads off the book, and I use a lot when I teach this to undergraduates or general audiences, something called the wisdom hierarchy.
And you want to think at the bottom out there is all this data, right?
all this, you know, just whether you want to call it a fire hose of data or a hairball of
data, you know, choose your favorite metaphor. It's all just floating out there. On top of the
data is information. What information is, is how we structure the world. So you may
say unemployment is up. What you're doing is taking tons and tons of data about people having
jobs and you're putting that into a single number. You're sort of categorizing: unemployment's up,
inflation's up, and you're using those as your variables. Or someone else might have a very
geographic view of things and say, boy, Los Angeles is doing well, right?
Texas is doing well or something, right?
But there seems to be, you know, the Midwest, the economy is not doing as well.
Then what you do on top of information is knowledge.
And what knowledge is, is understanding either correlative or causal relationships
between those pieces of information, right?
So if a piece of information is mass and a piece of information is acceleration,
then knowledge is that force equals mass times acceleration, right?
And if a piece of information is unemployment and a piece of information
is inflation, then you might understand that when unemployment is very, very low, you often get
wage inflation. Again, that's a piece of knowledge. What wisdom is, is understanding which
knowledge is to bring to bear on a particular problem. And sometimes that can be just selecting
among the knowledge. Other times, that can be a case where what you're doing is, you know,
sort of combining and coalescing the knowledge. Let me give two examples from finance here,
one of them involving my college roommate Eric Ball, who was treasurer at Oracle.
And this is one of my favorite stories in the book, where someone comes into his office
and says, Iceland just collapsed.
And two models sort of came to mind. One is you can think of the international
financial system as a network of, you know, loans and deposits across banks and across
countries.
And the other model is just a simple supply and demand model.
And so Munger has this wonderful quote about, you want to sort of array your experiences
on a latticework of models.
And what Eric does in the situation, those are his two models,
complicated network of loans and promises to pay and simple supply and demand.
And he looked at the person who walked in his office and said, Iceland is smaller than Fresno,
go back to work.
So that's his experience.
It's a tiny country.
It's not going to matter.
Whereas if the person had walked in his office and said, BlackRock just failed, right?
He would have said, oh, my goodness, I'm not going to use the supply and demand model.
I'm going to use this networks of contracts and promises to pay model.
And so what you want to think about is you as an individual.
And one of the fabulous things about your site, Farnam Street, is that it's all about
all these mental models, all these ways that people have of sort of making sense of the world.
And one of the reasons people go to your site, one of the reasons people read business books,
one of the reasons we gather knowledge, is in some sense to accumulate knowledge in the form of,
you know, ways of taking information and understanding relationships within it,
and what we hope to gain is wisdom by having more knowledge to draw from.
But the point of the sort of core philosophy of The Model Thinker is,
even if you do the best you can, even if you're a lifelong learner,
even if you're constantly amassing models,
you're still not going to be up to the task of solving any of these big problems alone.
You know, you yourself are not going to sort of solve the obesity
epidemic.
You yourself are not going to sort of, you know, create world peace.
You yourself are not going to sort of, you know, solve climate issues, right?
Because your brain just isn't going to be big enough.
But collections of people, right, by having different ensembles of models, by creating a larger ensemble model, actually have a hope of addressing these problems.
Okay, there's so much I want to dive into there.
Let's start with, in the hierarchy from data to information to knowledge to wisdom,
it sounds like we're applying sort of mental models at the knowledge stage.
And then wisdom is discerning which models are more relevant than not.
Is that an accurate view of that?
And if not, correct me.
I puzzle over this a lot.
Every time I think I have an accurate view of it, I then reframe how I think about it.
So I was giving a talk the other day.
And someone said, I think the real space where mental models come in is in this move that's
very subtle between data and information, which is true, right?
Because when you think about, you know, how I might think about, like, if I visit a city for the first time and somebody says, you know, well, tell me about Stockholm, you know, I immediately start putting it in categories. They might say, well, you know, it's a lot like London or something, right? Or you might, you know, you might sort of say, well, the people are friendly but reserved or something. So again, you're taking all these sort of experiences and putting them in boxes. So there is a sense in which just the act of going from your raw experiences into information
is almost leaning on the models you already know.
And so this is the thing that I've been puzzling over the last few weeks,
which has been fun to think about,
is that if I have a set of, like, models in my head, what we think of as that
knowledge space, does that then bias how I filter the data into information?
Probably does.
Of course, because, I mean, the models are helping you pick out which variables
you think will be more relevant and how those variables will interact with one another.
Right.
So here's a really great example of sort of that.
So there's a phenomenon called The Wisdom of Crowds, right?
The Surowiecki book, where you can have groups of people make accurate predictions.
Well, the reality is that sometimes groups will be successful and sometimes they won't.
And one of the reasons we write down models is to figure out what types of diversity will
be useful and what types won't be.
But there's been work by a number of people, Kay-Yut Chen at HP Labs, Michael Sanchez-Burks
at the University of Michigan, where they sort of compared, you know, suppose I have
lots of actual data out there and run a linear regression to try and predict
something, and I have that compete against people. What you find is the regression does a lot
better than any one person because of the fact that the linear regression can include a lot
more data. It doesn't suffer from biases, all sorts of stuff. But oftentimes when you have
groups of people compete against the linear models, the groups of people can beat the linear
models. And when they do, where they beat them is when the person
constructing the linear model, because the linear model is actually constructed by some sort of person
typically, doesn't have a way of including something in the model.
So one example involved a consumer product, a printer.
And the linear model said, this printer is going to sell, let's say, 400,000 units.
And when they used a crowd, the crowd was like, no, it's going to sell like 200,000 units.
So it's a huge difference.
And so they went back and they interrogated people in the crowd saying,
why do you think this printer's not going to sell?
It's, you know, it handles as many pieces of paper, the print quality is this good,
the toner cartridge is easy to change, you know, all the
sort of attributes that were in the linear model. And the first words out of the person's mouth were,
butt ugly. That is a butt-ugly printer. Well, there was no butt-ugly variable in your regression
because that's sort of like, you know, a design feature. And it wasn't a very attractive
printer. But the difficulty with data in those situations, in the form of the linear model, is that
it only looks backwards, right? It can only look at sort of what's happened in the past. Whereas
people, when constructing models, are often kind of forward-looking. How are people going to
respond to this new design? Now, what's going to work best, ironically, in all these
situations, is a combination of the linear model and the model with people. And this gets to this
sort of step from knowledge to wisdom that I find really fun. You could say, oh, so what you should
do is you should average the linear model and the people. That seems to not be true. What you should
do instead is, if the linear model and the people are close, you know, in their predictions,
you probably should go to the linear model because it's really well calibrated, right?
It's probably going to, you know, be better.
But if they're far apart, if the linear model and the humans are giving very different
predictions, then you want to go talk to them.
I mean, talk to the people and talk to the linear model.
Now, you could say, how do you talk to the linear model?
Well, you look and you say, what variables are in there?
What variables are the people using that the linear model is not?
What do the coefficients look like? Has the environment changed? Yeah. Right, no, that's the key thing.
Like, the linear model is assuming stationarity. And what's fun, and this is like, MIT just started this
new school, right, this new sort of data science school, right? I think it's the first one they've started
in like 30, 40 years, and they raised a billion dollars for
this. And one of the things is they want people who are bilingual, who can communicate
between these sort of really sophisticated artificial intelligence models and the real world.
Because the thing is, people are afraid of sort of just throwing all this information into this
giant AI model that spits something out.
But if you're using relatively simple models, you actually can. It is easy
to sort of look deeply at, you know, whatever model you've built and say,
why is the model saying this, right?
I like that a lot.
I want to sort of explore something with you, which is, when we were talking about models
and how we apply them, whether we're applying them at the information,
sort of like filtering stage, or the data-to-information stage, or the knowledge stage,
or the knowledge-to-wisdom stage.
It seems to me like we can probably agree that having more models is a good thing
in general, but only to the point where they're relevant for the specific problem that you're
facing.
Having extra models that aren't useful is not good, but the more tools you have
in your toolbox, the more likely you can accommodate a wide variety of jobs.
I think that's right, but I also think there becomes this interesting challenge when I think about building teams, but also building your own career. So in your interview with Atul Gawande, he made this fabulous point about how his method of making a contribution to the world was sort of being able to communicate across different types of people in different areas, right? So, you know, he'd been trained by doctors, his parents were doctors, and so he'd sort of absorbed what the medical profession was all about. But at the
same time, he had this deep interest in science and this deep interest in sort of political
philosophy and literature and sort of public policy. And that enabled him to fill what
Ron Burt calls a structural hole, right, in terms of like there's a network of people studying
medicine, there's a network of people studying sort of politics and public policy. And he can sort of
stand between those two things and make sense of them. Right. So one of the things I talk about
in both The Diversity Bonus and also in The Model Thinker, is you can think of yourself as like this
toolbox, and you've got some capacity to accumulate tools, mental models, ways of thinking.
And what you could decide to do is you could decide to go really deep. You could be the world's
expert, or one of the world's experts, on random forest models or Lyapunov functions,
or you could be, you know, one of the world's leading practitioners of sort of signaling models
in economics. Alternatively, what you could do is you could go deep on a handful of models,
right? There could be three or four things you're pretty good at. Or you could sort of be
someone who, I think, in the financial space, I think a lot of people are really successful,
like Bill Miller, a friend of mine, is by having just an awareness of a whole bunch of models,
right? So having 20 models at your disposal that you can think about. And then when you
think, when you realize, like, you know, this one may be important, then you dig a little bit
deeper. But also, those, that variety of models gives you, I think, two things. One is it
gives you sort of a robustness, if you think of it in a portfolio sense.
You're not going to make a big mistake.
But it also can give you this sort of incredible bonus in the sense that two models, rather than giving you the average of the performance of the two, often give you much, much better than the average, right?
You get this sort of bonus from thinking about a variety of models.
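That bonus can be made precise. Here is a small numerical check, in the spirit of the diversity prediction theorem from Page's work, with made-up numbers: the ensemble's squared error equals the average model's squared error minus the variance (diversity) of the predictions, so the ensemble is never worse than the average model:

```python
# Numerical check of the "bonus" from multiple models: for a crowd of
# predictions, ensemble squared error = average individual squared error
# minus prediction diversity (the variance of the predictions).
# The truth value and the three model predictions are made up.

truth = 10.0
predictions = [6.0, 9.0, 14.0]          # three diverse models

ensemble = sum(predictions) / len(predictions)
ensemble_err = (ensemble - truth) ** 2
avg_err = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - ensemble) ** 2 for p in predictions) / len(predictions)

# Ensemble error (~0.11) is far below the average model error (11.0).
print(ensemble_err, avg_err, diversity)
assert abs(ensemble_err - (avg_err - diversity)) < 1e-9
```

Because diversity is subtracted off, two models that disagree can easily do much better than the average of their individual performances, which is the "bonus" described above.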
And so the book, what was super fun about the book and what's been really rewarding about it is what I do is I lay out this philosophy of like, okay, this is why to confront the complexity of the modern world, you need a variety of models, right?
sort of using this data, information, knowledge, wisdom hierarchy.
And what I do is I take what I think are, you know,
the 30 most important models that you might know:
Markov models, linear models, Colonel Blotto models,
Lyapunov functions, systems dynamics models,
simple signaling models from game theory, just a whole variety of things.
And it was just a great exercise:
how do you write each one of these in seven to 12 pages
in a way that everybody could understand them and then use them?
And that's the real challenge.
And the people at Basic Books, T.J. Kelleher was the editor of the book, but my editor there was the person who sort of wordsmithed this with me. That was a real challenge, because there were times when she would just say, no one is going to understand this. And that was a fun thing to do. If you pick up my book and look at Markov models, for example, right? Markov models are these models where there are states of the world and then transitions between the states. And you read the book, it's like, you know, ten or twelve pages, and I think most people can understand it. And all the
math is in a box. If you go to the Wikipedia page and you type in Markov
model, you'll just go, wow, I should like get a PhD in statistics. That's your
only hope of understanding it. And so I think what I'm trying to do in the book,
in some sense, is the same thing Atul Gawande is trying to do in his work life, which is to say,
here's a way to get kind of knee-deep in these things, to understand where they work.
And no one is going to master all 30 models in this book because you could write a whole PhD on each one of them.
You could write 20 PhDs on each one.
But the thing is the awareness is really useful because you might say, you know, this Colonel Blotto game or these Markov models or these signaling models or these power law distributions, this is really interesting to me.
And then you can go a little bit deeper.
And so I think that it's really meant to be, you know, just kind of a reference in a way, but also,
just, you know, sort of an awareness document where you can sort of say, hey, wow,
here's a super distinct model I've never even heard of, and it's
really fun to think about that.
So I want to dive into some of those in a little bit, but before we get there,
I want to talk a little bit more about acquiring mental models.
Like, how do you pick which models to learn if you're working in an organization or you're a
student? Or how would you go about having a conversation with somebody about which mental
models to prioritize and why?
Yeah, so I think one of the first things you want to ask is who are the relevant actors,
right?
So is this a single actor who's sort of just making a decision or is this a strategic situation
where someone is taking an action and they've really got to take into account what the
action is of someone else, what choice somebody else is going to take, right?
So if you're thinking of, like, what action I'm going to take in a soccer game, or an action I'm
taking in investing, I've got to think a lot about what other people are doing.
Or you might want to ask, is it the case that I'm taking some sort of action and I'm embedded in a much larger structure where things are sort of moving and I'm taking cues from that larger social and economic system? You're not even maybe aware of the fact that you're drawing signals about what other people are doing, right? So you want to think about, am I modeling a person
making an individual sort of isolated choice, am I modeling a strategic situation, or am I
modeling something that's much more sort of social and ecological like that?
Second thing you want to ask is how rational is the person making the decision with the alternative
not being irrational, but the alternative being sort of rule-of-thumb-based.
So there's a guy, Gerd Gigerenzer, who's a German social scientist, and Peter Todd, and in their book
they have this work on sort of adaptive toolboxes, you know, which is this idea, like, I've just got
a set of mental models or tools, and I apply those, kind of like, oh, this is problem
A, I apply tool A. This is problem B, I apply tool B. So there's a lot of stuff, when I think about
like where I'm going to go do my laundry or what coffee shop I'm going to go to, but I probably
don't sit around and rationally think about it. I just kind of like just follow some sort of
routine and maybe I adapt that routine slowly. Maybe I learn a little bit, but for the most part,
I might just follow rules. So you want to ask how rational people are being. Then you can ask
yourself, is my logic correct? So Colin Camerer, you know, Richard Thaler, people who study behavioral
economics, would say, if it's repeated a lot, that should move you a little bit more towards
the rational behavior, because people should learn. And if the stakes are huge, that should
move you towards rational behavior. If it's being made by an organization versus an individual,
that could move you in either way. If it's a big decision by an organization, you could imagine
and like, okay, this is going to be done rationally, right?
You actually have a committee of people thinking about it in a very careful way.
But if it's some sort of standard operating procedure within a large organization,
then it could be way over the other extreme.
This is just how we do it.
This is how we've always done it.
This is how we're always going to do it.
So now you've got this idea of, okay, now you're thinking about, okay, how do I model the person: following a rule, or optimizing,
or maybe suffering from some sort of, you know, human bias,
if it's a human.
And then is it just a decision?
Is it a game versus some sort of social process?
So those are sort of in some sense the two main questions.
You know, what context is the action taking place?
Who's making it?
And then I think the real interesting challenge is if it's not a decision,
if it's a game or if it's some sort of social process,
is making yourself really aware of the challenges of aggregation,
you know, in the sense that like oftentimes things don't add up the way they're
supposed to, right?
Or there can be just a fundamental paradox in the assumptions you're making.
So, for example, Barabási has a fabulous new book out called The Formula.
And in this book, The Formula, he talks about how these are the lessons for success
by looking at tons and tons of data.
And some of it is about some of the things we were talking about before.
Like with Gawande, you want to make sure you use your network really well.
You want to seize opportunities, those sorts of things.
But there's a circular reasoning in there, in the sense that if
everybody followed that formula, it's not clear that everybody would be successful, right?
And so the thing is, oftentimes these systems can contain feedbacks within them, right,
that make them logically inconsistent at the level of the whole.
And in fact, his book, which is, again, a fabulous book, because I think
he's right.
And I think if you read his book, you'll be able to be more successful.
But if everybody read his book, which he would like, then not everybody would be successful.
Right.
And so when you're thinking about constructing a model, again, it depends on the context.
You want to think through, is the whole thing sort of coherent?
One of the things I love about modeling, when you asked me the first question,
like, what do I think of as a mental model?
I think of it as, again, I'm a real outlier here,
I think of it as a way of saying, I'm going to use this mathematical model to make sense of reality.
So one of the great examples of this, I think, of the value of the formal mathematics,
is a Markov model.
So in a Markov model, there's a set of states, like you could be happy,
bored, sad, whatever. And then there's transitions between those states, right? Or a market could be
volatile, not volatile. And then if those transition probabilities are fixed, and if it's possible to get
from any state to any other state, then that system goes to an equilibrium. And it's a unique
equilibrium. So what that means is history doesn't matter, right? One-time interventions don't matter.
There's just this vortex drawing it to one thing. What that model forces you to do then is if you want to
argue, the world is complex. If you want to argue for path dependence, if you want to argue that
a policy intervention is going to make a difference in some way, you then have to, you have to
either be saying, I'm creating a new state that didn't exist before, or I'm fundamentally changing
these transition probabilities. And so you get this idea that, so when somebody's constructing
a model, oftentimes they'll say, well, I'm a systems thinker. And then if you have them
write down their model, you'll say, wait a minute, that's a Markov model.
That has a unique equilibrium.
Do you think your system has a unique equilibrium?
And they're like, no, no, it's very contingent, path dependent.
And then you'll say, okay, well, then your model's got to be wrong, right?
I mean, you're missing something.
There's got to be some way of changing these transition probabilities.
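As a rough illustration of that Markov logic (the states and transition probabilities below are invented for the sketch, not from the episode), a few lines of Python show that with fixed transition probabilities, two very different starting points get pulled to the same unique equilibrium, so history washes out:

```python
# A two-state Markov chain: states "volatile" and "calm".
# Rows are the current state, columns the next state; each row sums to 1.
# These transition probabilities are made up purely for illustration.
P = [[0.8, 0.2],   # volatile -> volatile, volatile -> calm
     [0.3, 0.7]]   # calm -> volatile, calm -> calm

def step(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

def run(dist, P, n=100):
    for _ in range(n):
        dist = step(dist, P)
    return dist

# Start from two very different initial conditions.
a = run([1.0, 0.0], P)   # definitely volatile today
b = run([0.0, 1.0], P)   # definitely calm today

# Both converge to the same unique equilibrium, roughly [0.6, 0.4]:
# one-time interventions don't matter unless you change P itself.
print(a)
print(b)
```

To argue for path dependence in this framework, you'd have to change the matrix `P` or add a new state, which is exactly the point being made.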
So I think that I view it as a very deliberative process within yourself of constructing a model.
First, you kind of ask, what is the general class of models?
So is it a system, is it a decision, is it a game?
And then once you write it down, then you can kind of go to the mathematics.
And the mathematics will often tell you, given your assumptions, what must be
true about the world. And then if it's not, you have to kind of go back and say,
well, let me rethink my model, right? Do models become a way of surfacing assumptions?
Oh, absolutely. No, I think models are, models force you, they force you to get the logic right.
They force you to sort of say, what really matters here in terms of driving people's behaviors
or firm's behaviors, how do those behaviors interact, right, in terms of, you know,
how do they aggregate? And then how should people respond to that? There's a famous
quote by Murray Gell-Mann, where he said, imagine how hard physics would be if electrons could think,
right? And I'd written that in a paper, and it was attributed to Murray, and someone
said, I don't think Murray's ever said that. And so I went to Murray and I said, Murray, have you
ever said this? And he read it. And then he just said, imagine how difficult physics would be
if electrons could think, right? And he goes, there, I've said it. But one of the
things about modeling, especially, I think, in the space a lot of your listeners are in,
because they're people who in some sense define the world, is: imagine how difficult physics would be if electrons could think
and if electrons could decide on their own laws of physics.
So if you're running a large organization, or if you're the Secretary of the Treasury, or if you're in any sort of policy position,
you get to decide the laws of physics.
You get to decide what's legal, what's not legal, what the strategy space is.
So when you think about, when you're saying when someone constructs the model, what do they decide their assumptions are?
One of the things we have to keep in mind is one reason people construct models is to build things, right?
To build buildings, to build policies, to build strategies.
When you do that, you're defining in some sense the state space.
You're defining reality.
So if you tell your traders, you're looking at these ratios, you're defining the game for them, right?
And so I think the design aspect of models is often overlooked, right?
Underappreciated. So within the field, I mean, I'm in economics, political science, business,
whatever, there's been, because there's so much data, there's this huge shift toward
empirical research. If you count the number of papers in the leading journals that are
empirical versus theoretical, it's been this massive shift towards empirical research,
which in some sense, you know, I applaud. The work is much better, right? There's much more
data. We can get at causality. Huge fan of it. But I think there's a cost to that, because what
a lot of that empirical work is doing is really nailing down exactly what's the
size of this effect, what's the slope of that line, what's the size of the coefficient, how
significant is it, right? And so we can suss out whether improving teacher quality matters more
than reducing class size, and by exactly how much, right? So that's great, I'm all for it. However,
that's taking the world as it is. And one of the really cool things about models (I was trained by
people who did mechanism design) is thinking about: can we, based on our understanding of how people
act, redefine the world, construct mechanisms and institutions that work better? So if you look
at the American government at the moment, it's kind of a mess, everything from like sort of
gerrymandering to the fact that, you know, we had this electoral college that made a lot of
sense when states were all roughly equal sized. You know, now some states
are tiny and still have the same number of senators as states that have 50 times as many
people. But even how we vote on things, what falls under the purview of Congress. Like,
why do we have a separate, in some sense, like a financial system, you think of like the
Federal Open Market Committee and the Federal Reserve System, that's quasi-governmental.
The FDIC is quasi-governmental.
But NASA and the NIH are not quite as quasi-governmental.
You know, there's a deep question about what institutions we use where that is underappreciated.
So, Davis, Jenna Bednar, and JJ Prescott and I are running a thing in February at Michigan called Markets,
Hierarchies, Democracies, Algorithms, and Ecologies.
We're just sort of saying, look at all this stuff we have to do.
We used to sort of think, okay, look, we can use markets, hierarchies, democracies,
or just kind of let it go, right?
You can see what happens.
Like with the roads, we just kind of let it go.
You decide to go somewhere, I decided to go somewhere.
And then it's a total mess for the most part, right?
But when we made these decisions about where we have markets,
hierarchies, and democracies, that was made in a world where there was no data,
no information technology, where we were exchanging beads as opposed to
sending bits through the mail. But now there's this fifth thing, right? There's these
algorithms. And a lot of stuff, a lot of things can be done by algorithms as opposed to
markets, hierarchies, and democracies. And there's a question, because the sort of the cost
of change for all these institutions, should we be reallocating problems, right, across these
different institutional forms? That's a question you can't really touch by running
regressions, necessarily, other than to identify the places where it's not working, right? But
you can use models to help you kind of think through: What if we made this a market, right? What if we made
this a democracy? What if we handed this up to an algorithm? Yeah, it sounds like we're using
multiple models to sort of construct a more accurate view of reality. We might never ever be
able to understand reality completely, but the better we understand it, the better we sort of know
what to do. And yet it strikes me as odd that one of the ways that we learn to
apply models unconsciously is through school. And it's usually
like a single model, right?
Like you're reading a chapter in your grade 10 physics book on gravity,
and then you get gravity problems.
And then you know that I will apply, you know, this equation to this problem.
It's almost an algorithm, right?
I know what the variables are, that the school is going to give me the variables,
and I'm just going to apply this.
And we're taught with this sort of like one model view of the world.
Why are we taught that way and why is that wrong?
I think it was right when we had a much simpler world, right?
Let's take it in the context of a business decision.
Like, you might think, okay, here's how you make it a business decision.
You figure out what the cost is going to be, and then you think about the net inflow of, you know, profits, right?
And you think, okay, do the profits outweigh the cost, right?
Do the revenues, I'm sorry, revenues, outweigh the costs, right?
So is it going to be positive cash flow?
But now when you make a business decision, there's a recognition that there's
environmental impact. There's an understanding that it's going to affect your ability to attract
talent, right? Is it going to be an interesting problem? There's a question of how
does that position you strategically for the long run. There's a question of what it does for
your capacity. There's a question of what it does for your brand. And so these decisions are just
so much more complex than they were, and there's an increased awareness of the complexity of all these
decisions, such that no single model is going to work. So when you're in seventh grade, we're teaching
you very simple things. And we're trying to teach you that there's some structure
to the world. And so we want to say, look, here's the power of, you know, these physics models.
Not only do they explain things that you see every day, like, you know, why objects
fall to the floor. They also sort of explain things that you wouldn't have predicted beforehand,
like that two things of different weight fall at the exact same time, or they can predict things
like the existence of the planet Neptune, right, which they, you know, didn't know
was out there, right?
So I think we teach people the simple models because we thought, like
Plato's famous quote about carving nature at its joints, right?
I think there was a belief that we could carve nature at its joints.
And then for each one of those little pieces, you can sort of apply this model.
And so people will sometimes say, oh, many-model thinking, it's like the parts of the elephant.
And I'm like, no, no, that's almost exactly wrong.
Each model, you know, there is a sense in which different models look at different parts, but you need that overlap, right?
Because you can't carve nature at its joints.
That's what we've learned over the last 50, 100 years, right, is that it's complex.
The world is a complex place.
And so I think that the challenges to become a more nimble thinker is to be able to sort of move across these models.
But at the same time, if you can't, like if that's just not your style, that doesn't mean there's no
place for you in the modern economy. To the contrary, it means that maybe you should be one of those
people who goes deep, right? Specialize. Yeah. You know, so you need this weird balance of
specialists, super generalist, quasi-specialist, generalist. I mean, there's even people who I've
heard describe themselves as having, that their human capital is in the shape of a T, right, in the sense
that, like, there's a lot of, there's a whole bunch of things they know a decent amount about,
and then one thing they know deep. Or other people describe themselves as like the symbol for
pi, right? Where there's two things they know pretty deep, not as
deep as the T person, and then a range of things that sort of connect those two areas of knowledge
and then a little bit out to each side. And I think that it's worth having a discussion with
yourself, I mean, not you, but your listeners, is to think, okay, what are my capacities? Am I someone
who's able to learn things really, really deeply? I'm able to learn a lot of stuff? And then
think about a strategy for, you know, what sort of human capital you develop, right? Because I think
You can't make a difference in the world.
You can't go out there and do good.
You can't take this knowledge and this wisdom and make the world a better place unless you've
sort of acquired a set of useful tools, not only individually, but also sort of, they've got to be collectively useful.
Because you could learn 15 different models that are disconnected, that apply in different cases,
and never sort of have any sort of gestalt, any sort of whole, and that might make it hard for you to sort of make a contribution.
Or you could say, I'm going to be someone who learns 30 different models, but if you're not
someone who's nimble and able to move across them, that may be more frustrating for you.
I think as we're talking, one of the things that strikes me is if you're going to
prioritize which models to learn, obviously the ones in your domain or discipline, the common
ones are good to have an understanding of. And then these general knowledge sort of models
that apply across disciplines, because other people are less
likely to bring those to the table, so you can become, in a way, not to the
extent that other people would, your own sort of cognitive diversity machine, almost, if you
will. How do you go about iterating these models once you have them? How do you put them
into use? Are you recommending a checklist sort of approach? How do you mentally store them,
walk through them, pick out which ones are relevant and not?
So this gets back to, I mean, you asked a really prescient question earlier,
which was, how do you know how to model something and how do you think about what
assumptions to make?
And I think what you do when you think about which models to use, and how you play
them off one another, you want to again ask, what is the nature of the thing I'm looking at?
And then, not so much, you know, sort of a checklist, but you can sort of, like, page through
the book, or page through your collection of models, and think, which
ones here might be relevant. Let me give an example that I find my students love to sit around and
play with, which is, there's two models that have to do with sort of competition between high
dimensional things. So one of them is a spatial model. And in the spatial model, there's an ideal
point. So let's suppose that you have, like, your ideal burrito, which, let's say, weighs about a
pound and a half. It's got this proportion of sort of meat and rice. And it's hot, but not so hot that
you've got to, like, you know, have a giant cup of water right next to you.
So you can think of that as a point in, like, sort of four-dimensional space, right?
There's a size, there's a heat, there's an amount of, you know, beef and amount of rice or something.
That's your perfect burrito.
Well, then you can imagine all the burritos that are for sale in, you know, Toronto or Ann Arbor
or New York, and you could put each of those in that same space.
And then you're going to sort of choose the burrito that's closest to you.
So then you're going to say, oh, this is the best burrito in Chicago.
Well, if my ideal point's different than your ideal point,
then I may think a different thing is the best burrito.
Well, that same model you can use for,
it's actually the workhorse model in political science
for thinking about which candidate do I vote for.
And if we aggregate, then like nobody's happy.
Yeah, so nobody might be happy.
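The spatial (ideal point) model just described fits in a few lines of Python. The taqueria names and burrito dimensions below are invented for the sketch; the point is that two people with different ideal points can each be right that a different burrito is "best":

```python
import math

# Spatial / ideal-point choice: pick the option closest to your ideal point.
# Dimensions: (size in lbs, heat, meat share, rice share). All numbers invented.
burritos = {
    "Taqueria A": (1.5, 3.0, 0.5, 0.5),
    "Taqueria B": (1.0, 8.0, 0.7, 0.3),
    "Taqueria C": (2.0, 1.0, 0.4, 0.6),
}

def best_for(ideal):
    """The "best" burrito is the one at minimum Euclidean distance from ideal."""
    return min(burritos, key=lambda name: math.dist(ideal, burritos[name]))

# Different ideal points -> different "best" burritos; there's no universal winner.
print(best_for((1.5, 3.0, 0.5, 0.5)))  # Taqueria A
print(best_for((1.1, 7.0, 0.6, 0.4)))  # Taqueria B
```

Swap burritos for candidates and you have the workhorse spatial voting model mentioned here: aggregating many different ideal points is exactly why no single option makes everybody happy.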
But then there's another model that models the same thing,
called the Colonel Blotto game.
And in the Colonel Blotto game,
there's a whole set of fronts.
You can think of those as dimensions.
But instead of it being a spatial characteristic,
it's hedonic in the sense more is better.
Right. So if I think about buying a car, it could be like more miles per gallon is better, more legroom is better, right? Higher crash test scores are better, less environmental damage is better. So now when I think about comparing two things, it's not which one is closer to me, like this burrito is better because it's near my ideal point. I can sort of go across all these different dimensions and see which one wins. Well, what's cool about both those models is that if there's a big set of people deciding, in the first one, right, there's a
bunch of people who have different ideal points and they're trying to decide. There's generally
no winner, there's no sort of best thing. So if you think about, okay, I'm going to go
to University of Michigan. I'm going to go to Northwestern. I'm going to go to Western Ontario
and get a degree and I'm going to apply for a job and I'm competing against seven other people
or maybe I'm, you know, up for a role. But I think, how did I not win this? I'm so great.
Well, the thing is, it may be that, you could think that's a spatial model, and I just
wasn't what people liked. Or you could think that's a hedonic model, like, somebody just
happened to beat me on some collection of fronts. But one of the nice things that both those models
sort of tell us is that there's kind of no best answer. Because you're going to win relative
to where someone else is. So it's a strategy, it's more like a game, it's strategic. And there's
no best thing you can do unless you happen to know where the other person was. And so the nice thing
that comes out of that, there's sort of this calming sensation. My undergrads always feel like,
if you don't get a job, if you don't get a scholarship, if you don't get into grad school, it's not because
somebody was better than you. No, it just happens to be that they were positioned better than you,
and that's fine, right? It's just going to happen. But strategically, when you think about
maximizing your chance of getting one of those things, you need to think about, is this a spatial
thing where I want to sort of make sure I've got the characteristics that are near what they're looking
for? Or is it hedonic where, you know, I want to beat my competitors on as many things as
possible, right? So I would have like, I would have the most undergraduate research done. I would
have the strongest letters. And a lot of things are sort of a combination of the two,
but what's really useful is having both those models in your head as you're thinking about it.
And it's the same story if I'm an advertising firm and I'm pitching, right, an advertising
plan. If I'm trying to be a supplier to a large auto company, right? It's multidimensional
competition. And so what you'd like to do is have both these models in your head and say, let's think
about this as a spatial model. Let's think about this as a purely hedonic competitive model, and think
about how we would position ourselves. Where are our competitors? And it gives you, I think,
it's calming in a way, right? Because it gives you a way to structure your thinking.
And it also lets you know that if you lose, it's not necessarily because you're worse. And if you win,
it's not necessarily because you're better. So it's calming. It's also
humbling, right? It's easy to think. One of the things I deal with a lot in trying to present
diversity, the value of diversity, is people who are successful, by definition, have always won,
and they're in power, and they think, I'm good because, I'm here because I'm good. And they typically
are, and they tend to think they're there because they've had a lot of ability. They have a lot
of ability, which means that they've got flexibility in terms of, you know, what tools they've
acquired. But the point is getting them to then recognize that for the group to be better,
right, you want people with other tools, right? So it's tricky, because people who are successful think,
because they've won, you know, that they're good,
when in fact, you know, maybe they've won because they just happened to have the right
combinations of talents at the top. I kind of think of that in an evolutionary sense, right?
Where we can consider a gene mutation today that might be selected as valuable,
but a million years ago, the same gene mutation might have been negatively selected, or
filtered, if you will, because the environment has changed, the situation has changed.
And we apply stories to these sort of random gyrations.
And that's not to say that success is completely random, but there's an element of luck to
everything.
But how that's weighted varies depending on the circumstances, right?
So you get into this really complicated view of the world.
And I find that really interesting when we're thinking about how to learn models and how to make better decisions. And how do you teach your kids about complexity?
Like how do you teach your kids, not necessarily university students, but them as well.
But like how do you teach them, hey, the world isn't really this simple place.
And, you know, here are some general thinking concepts that you need to learn about.
And how do you go about instilling that in children?
It's such a fascinating question.
I think that especially, you know, there was the article in the New York Times last week about how the sort of upper quintile was spending so much more money on their children than those below, with the idea of them being economically successful.
So let's go back to a question you asked earlier, about how, like, sort of, in school you learn force equals mass times acceleration.
The economy of 100 years ago depended a lot more on sort of you yourself being really, really good.
Like, you're a really good lawyer, you're really good furniture maker.
Go back 200 years ago.
Like, you were successful if you ran your farm well, right?
So it was all about your individual ability and hard work, right?
So I was reading this fabulous book called The Rise of the Meritocracy, which is an old book
from like 50 years ago.
It talks about sort of like, you know, success is intelligence plus effort, right?
And it's actually where the word meritocracy came from.
If you imagine the world as a collection of individual silos, and, you know,
sort of the amount of grain in your silo depends on how intelligent you are and how hard you work,
Then it is all kind of about like your ability, work hard, get A's, right, in class, develop these skills.
And that's a very instrumental view of the world.
But in a complex world, your ability to contribute,
and again, I'm going to go back to your amazing interview with Atul Gawande,
in a complex world, your ability to succeed is going to depend on you sort of filling a niche that's valuable, right?
which, you know, as in Barabasi's book, it could be connecting things, it could be pulling
resources and ideas from different places, but it's going to be filling a niche. And that niche could
take all sorts of different forms. And so I think when I, when I talk undergraduates about this,
when I talk to my two sons about this, what you want to think about is finding something that
combines three things. You have to really love it. You know, that's got
to be your passion. You've got to love the practice of it. So, you know, a great basketball
player isn't just someone of great ability. It's someone who loves practicing basketball. The great
musician is someone who's got some ability there, but they love practicing music. So you've got to,
you've really got to enjoy the practice of the thing you're doing. The second thing is you've got to
have some innate ability, right? So my younger son was actually a reasonably good dancer when he was a younger
kid, and there's not many adult male dancers. And the guy who runs the dance studio, after
I dropped him off, chased me down and said, is that your son? Is that your
son? And I said, yeah. He goes, well, we need adult male dancers. And I said, yeah, that comes
from the other side of the family. He says, no, no, it can't be, come on in. And he, like, watched me
dance for like 30 seconds. And he's like, you're right. It comes from the other side of the family.
And, you know, even if I love dancing, my upper bar in dancing is going to be pretty low, right?
So you've got to have some ability there. And then the third thing is, you have to be able to, in
some sense, connect those things to something useful, meaningful, right?
Something you think is going to make the world a better place, right?
So the thing you're going after has to have some meaning
or purpose or value.
You've got to be able to convince yourself of that and convince others of that.
Because otherwise, one of the things I find fascinating about the academy is that
people will be in small departments, and they'll study something, and it gets to be
really interesting to them, and they're the world expert in that. And that's great, because we're
advancing knowledge. But outside of their small circle, no one may find that interesting.
And I think that it's incumbent upon them to sort of think about, you know, are they using their
talents in a way, are they making that interesting to other people, or at least
intriguing to other people? Because I don't think you're adding that much value if only 30 people
read the work. Now that's a great conversation to have with maybe like a 14-to-20-year-old.
What if we go younger, right?
Like, how do we teach an eight-year-old about not only compounding, but power law distributions?
And, like, how do we, we might not use those names and we might not use the mathematics behind them,
but how do we start instilling models that are, like, the way that I think about this is if,
if the world is changing, there's a core set of models that are probably unchanging, right?
Mathematical ones that hold across sort of human history and biology,
and perhaps sort of like all of existence, right?
Reciprocation is a great example of one, right?
Like, it works on human and social systems.
It's also a physics concept.
Like, how do we teach our kids, or should we teach our kids,
is maybe even a different question.
But should those models be learned in school as models
so that you start developing this lattice work
or this mental sort of like book in your head
where you're flipping through pages and going,
oh, this model might apply here?
Nope, it doesn't go to the next model.
Like, how do we instill that?
in our children, even if they don't understand the mathematics behind them, so that we start
understanding the world is more complex than single models. And part of your goal is to just what
you said, right, to fill this niche. But one of the ways that you're going to fill that niche is your
aggregation of these models and how you apply them is going to be more valuable or less
valuable in a group setting in a particular company. And your understanding of how other people
are applying models is also going to be a key element of strategy
in the future, right?
So if we can anticipate that our competitors are following models that they learned in
business school, well, now we know how they're likely to respond to what we're doing
and we're not likely to be surprised.
And we can use that information to make our business or our company more competitive.
Yeah, it's a great question.
Two things come to mind.
One is, I think we can do a little bit more of sort of meta-teaching, in the sense that
one of the things that people really like about, I did an online course called
Model Thinking, which is a MOOC.
And one of the sort of tropes in there is
something I borrowed from my colleague Mark Newman,
when he talks about distributions,
which is logic, structure, function.
So if you see some sort of structure pattern out there in the world,
there has to be some logic as to how that came to be.
And then you also want to ask yourself,
is there some functionality of that structure?
Does it matter?
Right.
So when you talked about, like, normal distributions versus power law distributions:
we'll teach kids the bell curve, but what we won't do is sort of say, here's the bell curve, and this is a structure.
Think about all the other structures you could draw; now we want to ask which structures we actually see in nature, and we don't see that many.
You know, we see bell curves, we see sort of stretched-out bell curves, which are lognormal, we see power laws, but we rarely see things that have like five peaks to them.
So why is that?
And so then we need a logic that explains the structures we see.
So what logic underpins normal distributions? You're adding things up.
What logic underpins lognormal distributions? You're multiplying things.
And what logic gives you power laws?
Well, in the book, I say there's a bunch of them, right?
There's preferential attachment, there's self-organized criticality,
but there's logics that will give us those power laws.
Then you want to ask, does it matter?
Do we care?
And then that's sort of an easy thing to convince kids of,
because you can say, well, if incomes, like heights, were distributed normally,
that's nice and predictable and seems fair.
But if heights were distributed by a power law,
there'd be 10,000 people as tall as giraffes.
There'd be somebody as tall as the Burj Khalifa.
And there'd be, you know, like 170 million people in the United States
seven inches tall.
And they're like, whoa, that would be pretty bad.
It would be really hard to design buildings as well, right?
Because you'd need them to work for tiny people and giants.
And so I think this logic structure function thing is really important.
I think the other thing that we need to do is give them experiences
of using the same broad idea across a variety of disciplines.
So one of the things, I taught a class that I'm hoping to teach again
because the students just absolutely loved it,
but it just didn't work out this year,
called Collective Intelligence,
where we just sort of did a whole bunch of different sort of in-class exercises
to sort of explore where collective intelligence comes from.
So here's one example that was just, I think, just most sort of...
What is collective intelligence?
Collective intelligence is where sort of the whole is sort of smarter than anyone individual in it.
So you can think of that in a predictive context.
This could be the wisdom of crowd sort of thing where, you know, people guessing the weight of a steer, the average, the crowd's guess is going to be better than the average guess of the person in it.
That's just a mathematical fact.
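The fact Scott mentions has a precise form: the crowd's squared error equals the average individual squared error minus the diversity of the guesses, so the crowd can never do worse than its average member. Here's a rough sketch in Python; the steer's weight and the noise level are invented for illustration.

```python
import random

random.seed(42)

true_weight = 1200  # hypothetical true weight of the steer, in pounds
# Each guess is the truth plus that person's idiosyncratic error.
guesses = [true_weight + random.gauss(0, 150) for _ in range(500)]

crowd_guess = sum(guesses) / len(guesses)
crowd_error = (crowd_guess - true_weight) ** 2
avg_individual_error = sum((g - true_weight) ** 2 for g in guesses) / len(guesses)
diversity = sum((g - crowd_guess) ** 2 for g in guesses) / len(guesses)

# The "mathematical fact":
#   crowd error = average individual error - diversity of guesses,
# so the crowd's squared error never exceeds the average squared error.
print(crowd_error, avg_individual_error)
```

The identity holds for any set of guesses, not just this simulated one; the simulation just makes it concrete.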
But here what we're doing is you're looking at sort of collective intelligence in terms of solving a problem.
So here's the setup.
I had a graduate student make up a bunch of problems.
Problems defined over 100 by 100 grids.
So, imagine a checkerboard that's 100 by 100, and each one of those cells has a value.
So one of the problems was really simple.
It was like what we call a Mount Fuji problem.
There's just one big peak.
Not necessarily right in the center.
It was just kind of in the upper right.
There's a huge peak, and that had the highest value, and everything kind of fell off from that.
Another problem had like five or six little peaks over the landscape, with one being higher.
And then another one was really, really rugged.
So he just created a bunch of problems and I didn't know what they were, right?
That was part of the key.
No one knew what the values were.
So I created three teams.
One team was the physicist.
What the physicists did is they got to sit around and first say, okay, which eight points do we check?
And then they would get the values from those points.
So it's kind of like the game Battleship where they would say like, you know, D7.
And then they'd come in and we'd say, this is the value from D7.
And then they'd get another.
So they got like five rounds where they got to check.
Here's 10 points.
So five rounds where they got to check 10 points.
And the goal was to find the highest point.
Another group was the Hayekians, the decentralized market, where each person got to pick a point and then they'd just come back and they would say, here's the point I picked and here's the value, but there's no coordination.
So the idea was then you can see the value of coordination by comparing those two, because you could kind of see where other people picked and you might want to go near where they were, but you also wanted to build information for yourself and the group by trying other points, right?
So there was all sorts of cooperation and competition going on in that group.
The third group was the bees.
And so the bees would point at a square.
They couldn't give a number.
Like they couldn't say, you know, A-26.
They just could point somewhere on this big square.
And we would approximate what we thought that was.
We would show value.
And they had to go back and waggle dance, right?
Now, the thing is, it turns out undergrads won't waggle, they won't, like, waggle dance.
They're just too insecure.
So we had them just dance with their hand.
What they did do is they had to, like, kind of like, point the direction it was in.
And then the longer they waggle, that was kind of the better the value.
Okay.
And then we compared the waggle-dancing bees to the Hayekians to the physicists.
And on the easy problem and on the problem with five peaks, the bees did just as well as the physicists, right?
In fact, you know, on the five peaks they did, ironically, just a tiny bit better than the physicists.
And so we're talking about this afterwards.
And someone says, well, that's because the bees can take a derivative.
And everybody's like, what?
And he goes, well, no.
Like, to solve this, you've just got to take derivatives.
The bees could sort of find the highest,
they could find the highest point,
and then they could take derivatives
because they could see who was waggling where, right?
And it was only on the really hard problem
that the physicists did the best.
And so what you learn from that is that bees, markets,
and problem-solving teams
are all dealing with high-dimensional problems
in, like, you know, the things that they solve, right?
If it's not super hard, if finding food isn't super hard,
then bees are just as good as physicists,
because it's as if they're taking derivatives, and markets are just as good.
But when it gets super hard, the market's not going to work.
Right.
Because you need all sorts of coordination.
So that's going back then to this thing we talked about before, about when do you
use a market, when do you use a democracy, when do you use a hierarchy, when do you just kind of let
it rip. That probably depends on the difficulty of the problem.
But what's cool about that, and this is where you can do things with young kids as they see,
well, here's this idea, collective intelligence, that spans disciplines.
If you want to teach... like, so in the same class, let me just give one more example, because it was just so fun.
There's this amazing game called Rush Hour.
I don't know if you've ever seen it.
We have little cars and trucks, and you slide them around and you've got to get this red car out.
So you get, yeah, yeah, it kind of gives you a configuration.
And these configurations are like called easy, medium, hard, and very hard or something.
And what happens is, so here's the
experiment I do in class. And again, the numbers are too small to say this is any sort of scientific
result, but it's always worked, and it's always been really fun, is that people play rush hour
and they play, like, an easy, a medium, a hard, and a very hard, and we time how long it takes them
to do each one on average, right, and the harder ones take a lot longer. Then I have them
write down models for how to play Rush Hour. So one model might be solve it backwards,
which is think about how that car is going to get out. Another model is, like,
get the big trucks out of the way, right?
Another model is move forward, then move backwards.
So, you know, move forward as far as you can, right, and then move it backward.
And then what I do is I have another set of people, read the mental models from the first
set of people, and then play not the same games, but different games, but of the same
difficulty, and compute how long it takes them.
And what happens is they're just a lot better.
And what you see is that this is something where it's not tacit knowledge.
It's actually learnable knowledge with Rush Hour.
What I've been struggling with, and you probably have a good idea, is I'm trying to come up with a game that's purely tacit.
Like, where I can't communicate it.
So my friend John Miller always jokes that, like, this weekend, he's going to read a couple books on tennis and then go become a professional tennis player.
You know, you can't.
Yeah, yeah.
So I'm trying to find my acre.
Yeah.
Right, right, right.
Yeah, so I'm trying to find like a really cool example to juxtapose with Rush Hour.
Maybe one of your listeners will email it.
You could do it in a classroom setting, some new game where people can learn it, but then there's nothing they can write down.
It'd be nice if it didn't involve physical skills too, just mental skills.
That's what makes it.
Right.
So these other things.
It was Adam Robinson actually who told me that Rush Hour was one of the best games that he knew of to teach thinking
skills to young children.
Oh, really?
Yeah, and we spent the summer playing that this year on vacation, and we would take it.
It's a great game to take to, like, restaurants and stuff.
And my kids were at the time, eight and nine, and, you know, we would sit there for, you know,
they would sit through a whole two hours and just play this.
And it was a totally awesome game with, it was fascinating.
I mean, and as a parent, I just promised them 30 minutes of iPad if they got through all 40
problems in like three weeks.
And they were like, oh my God, this is amazing.
It's amazing how hard they'll work for that 30 minutes of iPad.
That's right.
Get the incentives right.
No, but it's, it is funny, though, how I think it's because it's a physical game.
When I'm doing this in class, I'll say, you only have 10 minutes.
Sometimes I'm just extracting the game from my students' hands.
You know, and then I'll like, look, you could take it home, bring it back the next day because
it's so much fun.
Let's talk about a few of the models that you have in your book before
we finish up here. I'm going to mention three of them
and you walk me through sort of how you present them and how you use them.
Right.
Let's start with power law distributions.
Okay, so power law distributions are distributions that have... so let's start with what they're
not.
So a normal distribution is something like human height, where the average person's 5'10", there's
some people 5'8", there's some people 6 foot, and it falls off really fast.
A power law distribution, most events are very, very, very small, and there's occasional,
you know, huge events.
If you look at earthquakes, there's thousands and thousands of tiny, tiny earthquakes,
and there's an occasional huge earthquake.
If you think of the distribution of city sizes in most countries, there's tons and tons of
small towns, and there's an occasional New York, London, Tokyo.
If you look at book sales, music sales, right?
Most books sell three or four hundred copies.
There's occasional books that sell millions of copies.
And there's a question of, you know, what causes these, what causes power laws?
So unlike normal distributions, which come from just kind of adding things up or averaging
things, power laws have a bunch of causes.
So what I do in the book is I go back, let's go back to this logic, structure, function.
It's a structure we see a lot, right?
This long-tail distribution.
The question is, what causes it?
And so I talk about three models in the book.
One is something called a preferential attachment model, which works kind of like this.
You imagine that, like, there's a set of cities or there's a set of books, and the probability
I move to a city or the probability I buy a book is proportional to the number of other people living in that city or buying that book.
We can see right away there's positive feedbacks there.
The more people move to New York, right,
the more people move to New York.
The more people buy The Tipping Point,
the more people buy The Tipping Point.
So The Tipping Point sells a million copies.
There's 10 million people in New York.
If nobody buys somebody's boring book, then nobody buys the boring book.
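Scott's city-and-book example is easy to simulate. This is a rough sketch of a preferential attachment process, not code from the book; the founding rate `p_new` and the step count are arbitrary choices.

```python
import random

random.seed(0)

# Start with one "city" of one resident. Each new person either founds a
# new city (with small probability) or joins an existing city with
# probability proportional to its current size: the positive feedback
# Scott describes.
sizes = [1]
p_new = 0.1  # hypothetical rate at which brand-new cities appear

for _ in range(5000):
    if random.random() < p_new:
        sizes.append(1)
    else:
        # Picking with weights=sizes is exactly "proportional to size".
        i = random.choices(range(len(sizes)), weights=sizes)[0]
        sizes[i] += 1

sizes.sort(reverse=True)
# A few huge cities, a long tail of tiny ones.
print(sizes[:5], sizes[len(sizes) // 2])
```

The biggest city ends up many times larger than the median one, which is the long-tail shape he's describing.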
But another way that power laws form is through random walk.
So imagine that each firm... a firm starts by somebody, you know, founding it.
You've got these startup firms.
They're one person.
Now, suppose that they're equally likely to sort of fail or hire a second person.
And then suppose they're equally likely to go back down to one person or add a third person.
Then the life of that firm: the firm's going to exist as long as there's a positive number of workers.
Well, if it's a completely random walk, like a coin flip, you can imagine that most firms are going to die really quickly, right?
You add an employee, then lose them, then you fold.
You had two employees, then you go down one, up one, down one, and then you die.
So that would say that the lifespans of firms, some of them are going to be really short, but if you happen to get really big, you're going to last a long time.
That should be a power law. And it is. It's also true that the lifespans of species, phyla, genera in ecology, which you can think of as perfectly random, also satisfy a power law.
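The random-walk story about firm lifespans is also easy to simulate. A rough sketch; the cap on firm age is just to keep the simulation finite.

```python
import random

random.seed(1)

def firm_lifespan(cap=10_000):
    """Start with one worker; each period the firm is equally likely to
    add or lose a worker. The firm dies when it hits zero workers."""
    workers, age = 1, 0
    while workers > 0 and age < cap:
        workers += random.choice((1, -1))
        age += 1
    return age

lifespans = sorted(firm_lifespan() for _ in range(2000))
median = lifespans[len(lifespans) // 2]
longest = lifespans[-1]
# Most firms fold almost immediately; a few survive for thousands of periods.
print(median, longest)
```

Half of all firms fold on the very first step (the coin comes up tails once), while the longest-lived firm lasts orders of magnitude longer than the median: the heavy-tailed lifespan distribution Scott describes.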
And then a third way to get these power laws is from something called self-organized criticality. So if I drop grains of sand over a desk, eventually I get a big sand pile.
And then if I look at how many grains of sand fall onto the floor, most of the time when I drop a grain of sand, once the pile is formed, it'll be very few, but occasionally I'll get these giant avalanches. And so what's happening there is the system is sort of aggregating to this critical state. So think of, like, traffic in Los Angeles or traffic in Toronto or New York. What happens is it gets itself organized into a state where cars are spaced pretty close. And all of a sudden, there's one accident, boom, there's three-hour delays. So most of the time, things are kind of fine, but one accident can really delay things.
So now what you've got is you've got, okay, now we have a logic that explains the structure.
Why does it matter?
Well, it clearly matters in the case of, you know, things like book sales, music sales, those sorts of things.
It means that there's going to be some people who are wildly successful and most people who are not that successful.
And that may not be, we may decide that's not fair, right?
We may decide that, like... if I'm Malcolm Gladwell, I shouldn't necessarily think, wow, I'm amazing because I sold four million books. Because, no, you just happened to be the book people bought, because you benefited from those
positive feedbacks. So it actually could change how we think about, you know, how we reward people.
If you thought, no, this person solely books because they're just so much better, right?
That's a very different story than if you say, no, just the natural process of people buying
books leads to big winners. Then you start realizing, though, the big winners are as much luck
as they are skill. That's really interesting. Let's go to the next model I want to talk about,
which is something that when I was reading it in your book, I was like, oh, first year physics,
which was concave and convex.
Yeah.
Walk me through that one.
So what are these?
I got these wrong on my physics.
Like the first assignment.
I got them mixed up.
And, like, it was hilarious.
It was, yeah.
All these memories came back.
Yeah, no, this is a challenging thing because there are certain things you almost,
you have to cover.
Otherwise you're sort of, it's a disservice, right?
And so the basic idea of sort of linearity, right,
is that something has the same
slope always. So the next dollar, you know, the next thing is worth just as much as the previous
thing. And fundamental to so many models throughout the book is some assumption of either
concavity, which is sort of diminishing returns, or convexity, which is increasing returns.
So we just talked about preferential attachment. That's a form of convexity. The odds
that somebody buys your book increase as more and more people buy your book. So the odds that
the first person buys The Tipping Point are low, but then the odds that the million-and-first
person buys it are much higher, right? Because of the positive feedback as
people pile on. So convexity just means that the odds of something happening, or the payoff
from something, increase as more people do it. So many things in the world, though, are the opposite.
They're concave. So if you think about like, so concavity means that the added value of the next
thing diminishes. So for example, like, the next bite of chocolate cake. Yeah, the next bite of chocolate
cake, the next scoop of ice cream, right? There's just diminishing returns to that. So if you think
about, like, adding workers to a firm, right, as you keep adding workers, the value
of those additional workers goes down.
And that's true of people in teams as well, right?
So when you think about... suppose I decided I've got an important decision to make,
you know, the second person is going to add a lot to the first,
the third person is going to add a lot to the second, right, and so on.
But at some point, you're just not going to add much value.
And so there tends to be in sort of team performance on a specific path,
a certain level of concavity.
The... I think the challenge for me in writing that was, like,
how do you make concavity and convexity even
remotely exciting, right? Because it's just kind of like mainstream math. And the easiest
way to teach it is almost in terms of derivatives, right? So a linear function has a constant derivative,
and a concave function's derivative falls off. But you try to make the case that
these are in some sense fundamental, and that not recognizing concavity in particular can lead to really
flawed assumptions. In the 1970s, Japan had this really fast growth. And there were all these articles
saying Japan's going to overtake the United States in eight years. But the thing is if you construct
the model, you realize that as you sort of industrialize, you're going to grow pretty fast,
but there's going to be diminishing returns to that industrialization. The same is true of China,
right? So if you had done a linear projection of China, you know, five years ago, you'd have said,
oh my gosh, you know, by 2040 China's economy is going to be just enormous. But the reality is
growth's going to fall off, because what the model shows is that in order to maintain anything even close
to linear growth, you have to innovate like crazy, you need massive levels of innovation. So I think
that the idea behind the concavity and convexity chapter was to try and get people to recognize
that there's just diminishing returns, there's diminishing returns to so many things, right,
that linear thinking can be dangerous, linear projections can be really dangerous.
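The Japan and China point can be made concrete with a toy model. A sketch, assuming a hypothetical economy whose output grows like the square root of time, one simple form of diminishing returns:

```python
import math

# A hypothetical economy whose output grows with diminishing returns,
# e.g. output proportional to the square root of time.
def output(t):
    return 100 * math.sqrt(t)

# Fit a straight line through the first two years of fast early growth
# and project it forward, the way the Japan articles did.
y1, y2 = output(1), output(2)
slope = y2 - y1

t = 25
linear_projection = y2 + slope * (t - 2)
actual = output(t)

# The linear forecast wildly overshoots the concave reality.
print(round(linear_projection), round(actual))
```

The exact growth curve is invented, but the qualitative lesson is the one in the chapter: extrapolating early growth linearly overstates where a diminishing-returns process ends up.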
And the last model I want to talk about, I guess it's actually more than one model, but local
interaction models.
Yeah, so these are fun.
These are like super fun.
So local interaction models, these are some simple computer models... not that convexity and
concavity aren't fun, but they're fun for a smaller set of people, I think. So these local interaction
models are models where you think of, like, people, first off, maybe on a checkerboard,
but eventually you can put them on a network. And what you imagine is there's a set of things where my behavior
depends on the people around me. So one of the examples, a simple example I give often, is sort of
like, how do you greet people, right? So do you shake hands? Do you bow? Do you fist bump, right?
it doesn't matter what you do, but you do the same thing that other people do, right?
So if you go to bow and I go to shake hands, I'm going to poke your eye out.
So it's not going to work.
So what you want is, these are in some sense what we call a pure coordination game.
What I'm trying to do is I'm trying to just coordinate with the people I'm interacting with.
This happens on so many dimensions.
So in an earlier book I wrote it called The Difference, I talk about where you store your
ketchup. Do you store your ketchup in the fridge, or do you store your ketchup in the cupboard? Again,
it doesn't matter what you do. No, it does matter. Just always the fridge. Yeah, and the cupboard people
think the fridge people are crazy. Once a doctor said to me, Scott, you may think this is funny,
but you have to store ketchup in the fridge because it has vinegar in it. And I said, where do you
store your vinegar? He said, in the fridge. And, like, the whole room was like, what, are you a crazy person?
Like, you know, you don't refrigerate vinegar. The same is true of soy sauce.
There's soy-in-the-fridge people and soy-in-the-cupboard people.
And it doesn't matter what you do, but whatever you do takes on a lot of importance.
It defines who you are.
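The greeting example is a pure coordination game, and the local dynamics are easy to sketch. Here agents sit on a ring (a stand-in for the checkerboard or network Scott mentions) and conform to their neighbors; the population size, update count, and choice labels are all invented.

```python
import random

random.seed(3)

# Agents on a ring each pick a greeting convention at random; then,
# repeatedly, a random agent conforms to its two neighbors whenever
# they agree, the best response in a pure coordination game.
n = 30
choices = [random.choice(["bow", "shake"]) for _ in range(n)]

def mismatches(choices):
    """Count neighboring pairs who would poke each other's eyes out."""
    return sum(choices[i] != choices[(i + 1) % n] for i in range(n))

before = mismatches(choices)
for _ in range(2000):
    i = random.randrange(n)
    left, right = choices[i - 1], choices[(i + 1) % n]
    if left == right:
        choices[i] = left  # match both neighbors when they agree

after = mismatches(choices)
print(before, after)
```

Each update can only remove miscoordination, never add it, so the population settles into blocks of shared convention, which is the sense in which local interactions produce local cultures.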
So one of the fun things I do in class, which I also talk about in the book, is you can imagine
you're actually playing a whole series of local interaction models.
And that collection of local interaction solutions, you can think of as comprising a part of culture.
So my wife, Jenna Bednar, who we mentioned before, is a political scientist.
She's written some papers on this where you can think of culture as, like,
a set of sort of coordinated behaviors across a variety of settings.
And so I'll do this in class.
I'll say, okay, do people use their phones at the dinner table?
Do people take their shoes off in your house?
Is the TV on?
Do you hug your family, right?
Just a whole set of things.
And then I'll have people sort of, I have the students vote, like, you know, using a Google
form, like, you know, which ones they do.
And we sort of have, like, you know, here's the modal response across all these things.
Here's the people who are correct, which are the people who do what I
do, right? And I'm like, you know, these are my people. You can move to the front and you all get
A's. And the best part about teaching this was one time, this is like 10 years ago, this kid comes up
after class and he goes, oh my God, oh my God, this explains it. And I'm like, explains what? He's
like, my girlfriend's family. And I was like, what? And he goes, everything I do, they do the
opposite. And what's weird about it, what's great about these local interaction models is that
prior to that, he had thought intrinsically they were just weird people, right? Right. He just thought
these are crazy people. He goes, they have their own napkins, like, they always had their own napkin
with a napkin holder. They take their shoes off. Like, you know, they always have the radio on in the house.
It was just a whole set of things that they did. Like, they hugged each other, right? And he's like,
well, they're hugging me, right? All these things that they did, he thought that they were just, like,
part of their genetic makeup, an essential part of their character.
When in fact, it was just a series of coordination problems that their family had solved, right?
The other example I have in this space that was great is somebody told me this story about how
at New Year's Eve one year, she'd been married into this family for 20 years.
She said, you know, look, I love the family.
They're great.
But I hate the boiled cabbage and beet soup on New Year's Eve.
You know, 20 years ago, I think I can say that.
It turned out everybody hated it.
And nobody mentioned it.
And it turns out, like, I guess somebody who'd been dead for, like, 15 years supposedly liked it, right?
Yeah.
And then they decided, like, going forward that they would make, like, one ceremonial beet or something, and then they had a party.
No, and so I think what you don't realize, though, is that a lot of who we are and what we do comes from these local interaction models.
Now, let's make this serious for a moment, away from, like, the ketchup and the bowing.
When I go work for a firm, or if I'm working in an organization, you know, stock analyst,
psychologist, whatever I'm doing, the mental models that we use are, like, locally rational.
Right.
I mean, it's like, oh, you're using that mental model.
It's easier for me to use that model.
And that then works against this diversity, right?
So it really becomes a super important thing. And what's also very funny is
that maybe your mental model is better than mine, but it's still worth it for me to hang on to my
mental model, because it's giving us that diversity.
So collectively, it's worthwhile.
But there's going to be, so again, back to the point you raised earlier about evolution,
and this is where the many-model thinking becomes fun, is you realize, like... so I
go work in some organization, I'm working in some community of practice, and I've
got a collection of mental models I'm using.
It just becomes easy for me to start coordinating on other people's mental models, right,
using other people's terminology, because it's more efficient.
And then I know how to appeal to them, how to persuade them, how to
interact with them, how they see the world, and then they're predictable. This kind of goes back to...
have you read Ender's Game? No. One of the key moments in Ender's Game is, like... Ender, who's this kid
who ends up saving the world, totally fictional book by Orson Scott Card. We just read it with my kids.
And one of the key moments is like, he's like, I can defeat my enemy, but only when I really understand
them and how they think and how they view the world. And I always thought that that was really
interesting, right, because I'm trying to teach my kids that part, you know, one model that I want
them to have, if you want to call it a meta model, is perspective taking. Right. What does this problem
look like through the lens of this person? What does it look like through the lens of this person and sort
of like mentally walk around the table and then sort of like have a hierarchy too? What does it
look like to shareholders? Like, what does it look like to the government? What does it look like to all the
people that sort of interact with the system? And through that, you can get this more nuanced view of
reality. And if you see the problem through everybody else's lens, you know how to talk to them
in their sort of language or in a way that might be more able to appeal to them.
That's such a great point because one of the things that I struggle with in this whole space
and I think it's a good place to struggle is as you move from sort of very formal models,
like, you know, I'm fitting, you know, some sort of hierarchical linear model, versus some, like,
abstract perspective taking, versus some sort of notion
of, sort of, like, a disciplinary approach to a problem.
So let me give a very specific example that I find cool to think about,
which is the drug approval process.
So if you look at a company like Gilead, Genentech, right,
somebody constructs a molecule.
Then they've got to decide, okay,
is this molecule something that we can use to improve people's health?
One perspective to take on that is just purely a pharmacological perspective, right?
So the body chemistry, how does it work, right?
Just pure signs.
But then there's also sort of a sociological perspective
in terms of, like, you know, will people, you know, how will people take this?
How will this get passed on?
Will it get abused?
Could it be abused?
What uses we think?
There's also sort of a purely almost organizational science business school perspective
in terms of, if it's complicated to explain, how do we educate the doctors
in terms of how to use this, right?
Then there's also people who understand just the political process, which is like, you know,
what's the likelihood it'll get approved?
Even if it works on all these other dimensions, can we get this through the government
an approval process, if it's somehow something different, given that they've got boxes that
they use. So what you've got is you bring all these different disciplines to bear, and you've got
it, just like you're saying in this book. If I'm the CEO of Gilead and I've got to make the call,
do we take this drug to market, I actually have to hire people who can take all those different
perspectives, right? Otherwise, you know, I probably won't be CEO for very long, because I'm not
going to do well. But then you realize... let's make things just a tiny bit less abstract for a moment
and think about traditional arguments for a liberal arts education, right? The reason you want to
read literature from a whole bunch of different vantage points. So the reason you don't want to just
read sort of the great man view of, you know, Canadian history or U.S. economic history or something
like that is because there's all these other people who experienced that same thing and saw it
from a very different perspective. But so what's funny here is that
I'm kind of making this point.
When you think about many models, you could think, like, I'm making this point.
Like, oh, gosh, people should be spending more time learning technical stuff.
And on the one hand, that that's kind of true.
People should be learning technical stuff.
But the core argument I'm making is very similar to the argument that people at the other extreme are making in terms of, like, the reason why liberal arts education is so important is the ability to do perspective taking, right?
To sort of learn to see the world through different eyes.
I think where the difference is is that I'm a, you know, I think I'm a pragmatist in a way, right?
I mean, I just see so many opportunities.
And so I feel like I'm coming at from a much more sort of pragmatic perspective in terms of going out there and making a difference in the world as opposed to just purely appreciating all these different ways of seeing things.
And the reason that distinction matters is, in literature, it could be that every perspective is worth considering, right,
worth engaging and thinking about, because there's no end game (ironically, given the name of the
book, you can imagine). But if I'm making an investment decision, if I'm worried about drug
approval, if I'm trying to write a policy to, you know, reduce inequality, if I'm trying to think about
how we teach people, there is an end game, in a way. There are things we can measure. There are
performance characteristics. And so it could very well be the case that you could say, I think
we should think about it from this perspective, I think we could use this model, and then we can
kind of beta test that perspective in that model and think, no, we shouldn't, right?
So there's a difference, I think, in the approach I'm promoting, in that, yeah, you throw out
a whole bunch of models, but if the spaghetti doesn't stick to the fridge, the spaghetti doesn't
stick to the fridge, and you let it go, right? It's probably something
you keep, because there's going to be other cases where it does work, but the point is there's
going to be cases where it doesn't, and so...
You don't want to force it.
No, you don't want to, so there's limits of inclusion, right?
in the sense that, like, you only want to be inclusive to things that are actually going to help you
do whatever it is you're trying to do. I think that's a great place to sort of end this conversation.
I feel like we could go on for another few hours. But I want to thank you so much for your time,
Scott. This has been fascinating. Thanks. It's really fun to have these open-ended conversations
and I really appreciate the format, you know, as opposed to simple, you know, answer, respond,
but to give me time to elaborate on the book and what I've been thinking. Thank you.
Awesome. We'll have to do part two at some point. Thanks.
Hey guys. This is Shane again. Just a few more things before we wrap up. You can find show notes at farnamstreetblog.com slash podcast. That's F-A-R-N-A-M-S-T-R-E-E-T-B-L-O-G dot com slash podcast. You can also find information there on how to get a transcript. And if you'd like to receive a weekly email
from me filled with all sorts of brain food, go to farnamstreetblog.com slash newsletter.
This is all the good stuff I've found on the web that week that I've read and shared with
close friends, books I'm reading, and so much more. Thank you for listening.
Thank you.