Freakonomics Radio - 494. Why Do Most Ideas Fail to Scale?
Episode Date: February 24, 2022

In a new book called The Voltage Effect, the economist John List — who has already revolutionized how his profession does research — is trying to start a scaling revolution. In this installment of the Freakonomics Radio Book Club, List teaches us how to avoid false positives, how to know whether a given success is due to the chef or the ingredients, and how to practice “optimal quitting.”
Transcript
If you are an American of a certain age, you may remember when Kmart, the discount department
store chain, was everywhere.
Kmart is more than any store you have known before.
Kmart means you get quality.
At one point, Kmart had more than 2,300 locations in the U.S.
It was a famous brand with one very famous in-store promotion.
Sure, sure. The Blue Light Special. Yeah, the old Blue Light Special just about cost me my marriage.
John List is an economist at the University of Chicago.
His Blue Light Special story goes back to when he was a graduate student at the University of Wyoming.
I'm sitting in our house and it's like mid-October and 10 degrees and snowing.
You can imagine a cattle town.
You know, there goes some hay going down the road.
And my wife is in a long rant about how much she hates Laramie, Wyoming.
And then she looks out our front window and down the street, there's a Kmart, which is having a blue light special and says, I can't even get
away from the bleeping blue light special at Kmart. That's how big a deal the blue light special was.
It had been invented by an assistant store manager at a Kmart in Indiana.
The Blue Light Special is what Sam Walton, the famous entrepreneur who started Walmart,
said is like the greatest innovation in the world.
At least in retail, maybe, right?
At least in retail, absolutely.
People are milling around the store, and all of a sudden, this blue light, like on the top of a police car, right? The blue light is going off. There's also an announcement over the public address system.
Attention, Kmart shoppers!
There's a blue light special happening right now.
It's a flashy blue light that is going to be seen.
Hurry now.
When they're gone, they're gone.
And what that means is everyone should run like cattle toward the blue light because there's going to be a great deal.
And I guess it did a couple different things, right?
It helped them clear out the stuff they didn't want, and it gave shoppers this rush of adrenaline that kept them in the store longer and
made them spend more money. That's right. This works on both the supply and the demand
sides. Absolutely. The blue light special became such a phenomenon that Kmart did what any
right-minded company would do, probably what you or I would do too. They took their success
and they tried to make it bigger.
They tried to scale it up. What they essentially did is they centralized the Blue Light Special.
The headquarters just outside of Chicago would decide where and when and what product would be placed on the Blue Light Special and that the Blue Light Special in Laramie, Wyoming's Kmart
would be the exact same as the one in Honolulu.
This meant the manager of a given Kmart store
no longer had the autonomy to sell off their particular surplus,
nor could they cater to the local customers they knew well.
You can see why this might be a problem.
If you are a customer in the market for a cheap snow shovel,
a blue light special on swimsuits may not be so attractive.
Kmart headquarters also began setting the sales schedule months in advance.
So how did the blue light special do once it was scaled up?
Yeah, it was ruined.
So, John, is that really a failure to scale or just a failure to have common sense?
Well, both. So what my book is trying to make the claim about is that all of our failures to scale, and the world is replete with failure after failure after failure,
I want that to be viewed as a failure of common sense.
Look, my book is kind of the common sense checklist.
The book that John List just published
is called The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.
Here's a short passage read by List.
Most of us think that scalable ideas have some silver bullet feature, some quality that bestows a can't-miss appeal.
That kind of thinking is fundamentally wrong.
There is no single quality that distinguishes ideas
that have the potential to succeed at scale from those that don't.
And that's why, as List argues,
the world of scaling needs a common-sense checklist. Kmart today is nearly extinct. That's probably not because they ruined the Blue Light Special; it's more complicated than that. List wrote his book to help people identify potential problems before they scale up, and to look for solutions that will, if not guarantee success, at least improve the odds.
The first secret is use incentives that can scale.
This applies to making health care or education policy, trying to grow a startup, or just helping your community get along better.
In this installment of the Freakonomics Radio Book Club,
an economist who has already revolutionized
how his profession does research
is trying to repurpose that research for the rest of us.
A scaling challenge, if ever there was one.
We all have lifelong ambitions, don't we?
Today on Freakonomics Radio,
how to know if an idea is scalable
before you actually try to scale it,
how to avoid false positives,
how to know whether a given success
is due to the chef or the ingredients,
and maybe my favorite,
knowing when to quit.
All that is coming up right after this.
This is Freakonomics Radio, the podcast that explores the hidden side of everything with your host, Stephen Dubner.
There's one sentence in your book that I read over and over and over because it was so powerful,
but honestly, I don't quite get it. You write, we need to move from a mentality of creating evidence-based policy
to one of producing policy-based evidence. Can you explain and maybe give an example?
Effectively, what I'm calling for here is a revolution. It's a small revolution,
but it nevertheless is a revolution. It's a call to researchers and to policymakers and business
people to imagine, if I scale an idea, what are the constraints that I should take account of
when I'm actually doing the original research? Let's talk about the Chicago Heights Early
Childhood Program. The Chicago Heights Early Childhood Program.
The Chicago Heights Early Childhood Center, or CHEC,
was a program that List set up with several other economists,
including Steve Levitt, my Freakonomics friend and co-author.
They created a preschool in a low-income Chicago neighborhood in order to explore the incentives for students, parents, and teachers
that would produce the biggest
educational gains. Essentially, in that program, which we started from scratch,
we have to hire teachers. And my goal that entire time was, let's hire teachers in the exact same
way that Chicago Heights would hire teachers. In other words, let's not go cream skimming and hire the best Teach for America type candidates I can find for this pilot program.
Exactly. Let's do it exactly like Chicago Heights. Now, that sounds good. And I think it's the first step, but I should have taken one more step. Wait, let's back up. It sounds good because you're saying in a lot of research
that looks great on paper, that policymakers try to scale up, it fails because the way the
original research was done on a small scale, you use the best of everything, the best leaders,
the best methodology, right? But you can understand why researchers want to do that
because they want the research to look successful. You are consciously saying, I'm not going to try to hire
the best teachers I possibly can because even though that would make my initial research look
better, it will make it less likely to scale up. That's 100%. Researchers tend to start with the best case scenario, best of everything. Now,
that's great for an efficacy test. But the problem is, after researchers in the social
sciences do an efficacy test, they forget to tell everyone else that it was an efficacy test. We should have actually taken one more step
and oversampled bad teachers. Because if I want to scale that up around Chicago,
I probably have to hire 30,000 teachers. That means I want to know, does the program work with well below average teachers? When you have to hire a
bunch of people from the same city, you're going to have to have your program work with not so good
teachers. If you want to keep the budget the same, sure, you can bust the budget, but with the same budget, will I have the same voltage?
And the fact is, I won't.
What does List mean when he talks about voltage?
After all, his book is called The Voltage Effect.
Here's another passage from the book.
A pharmaceutical company develops a promising new sleep medication in its lab,
but the drug doesn't live up to its promise in randomized trials.
A small company in the Pacific Northwest successfully launches a product,
then expands its distribution only to find that it sells poorly on the East Coast.
These cases are all examples of a voltage drop. Voltage drops
are what happens when the great electric charge of potential that drives people and organizations
dissipates, leaving behind dashed hopes, not to mention squandered money, hard work, and time.
And they are shockingly common.
And here again is List in real life.
So what I'm calling for here is to reverse the notion of evidence-based policy.
When I talk about policy-based evidence, I'm saying at scale, with all of the flaws that the program or the institutions or the implementers will have, does my program still work?
What you're talking about now is what you call in the book, the chapter is called, is it the chef or the ingredients?
The example you use is a restaurant, which I think everybody can identify with.
There's a great restaurant, and it's got a great chef.
And then that chef tries to replicate and open, let's say, 10 or 20 restaurants of the same.
And if it was the chef's magic that was making that original restaurant fly, then that's going to be hard to do.
But if it's more the style of the menu that can be replicated and so on. Now, the chef versus the ingredients
issue would seem, however, to be at play in almost any scaling project, including your
Chicago Heights Early Childhood Center. So how do you get around that?
There are different actors within every idea. Now, there are some cases where when you think about chef versus
ingredients, in the CHEC case, I think of the chefs as the teachers, right? And the ingredients
might be the curriculum or the kids themselves. When I first started to think about scaling, I started to consume every idea and policy that scaled and
didn't scale. So there are kind of these moments where it keeps coming back and back. One of them
was fidelity, where you scale a program that was never tested to begin with. That's obviously a bad thing. But the other one was any time
there was a human at the center of the program, every time that would fail.
And it got me thinking, you know, humans just don't scale. Think about smart technology.
A lot of your listeners might have smart thermostats in their home.
So you have a bunch of really smart engineers,
and they make a bunch of these smart thermostats
because they're saying, look, we're going to change the world.
We're going to conserve a bunch of energy.
But what happens is I have one of these smart thermostats in my own home. What I do is I undo the presets.
The technology is the technology.
But people like me are so dumb that we undo the original settings so much that all of the good stuff is gone.
Because the engineers assumed it was Spock,
essentially, you know, this unswervingly rational being,
when really you're selling that product into a bunch of Homer Simpsons.
John List's own vocation, academic economics, is heavily populated with Mr. Spocks, people who've spent their entire lives thinking hyper-rationally. List is best known for taking economic experiments out of the lab and into the real world. As for schools: for both his undergraduate and Ph.D. programs, List went to the University of Wisconsin-Stevens Point and then the University of Wyoming. Many of his colleagues have parents who are professionals, even economists themselves.
List is the son of a truck driver and a secretary. He and his older sister were the first two people in the family to go to college. When you look at my background, I think the lived experiences
are exactly my secret sauce. When I was in high school and as an undergrad,
I would set up every weekend at baseball card conventions and I would buy, sell, and trade
baseball cards to make extra money. And that dealing in that market led me to start to do field experiments, which in the early 90s were a new innovation in economics.
And looking at behavioral economics in markets, as a kid growing up, detasseling corn, helping my dad with his one-man trucking company, washing dishes, working in warehouses. These types of experiences allow me
to look at people and markets in a very different way than most of everyone else within the field
of economics. List's combination of lived experiences, as he calls it, and his original research have also made him a hot commodity for private firms like Uber, where List once served as chief economist.
I asked him why a firm like Uber would even want an academic in that role.
They want to bring in an academic because two reasons.
One, the academic is not afraid to instill economics around every part of the company.
The other one is firms are beginning to recognize that there are a lot of secrets that are hidden in academic journals and in academic minds that might actually be fruitful.
So was that the case with Uber's then-CEO, Travis Kalanick, who brought you in?
Did he feel you knew stuff that Uber could seriously profit from?
I think that's right.
So when I first met Travis, it was sort of shocking to me how the initial interview went down.
Meaning he was a little challenging.
Yeah, he was quite aggressive.
He didn't treat you with a lot of deference.
No, no, he didn't really give a damn that I was a Chicago economist,
which I appreciated, actually. On the other hand, it was normal for me.
Yeah, for people who don't know, an economic seminar at the University of Chicago is about
as cutthroat as any corporate boardroom, right?
That's right.
But the one thing that Travis had was confidence.
In fact, I think he was probably the most confident person I've ever met.
To take a startup, you know, which is basically zero, and to take them to $66 billion in seven years or so,
you have to have a relatively large amount of confidence, I think, in your ideas and above
all kind of your instincts. So you end up leaving Uber for Lyft, their smaller rival,
but Kalanick, the CEO, was ultimately kicked out of the company by the board.
In your book, John, you write, I don't believe Travis Kalanick is a bad person. He's a good person who made several bad calls at scale. And you write about
Uber's problems under Kalanick, including a toxic work culture. Quote, as someone who isn't a woman,
you write, a queer person or person of color, I wasn't acutely aware of the power imbalances that many employees faced.
So, John, reading that, it struck me that when you're scaling up, the people in charge of the
scaling are very often not, as you put it, aware of power imbalances because they are the power.
And I'm wondering if there's any lesson you can draw from that for all of us. So what I did observe is a very aggressive environment. And it's an environment that's
very similar to what the world of academia is like. And it's really hard if you've never gone
through it to use theory of mind. So theory of mind is you can put yourself in the shoes
of someone else. When I'm sitting in a room right now with a woman or a person of color,
it's very difficult for me to understand exactly what they're feeling and what's going through
their mind when somebody is presenting
at the front of the room and just getting killed. That's an environment that I've been
raised in in the academy. But now in retrospect, you can see that that kind of environment
just will not hold up at scale. Most people, they're just not built like that.
Imagine that you are a small startup firm or institution, maybe it's a nonprofit,
and then you have some success and you want to get bigger. You want to scale up.
Based on what you just said, what are a few things a firm can do as they're growing to be better at
recruitment and hiring especially?
I would say be as diverse as you possibly can as early as you can. What are the types of diversity you're talking about and what are the advantages of diversity?
So the types of diversity are both observable characteristics like gender and race, for example, and unobservable characteristics
like your socioeconomic status of how you were raised. When people look at me, what they see
is a white man. They don't see underneath that this is a person that has very different experiences
than many other white men in the academy. And I think it's useful to have
these types of diversity from a very early phase in an organization,
because it is not only a way to welcome future candidates from all walks of life,
but also you have a much deeper and more diverse set of ideas and solutions to problems.
And I believe organizations are much more productive because of that.
John, give me an example from your own experience of what you would consider a serious voltage drop.
Sure. A good example is, let's say at Lyft, where I'm the chief economist and say I'm trying to
raise the wages of our drivers. In one case, in the Petri dish, let's say, I end up doing an
experiment where I give 5% of the drivers the wage increase. Now, in the end, they're all happy.
They work more. They end up taking in more money per hour.
Everyone's happy.
Now, if I roll that out to the entire group of drivers, you know what happens?
That completely undoes the dynamics of the wage increase.
Everyone works more, but guess what?
They drive around with an empty car more often, and in the end, they make about the same.
So that's a voltage drop,
but it's a voltage drop for a specific reason. It's because the market comes to a new equilibrium.
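The Lyft wage story is a compact example of a market re-equilibrating: raise pay, supply expands, utilization falls, and per-hour earnings drift back toward drivers' outside option. Here is a toy simulation of that mechanism (all numbers and the model itself are my own illustrative assumptions, not Lyft data):

```python
def equilibrium_hourly_earnings(pay_rate, demand=100_000.0,
                                outside_option=18.0, supply_step=10.0):
    """Toy ride-share market. Rider demand (paid hours per week) is
    fixed; drivers keep adding hours while driving beats their outside
    option. Utilization = share of an hour with a passenger, so it
    falls as supply grows, eroding per-hour take-home pay."""
    supply_hours = demand  # start fully utilized
    while pay_rate * min(1.0, demand / supply_hours) > outside_option:
        supply_hours += supply_step  # more hours chase the same riders
    return pay_rate * (demand / supply_hours)

before = equilibrium_hourly_earnings(pay_rate=20.0)
after = equilibrium_hourly_earnings(pay_rate=21.0)  # 5% raise for everyone
print(round(before, 2), round(after, 2))  # prints 18.0 18.0
```

An across-the-board raise expands hours until utilization falls just enough to compete the raise away, which is why the pilot (too small to move the market) looked great while the full rollout left drivers earning "about the same."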
If I were to ask you the single best example in the history of the world of the voltage effect
of an idea or a product that scaled up unbelievably well. What's your example?
I would say Jonas Salk and the polio vaccine. He was like a lot of other scientists where he has an innovation, he then tests it out on his own kids, and then he ramps it up to
test all kinds of different kids. So it works for all kids. He finds that it's not a false positive.
Boom. We leverage the healthcare system and the delivery mechanism is taken care of.
What's an idea or product that seemed so great, there was no way it was going to fail,
and yet it did?
In the public policy world, it's the old D.A.R.E. program
that Nancy Reagan was really pushing.
You might remember that from our high school days.
I do.
Drug something.
Exactly.
I don't even remember what D.A.R.E. means anymore.
Here's what it stands for.
I'm getting this from your book.
It's the Drug Abuse Resistance Education Program,
and this is where police officers would go into school classrooms and teach kids about the dangers of drugs.
Exactly. It's basically an information program.
You know, there's kind of a cool study done in Honolulu.
And Nancy Reagan said, you know what?
I want to effectively stamp out drug use amongst teens.
So she took the D.A.R.E. program and blew it up,
and essentially she looked into every possible television set in America and told teens,
just say no. Say yes to your life, and when it comes to drugs and alcohol, just say no.
That was effectively her campaign. Now, in the end, her drug abuse
resistance education program ended up being not very effective because it never had voltage to
begin with. It was simply a false positive. False positives, List writes, are a common
cause of scaling failures. False positives arise for all sorts of reasons.
Bad measurement, wishful thinking, but also simply because the world is messy and it can be hard to establish cause and effect.
This is a particularly big issue in the social sciences, where you can't just hold all factors constant except for the one you're trying to measure. List says the original finding of the D.A.R.E. program in Honolulu wasn't fraudulent and
it wasn't meant to mislead. It just turned out to be wrong. Unfortunately, this didn't become
clear until after Nancy Reagan had taken her message to the airwaves. The science, in effect,
tricked her and she wasted a lot of time and
money. I remember reading a paper, it must have been 10 years ago, about these other mass media
anti-drug ads. I'm not sure which ones exactly, but do you remember that Partnership for a Drug-Free
America TV ad campaign, This is Your Brain, and then they crack an egg in a sizzling pan.
This is your brain on drugs.
And this is your brain on drugs.
Any questions?
I do remember that.
And the paper, as best as I recall, showed a similar result. And this was a nice experiment
because if I recall, the ad would roll out in different markets at different times, which let researchers like you measure the effect. And it turns out that drug
use or abuse or whatever was being measured didn't fall. And in some cases may have actually
increased. Do you know that research? I don't, but it sounds fascinating.
And I was trying to figure out like why on earth could it have, you know, backfired.
But then I was thinking about it, like, if I'm a 16-year-old high schooler who sometimes
goes to school and maybe a little bit more frequently smokes weed, and I'm getting up
in the morning, and I'm thinking about going to school, and then I see that TV ad, and
really what they're showing me is a fried egg.
And I think, ooh, that looks good.
I think I'm going to smoke some weed and eat some eggs.
But I mean, it's hard to tell cause and effect, isn't it, in the real world?
Look, your story might be right.
I would bet on false positive.
I think one of the reasons why we don't make as big of an impact in the policy world that we probably should is because of these dual
circumstances of, one, you know, I'm not really sure that this is going to ever scale. And two,
are we sure that this is a correct result? This makes me think of a different example,
research from maybe 10 years ago that was recently challenged. This was work led by the Duke behavioral scientist
Dan Ariely, and it showed that if people are asked to sign and verify a tax form before they
fill it out, they're more likely to be truthful than if they fill it out first and sign at the
end. But now it's come out through some detective work from other researchers that the underlying data were maybe manipulated, perhaps even faked.
Now, Ariely has acknowledged the data weren't what they appeared to be, but he says it was
essentially an accident or a mistake.
What's your reckoning of that sort of situation where whether or not the deception was
intentional, there is a finding that policymakers may have acted upon, but is now
called into question because of academic shenanigans. I would consider this episode a
black eye for economics and more broadly for science. And to me, the solution is, first of all, quick replication. Journals are now starting to demand the data
upfront. And it does make sense that the people who are actually reviewing the paper itself for
publication have access to the data. Had they not before? As far as I know, it is very, very rare for any referee
to have access to data. So it's not just about this paper. I'm making a more general claim.
I think one of the key unlocks for economics and for the social sciences more broadly
is essentially a partnership or a group of partnerships with organizations
to help us understand what's going on in the black box. Security checks and guardrails
that need to be put in place. For example, use Benford's law to see whether the data
have been fabricated. This is looking for some kind of pattern in the first digit or something. Is that right?
Exactly. So Benford's law is that humans tend to give numbers that are very different than,
say, a machine. There are different safeguards that you can set up. And I think we need to do
those. And I think we will do those as a science. Because let's be clear, we're kind of in the first inning or two of data generation,
especially in economics.
You know, you can call it a crisis.
You can call it a revolution.
But it all points to the same thing, that we need to do better.
And the reason why we need to do better is because science can be so powerful. Look what
just happened with COVID. The scientists stepped up and gave us great vaccinations. Merck is now
stepping up to give us something that's going to look like a pill that's going to help us after we
get COVID. I think we can do the same thing within the world of social sciences, in particular
economics. And we're moving in that direction, I think.
Coming up after the break,
several pilot programs that give out a universal basic income
have looked promising,
but will they scale up?
Facebook has scaled up,
but is that a good thing?
And how do you achieve optimal quitting?
I'm Stephen Dubner.
This is Freakonomics Radio.
We'll be right back.
John List is an economist at the University of Chicago and the author of a new book called
The Voltage Effect.
He was inspired to think about scaling problems by the simple fact that a lot of research that he and other academics produce often fails to translate into the widespread gains they envisioned.
We spent most of the first part of this episode talking about failures,
so let's start getting into solutions.
Absolutely.
Since List is an economist, you won't
be surprised to hear where he starts. The first secret is use incentives that can scale.
Here's an example from the voltage effect. This one uses a pair of incentives that most of us would like to avoid. The Dominican Republic had a problem.
Millions of citizens weren't paying the taxes they owed. In 2018, the country's equivalent of the IRS
put into action a campaign to increase tax compliance. And when they came calling for help, several colleagues
and I offered to join the fray. With the caveat, of course, that we could run a natural field
experiment. The main thrust of the campaign was a series of messages the government sent to
citizens and companies. Almost every person or entity that decides not to pay
their taxes is essentially weighing the benefits versus the costs of that decision and concluding
that the possible gains outweigh the possible losses. The goal of the messaging campaign was
to tip the scales so that the potential costs became more salient in
people's minds than the potential benefits. One of our messages sought to do this by informing
or reminding people of the jail sentences for tax evasion. Another message informed or reminded
people about a new law that made any punishments levied for tax evasion a part
of the public record. In other words, the names of those who got caught not paying their taxes
would be made available to anyone in the Dominican Republic. List and his colleagues sent out more
than 80,000 messages to self-employed Dominicans as well as firms.
Half received the message about jail time,
and half received the message about tax offenders' names being made publicly available.
The benefit-cost analysis for those deciding whether or not to pay their taxes
suddenly looked very different.
Ka-ching!
Our intervention worked.
This simple messaging program brought in an extra $100 million in tax revenue. Which of the two
messages do you think was more effective? The one threatening public exposure or jail time?
Yes, jail time. Now, if the government had to actually put all those
tax cheats in prison, you'd run into a scaling problem right there, finding enough prison
capacity. Fortunately, the mere threat of imprisonment was effective. It's also pretty
much free and therefore eminently scalable. In addition to the precise targeting of incentives,
there is another basic insight from economics
that List says is a key to successful scaling.
Make all of your decisions on the margin.
Give me a half sentence explaining what it means
to think on the margin for non-economists out there.
What that means, basically, is don't be a prisoner
to looking
at averages in the business world, in the policy world. Whenever data are presented, they're
presented in averages. And anytime you can turn those averages into, you know, what happened with
the last dollar that I spent or what's going to happen with the next dollar that I'm going to
spend, that's really what you want. So at Lyft, we try to bring in more drivers by advertising on Google and Facebook.
So the marginal way to think is for the next million dollars I spend on Facebook or the
next million dollars I spend on Google, how many drivers am I going to bring in?
As opposed to I've spent $50 million this quarter,
on average, how many drivers did that bring in? Absolutely. Average is looking at the entire pool.
So those can be very, very different. There might be some really low-hanging apples to pick right away in the
average. And then as you scale, what you're scaling over are the marginal people. You're
not scaling over the 2 million people who have already consumed it in the past. So that's why marginal thinking becomes super important.
Facebook's parent company, Meta, recently had a market cap of around a trillion dollars, and there are nearly 3 billion users on Facebook.
So that's a big company. But as we all know, it started as a tiny little thing called
The Facebook by Mark Zuckerberg and others when he was an undergrad.
Do you see Facebook as a scaling success story or as a cautionary tale about scaling?
I see it as both. Let's start with the success part first.
Facebook is a great example of a good that has network externalities. If only two of my friends
are on Facebook, it's actually not a very important service for me. But if all of my friends and all of their friends and all of my relatives
are on Facebook, that becomes much more valuable to me. Now the dark side. When a company grows
as large as Meta, which of course now includes both Facebook and Instagram, those consumers are getting information about elections or vaccinations
or what have you. And when the good stuff is spreading, like the truth is spreading,
this is great, right? There's a high voltage, you know, but when the bad stuff is spreading,
like you might think about a wildfire. And there we need to take extra care,
because then you're undoing potentially some of the good stuff.
On this program, we've discussed the potential benefits of a UBI, universal basic income,
which is an idea that economists have been thinking about for decades, including Milton
Friedman going way back. There are a number of pilot programs around the world, and generally the results that are reported,
and we can't account for the results that aren't reported, tend to show success. To me, this is
a really fascinating and pretty high-stakes case where scaling really, really, really matters,
and where it could be very hard to
make an accurate prediction based on small pilot programs.
If I'm looking at a series of successful UBI pilot programs, what questions do I want to
ask to ensure that it might be scalable?
With UBI, my biggest concern scaling-wise would be I want to make sure that there are not important general equilibrium
effects. And what I mean by that is you can do a small scale study and have a small group of people
involved in UBI and that's great. It will show great results. But what happens when 10, 20, 40, 50% of the local labor market is part of UBI? If we scale it up, what I'm talking
about here now are spillover effects on the local labor market or spillover effects in the local
community and whether we understand those at scale. Okay, let me ask you about something we've discussed on this show
several times, and that's the upside of quitting. Now, that very phrase would seem to be
at cross-purposes with the very American notion of never quitting anything. It also would seem
to be at cross-purposes with the notion of grit, which is the name of the book written by our mutual friend, Angela Duckworth. But in your book,
you've introduced the phrase optimal quitting. How do you know it's time to quit, whether it's
a startup firm or some vocation or a passion of your own? Yeah, it's a great question. And the reason why it's a great question is because it doesn't have one answer. Each of our situations that we're talking about in terms of
what should I quit? How should I quit? When should I quit? There is no silver bullet. But what I can
give are a few features that a lot of times people don't think about when they're talking about quitting. So the first feature is,
if I quit, what am I going to do? Usually when we think about quitting, it happens because
something in the workplace, in our life has gone wrong. So then you say, I'm going to quit.
But you should, just as often or maybe even more often, think about what is my opportunity that I'm giving up if I stick with this job.
If that opportunity changes, like it's gotten a lot better, I should quit more often.
Just as often as you're pushed to quit, you should think about being pulled to quit.
Maybe every three or four or six months, look and see what jobs are out there that suit your needs.
Look at what apartments are available or what houses are available every six or so months.
Should this go for girlfriends and boyfriends as well? Well, that's where I'm going to leave that alone because I don't want hate mail from a million boyfriends who say,
because of you, my girl has just broken up with me. So, John, I really appreciate the wisdom of
acknowledging that every quit opportunity is unique, essentially, right? Every person is
different and the situation they're in is different and it's plainly hard. You write in the book that
you've failed to optimally quit many times. Can you maybe give an example?
The one example that I start out the chapter with is when I think I quit at the right time.
And that's on my golf dream.
It was a realization that I just wasn't good enough at something. I played with, you know,
Steve Stricker, Jerry Kelly, two PGA pros. When I was in high school, we played in the same
tournaments. But then when I went to college and played golf in college, there was that two or
three year period where their games grew in very important
ways and mine just didn't. And you really hadn't realized that until you played with them, same
course, same day, and could compare your performance. Yeah. Yeah. I saw their scores. I saw mine. I
thought I played reasonably well. So you might say, look, the guy's a quitter. He's a loser.
I ended up realizing that my dream of becoming a golf professional was fleeting, and it was
just a dream.
And I can well imagine I might be one of the most popular golf club professionals in Eau
Claire, Wisconsin, but I would have missed out on a lot in life.
So your experience suggests that one thing you can do if you're thinking about
quitting or maybe questioning your longevity in something is to really force yourself to compare
your performance with your peer group. I think that's right. Anytime that you
are in a competitive market or in a situation where only some people will win and many others will lose, I think you always have
to be looking to your right, looking to your left and saying, do I really have a comparative advantage
at this particular activity? And that's hard because nobody wants to say that they're not
good enough at something. But we need to do more comparison
shopping if we want to be serious about changing the world. The secret to high voltage scaling is
understanding when to quit. And the general lesson there is that people don't quit soon enough.
So, John, it strikes me that you might have a lot of enemies, because a lot of accomplished academic researchers spend a lot of time thinking about creative and interesting solutions to big societal problems.
But in your work, you point out so many cases where those ideas that look great on paper and are published in the best academic journals just aren't that practical in the real world. And I can imagine that if I'm on the other side of that critique,
I'm a bit annoyed at John List. I think that's fair to say. On one hand,
I'm probably one of the most hated, I think, amongst a group of most hated. And on the other,
I'm probably the most unfortunately named economist in the world. Wait, you say you're unfortunately named because if I type in John List,
Wikipedia pulls up the serial killer by that name?
Is that what you mean?
Yeah, my parents didn't have great foresight.
They couldn't predict that there's going to be a guy who murders his entire family in New Jersey.
And he was pretty famous for a while, yes? It was a big case.
Oh, really famous?
I've never been able to shed that.
You type in Google John List,
unfortunately you get this really, really bad person.
I don't know.
You know, the way SEO works, theoretically,
if you keep writing books and keep appearing on podcasts,
then maybe someday John List, the academic researcher,
will overtake John List, the serial killer.
We all have lifelong ambitions, don't we?
That was the economist John List.
His new book is called The Voltage Effect.
If John sounds a bit familiar, that's because he's been one of our most frequent and fascinating guests since we started this show. If you want to hear about some of the other research he's done, I would recommend episode 353, How to Optimize Your Apology, episode 141,
How to Raise Money Without Killing a Kitten, and episode 405, Policymaking is Not a Science Yet.
That last one is where we first heard about some of the scaling issues that are featured in
List's new book. A lot of that research was conducted with his wife, the pediatric surgeon
Dana Suskind, who is about to publish her own new book. It's called Parent Nation,
Unlocking Every Child's Potential, Fulfilling Society's Promise. For the record, Suskind is not the wife who freaked out about the blue light
special in Laramie, Wyoming. Different person, in case that matters to you.
Coming up next time on Freakonomics Radio.
When a boss is a bad boss, have you ever wondered why?
There's no reason to believe that a great salesperson will be a great manager.
This relates to an old business theory known as the Peter Principle.
The Peter Principle states very simply that in any hierarchy,
an employee tends to rise to his level of incompetence.
It's a funny idea, but it also rings true.
There's new research
showing that the Peter Principle
is still alive and well.
And how do most firms
feel about this?
It's a problem that they
purposely choose to live with.
Why bad bosses are bad
and why that probably won't change.
That's next time
on Freakonomics Radio.
Until then, take care of yourself and if you can, someone else too.
Freakonomics Radio is produced by Stitcher and Renbud Radio.
This episode was produced by Mary DeDuke.
We had help this week from Jeremy Johnston and Jared Holt.
Our staff also includes Alison Craiglow, Greg Rippin,
Zach Lipinski, Ryan Kelly,
Rebecca Lee Douglas, Morgan Levy, Emma Terrell, Jasmine Klinger, Eleanor Osborne, Lyric Bowditch,
Jacob Clemente, and Alina Kullman. Our theme song is Mr. Fortune by the Hitchhikers. All the other
music was composed by Luis Guerra. If you are so inclined, we would love you to rate or review the
show on your podcast app. It is a great way
to help new listeners find it. As always, thanks for listening.
Let me clear my throat again since that Laker game's killing me. LeBron!
The Freakonomics Radio Network. The hidden side of everything.