3 Takeaways - Nobel Laureate, Daniel Kahneman: His Latest Findings on "Noise" and Flaws in Human Judgement (#27)
Episode Date: February 9, 2021
Learn about Nobel Laureate Daniel Kahneman's latest findings on "noise" and how there is more noise and flaws in human judgement than you think. Find out why you should see a doctor in the morning and go to court after lunch.
Transcript
Welcome to the Three Takeaways podcast, which features short, memorable conversations with the world's best thinkers, business leaders, writers, politicians, scientists, and other newsmakers.
Each episode ends with the three key takeaways that person has learned over their lives and their careers.
And now your host and board member of schools at Harvard, Princeton, and Columbia, Lynn Thoman.
Hi, everyone. It's Lynn Thoman. Welcome to another episode.
Today, I'm delighted to be here with Daniel Kahneman.
Danny is the world's most famous psychologist,
although I'm sure he'll be embarrassed that I'm saying that.
And he is one of the founders of the new field of behavioral science.
He won the Nobel Prize for his pioneering work on how people make decisions.
He proved that erroneous or irrational decisions are everywhere, when we judge a baseball player
or an investment or a presidential candidate, or when a doctor makes a diagnosis or when
a judge decides on a defendant's guilt or innocence.
I'm excited to learn about Danny's latest discoveries, which you're hearing about here, before they've even been published.
I am honored that Danny has chosen me and Three Takeaways as his first interview and one of the few that he plans on doing on what he calls Noise, which is also the title of his upcoming book.
His work on Noise is as important as the work that led to the Nobel Prize. Danny's going to tell us about his new insights on how our minds deceive us,
both in our everyday lives and in our professional lives.
His ideas will change the way you think about yourself,
how you make decisions, and how you look at the world.
Hi, Danny, and thanks so much for being here today.
My pleasure.
Danny, two formative experiences in your life
were growing up in Nazi Germany
and hiding from the Nazis in barns and chicken coops,
and your experience many years later
as a 20- or 21-year-old in the psychology unit
of the newly formed Israeli army.
How did those experiences shape you
and your interest in psychology?
I wasn't in Germany, actually.
I was in occupied France during World War II.
It wasn't as bad as being in Germany, although it wasn't good.
There is a story, something that happened to me as a child, which for some reason seems to have some resonance. It is the experience of, at age seven, living in Paris under curfew for the Jews and wearing a yellow star. I was playing at a friend's house and I forgot about the curfew. It was too late. And so I turned my sweater inside out and I went home,
close to home, in a place that I actually remember and revisited a few years ago, there was a German soldier walking toward me. He was wearing the black
uniform of the SS, which was the worst of the worst. We approached each other. There was very
little I could do to avoid it. And when we came close, he beckoned to me.
He called me and I approached him and he picked me up.
I was terrified that he might see inside my sweater that I had a yellow star, but he didn't.
He hugged me and he put me down and he took out his wallet and he showed me the picture of a little boy.
And he gave me some money and we went our
separate ways. I tell that story in part because there was something, perhaps my own tendencies and certainly the atmosphere in my family, a sense that people are very complicated. That incident sort of symbolized it: here was this man
who would just as soon have killed me,
but he saw me as his son, picked me up,
and the complexity of it, that impressed me very deeply.
One episode, it didn't make me a psychologist, I'm sure,
but when I think about what did make me a psychologist,
it was an abiding interest in this complexity of human nature.
And that I think I had when I was a child and it stayed with me all my life.
That was the first experience that you asked about.
The second one is I went on to study psychology.
It was pretty obvious that this was what I should be doing. I was in a unit of the Israeli army, which is the academic reserve.
That meant that our service was deferred.
Like the ROTC, we took officer training during the summers.
At the end of my BA, I went in to serve.
And I served in the infantry for a year, and then I served in the psychology unit of the Israeli army. The main thing that I did during my service: I was tasked with something that no 22-year-old should be asked to do. I was tasked with setting up an interviewing system for the recruits in the
Israeli army. I constructed an interview, which was different from the way that the interviews were being done.
It was different because it was not oriented to clinical intuition. It was oriented to the
collection of facts and to reliable ratings. So the focus was on reliability. It turns out that
interviewing system was a success. I did that in 1956. Lieutenant Kahneman wrote a report on this just before I finished my service. This interview stayed virtually unchanged for at least 55 or 60 years. I don't know, I haven't checked for the last decade,
but about 10 years ago they were still using it. And more important to me personally is that the work that I've been doing in recent years, and the book that we've finished writing,
the main recommendation that we come up with at the end of the book is very much in the spirit
of what I did as a 22-year-old setting up the interview for the Israeli army. I'm finishing my career
very much with the same idea with which I started it.
Can you tell us more specifically, if it's not classified, what a couple of the screening criteria are for the Israeli military that were so important that they used them for 50-plus years?
The interview was supposed to assess the recruits' personality and to what extent they were suitable for combat duty.
There were some traits.
If you drew up a list of the important traits,
you would come up with essentially the same list that I did,
whether somebody is responsible and reliable and sociable.
These were all male recruits,
so there was one trait that was called masculine pride,
effectively macho, and it was a list of six traits.
The idea of the interview, which sounds very elementary but is actually not typical of interviews, was that people were supposed to ask factual questions
about each of these topics
and then produce a rating for the topic
before moving on to the next
so that the ratings were as independent
from each other as possible.
So that was a feature.
And I can tell you more about that
because there was another idea
which permeates my current work.
What we did in the interview: I was effectively trying to suppress clinical intuition. I was asking the interviewers, who were 20 years old, not to form an impression of the soldier, but to do something completely different and actually a lot more boring: to ask factual questions instead of applying their clinical intuition to try to do the same thing, to figure out if the recruit was suitable for combat duty.
They complained that I was turning them into robots.
And I offered a compromise.
And the compromise was that they should do exactly as I had told them,
that is, collect the six ratings and so on.
But then they could
close their eyes and have an intuition. How good is this recruit going to be as a combat soldier?
And it turned out that whereas the previous clinical interview had very little validity,
the intuitive question at the end of the interview, much to my surprise, was actually
very useful. The conclusion from that, which I've been thinking about a lot in very recent years,
has been that you don't want to suppress intuition, that you want to delay it. You want
to delay it until you have the facts, because if you have intuitions prematurely, they're likely to be wrong.
That's both my experience in the Army and what I learned from it.
I can see the tie-in to your current work on noise.
Before we talk about noise and your most recent discoveries, can you tell us briefly what some of your most important findings have been?
You mean over my whole career?
Yes. You know, it's a very long career. Better that you summarize what you feel are your most important findings than I do.
Clearly the most important work I ever did, I did in my collaboration with my friend and colleague Amos Tversky, starting in 1969 and going on for about 12 years. And we had a very good run. We studied intuitive
thinking about uncertain problems and about prediction problems. A lot of it was inspired
by my experiences in the army. And we also studied decision-making under uncertainty.
And it turned out that the work was well received.
Amos died in 1996.
So I got the Nobel Prize on our joint work, not anything that I did on my own.
That's clearly the most important thing.
Since then, I've done quite a few other things. I've studied
how people think about what might have been, counterfactual thinking. I've done a lot of
work on that. I studied well-being for a number of years and how people remember their experiences
and how people score their experiences. And then I spent some years writing a book, Thinking, Fast and Slow, which summarized a lot of my thinking up to then.
And that probably in some ways, that work has crystallized my thinking.
I was quite depressed when I finished that book, but it turned out to have been a success.
Danny, you are so humble.
What are the most important biases that you discovered?
That's not something that we discovered, although it fits very well. A very important bias is overconfidence. That is, people think they know when they don't know. The sense of confidence that we have, this idea that we understand the world, exists because we can tell a good story about it. Our confidence stems from the quality of the story and not from any real connection with the real world. This is a very important source of problems.
There are biases that we are very susceptible to. We tend to jump to conclusions very easily.
And then we're slow changing our minds.
And it's that combination of jumping to a conclusion and then holding on to those conclusions quite stubbornly
and fitting everything that happens into the image that you already have.
That turns out to be a very important bias.
And to link this to something that is happening now: people in this country are living in very different worlds, believing in different facts, as if there were no shared reality.
This, to a psychologist, is not really surprising. People do not believe what they believe because they have reasons or arguments for believing it.
We tend to believe what people that we trust and love believe and what they tell us to believe.
And this is true for all of us. Left and right, pro-Trump or against Trump, this is how our
beliefs come to be. This is why arguments have very little effect, because we do not believe what we believe because of arguments.
We believe what we believe for completely different reasons.
And then we fit our understanding of new facts in the world to what we already believe.
Your latest discoveries are on what you call noise, which is also the title of your upcoming book. What is noise?
Noise is unreliability in decision-making, and the best way to explain it is by telling the story of
how we came to study it. I was consulting with an insurance company. The idea came up, I forget exactly the context in which it came up, to see whether their underwriters actually operated in a uniform way.
In order to do that, they constructed cases, very realistic cases with a lot of information in them. They had about 50 underwriters who put a dollar value on these cases, what they considered the appropriate premium.
What happened that was interesting
was I talked to executives in the company
and I asked them a question,
which it turns out you can ask almost anybody.
In a well-run company,
when people are making dollar judgments,
you don't expect people to agree perfectly.
But what is a reasonable range?
What is the difference in percentage that you expect if two random people,
two random professionals evaluate the same case?
And that turns out to have a very common answer.
People think about 10% is tolerable and reasonable. The pessimists
say 15%. But the true answer for those underwriters was 50%, 5-0. There was a lot less agreement
than anybody expected. And it turned out that that was true also among claims adjusters, and it turns out that it's true as well wherever people make judgments.
Professional judgment, when it's really judgment and not computation, turns out to be extremely noisy, in the sense that different people with the same training and in the same organization, faced with the same problem, give very different answers.
And what makes the problem interesting is that the executives
and the people themselves are barely aware that this problem exists.
In the case of the insurance company, it came as a complete surprise.
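The arithmetic behind that 50% figure can be made concrete. A minimal Python sketch, assuming the measure is the relative difference between two randomly chosen judgments of the same case; the premium figures below are invented for illustration, not the insurer's data:

```python
import itertools
import statistics

def noise_index(judgments):
    """Average relative difference between every pair of judgments,
    expressed as a fraction of each pair's mean. This approximates
    the question put to the executives: if two random underwriters
    price the same case, how far apart do we expect them to be?"""
    pairs = itertools.combinations(judgments, 2)
    ratios = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return statistics.mean(ratios)

# Hypothetical premiums (dollars) from five underwriters for one case:
premiums = [9500, 13000, 16000, 21000, 10500]
print(f"Expected disagreement: {noise_index(premiums):.0%}")  # -> 39%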
I'm sure it must have been shocking for the insurance company.
It was.
But people get shocks of that kind and then they forget them.
You can be shocked and this is very bad news,
but unless you have an easy solution that you can apply, then you forget about the shock.
Is there noise in other fields, like criminal justice or medicine?
It turns out that the best established demonstration of noise was in criminal justice.
One of my co-authors is Cass Sunstein, who is a famous jurist. And he knew the story of Judge Frankel, a famous judge in the 1960s and 70s, who railed against the fact that sentencing is a lottery. For a defendant with a particular case coming before a judge, there is no telling what the sentence will be, because sentences are hugely variable.
And indeed, they are stunningly variable.
So the idea that there is noise had been discovered before. You can demonstrate noise by showing judges the same cases and having them suggest sentences for those cases, and the variability is enormous. In the experiment that we looked at, for fairly severe crimes where the average sentence would be seven years in prison, the variability was plus or minus three years.
That lottery is intolerable in the sense of being a real injustice, and it's not even the worst.
People who ask for asylum face a lottery that is even worse,
where depending on the judge that they encounter,
their chances of being admitted could be 10% or 90%. A complete lottery. And you have lotteries
of that kind in the patent system. You have lotteries of that kind in the foster care system.
You have lotteries of that kind in hiring. You have lotteries of that kind in performance
evaluation. And in medicine, there is a lot of noise. Our conclusion was, wherever there is
judgment, there is noise, and there is more of it than you think. That's our motto for the book.
That's how the book began.
Many people would say, sure, there's going to be a lottery in an area like psychiatry. But what about what people consider harder medicine, say, pathologists or oncologists or radiologists?
It turns out that radiologists are very noisy. It's a real problem in radiology. You can do what we call a noise audit with radiologists. That's very easy to do. You present the same x-rays to
multiple radiologists. The level of disagreement is really shockingly high. It's not that they
don't know anything, they do. But if they disagree 15 or 20% of the time, that's a high rate of
disagreement. And furthermore, they disagree with themselves. You can show the same radiologist the same x-ray on multiple occasions, and they are not going to see the same thing in the x-ray. So there is a lot of noise in radiology, and there is certainly noise in oncology. Of course, there would not be noise in deciding whether somebody is a diabetic, because there is a test and there is a number, and if you're beyond that number, you're a diabetic. But there is a lot of noise in medicine.
I was so surprised that the same doctor would give a different diagnosis or come to a different conclusion based on seeing the same data a second time.
The data are complex. When you're looking at an image, there is just a lot to see. You cannot
see everything. Your attention will be drawn to something, and your interpretation will depend on
where your attention went. And if your attention goes elsewhere,
the next time you see it,
you could easily come up with a different conclusion.
Now, when it comes to judges,
we know some of the factors that affect them.
Mood affects them.
Whether their football team won the day before or not affects the sentences that they give.
Temperature affects them.
You're better off being sentenced on a
cool day than on a very hot day, and so on. There is a lot of noise. That's what we found.
How does mood make a difference?
Mood, it turns out, makes a difference in two different ways. If you're a judge,
it makes a difference in that when you're in a bad mood, you tend to be more severe and harsher. That's the obvious thing.
But mood also affects the way that people think.
And there, its story is much more complicated.
People tend to think more superficially when they're in a good mood.
And they are more susceptible to their own intuitions.
There is something that we call sensitivity to bullshit. It's a technical term. And people are more sensitive to bullshit, more responsive to bullshit, and more inclined to believe nonsense when they're in a good mood than when they're in a bad mood.
When we're in a bad mood, we tend to be more critical.
Because mood varies within the individual over time, their responses to stimuli and to tasks and to cases will vary over time as well.
You said the time of day makes a difference. What difference does morning make, versus after lunch or late in the evening?
Well, that's been studied with judges and it's been studied with physicians. Those are the studies that I know. So there is clearly a difference in the prescriptions
that physicians write early in the day and late in the day. They are more inclined to prescribe
painkillers and antibiotics late in the day. And you can see why: it's sort of the lazy thing to do. When you're tired, prescribing antibiotics is the easy thing to do, and if you're asked for painkillers, it's harder to resist and not worth the effort. So you prescribe more painkillers.
Those are differences.
Judges, the suggestion is that they are in a better mood after they eat than when they are hungry.
You're better off with a well-fed judge who is happy and in a good mood.
So we should schedule our doctor's appointments in the morning and our appointments in court after lunch?
After lunch, precisely.
How about the sequence of events? Does it matter what comes before?
Yes. One of the characteristics of human judgment and human thinking is a search for coherence.
We tell ourselves stories.
We formulate stories.
They come to mind.
And the stories tend to be much simpler than reality.
But stories also tend to be internally consistent.
So when you begin a story, and it doesn't take much information to get you started, you complete it in the same spirit.
There have been interesting studies of interviewing in that context.
So, for example, any job interview starts with a few minutes of getting acquainted.
But it turns out that those three minutes are critical because the interviewer forms an impression of the interviewee during those three minutes.
And a lot of the interview is spent not actually collecting information, but the interviewer is working to confirm her impressions.
She created initial impressions, and now she's no longer collecting information impartially.
She's primarily collecting confirming information.
You ask different questions of people depending on whether you think that they are an introvert or an extrovert.
And whether you think highly of them or poorly of them.
You do not ask the same questions.
In that way, first impressions turn out to be extraordinarily powerful.
How about social influence? Does that change people's judgment?
Oh, yes, of course. There is a serious question about the value, positive or negative,
of discussion and of discussing teams and discussing issues in teams.
And there are pros and cons.
Teams that do well are made up of people who will not necessarily think alike,
but are able to recognize a strong argument when they see one.
If one of the members of a team knows something, and the others are listening, looking for it, and willing to accept it, then the team will be affected.
In many situations, teams actually create noise in the sense that teams polarize.
That is, if you have a team with a predominant feeling about an issue,
the fact that they all feel that way makes them feel it more strongly, be more convinced, and believe what they say more strongly than they did before. We know a fair amount, actually, about the effects of discussion.
And some ways of organizing discussions are better than others.
What ways are better than others?
In general, when a topic is going to be discussed, we normally elicit facts. There is an agenda. Somebody introduces the problem, and then people discuss it, and they form an opinion after the discussion. The alternative is in many cases a lot harder, and people won't like doing it. But if you could collect opinions anonymously before the discussion, you would get a much better discussion.
Because, and this is critical, people tend to be influenced by others.
So if they make their judgments independently, you get more variability in their thinking, but that's good variability.
When you have witnesses, you would not want witnesses to have the opportunity to talk to
each other. In discussing topics, to some extent, it's like having witnesses talk to each other
before they express themselves. So you want the witnesses to give you their testimony first, and then to discuss and try to reach consensus. It's a different way of approaching the problem.
Does it matter what the first person to talk says?
As with coherence, first impressions matter.
So the first person is going to have
a disproportionate influence,
especially if that person is already influential.
It creates a strong force: the next person is likely to agree, or to tone down disagreement, because people want to be friendly.
You want to create a situation in the group that facilitates disagreement and at the same
time keeps the disagreement muted and civil and pleasant.
There are conflicting pressures.
Danny, I love your example about social pressures and music, where you invert the popular music list.
It's a famous experiment run by Matthew Salganik, a professor of sociology at Princeton, with his mentor, Duncan Watts.
They had a website where people could freely download,
I think the list was of 72 songs.
Randomly, they divided the people who logged on to that website,
I think, into nine groups.
You got assigned completely at random to one of these nine.
So one of these was
a control group, where people could simply download any song. In the others, when people came to download a song, they would see how many people had already downloaded it. In effect, they could see the popularity of the songs. You had eight different groups that started out the same, populated at random,
but they developed very different cultures.
So the songs that were the favorites in one group were mediocre, or in some cases poorly regarded, in other groups.
Actually, the disagreement among the groups was stunning.
The initial responses of a few people had a very big effect on what happened later.
What happened when the popularity list was inverted?
In one sort of fiendish experiment, they presented people with information that was inverted. People logged on and found the least popular songs presented as the most popular, and they let the group develop from that point on. And it turned out that you can actually have a very big effect.
People are not completely malleable.
In particular, people are inclined to agree more on the songs that are very bad.
The bad songs tend to be recognized as bad.
Which songs become hits, that is much chancier. Almost any song from middling up to the top can become a hit, if just the first people are positive about it.
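The mechanism behind those diverging worlds is cumulative advantage: the download counts people see feed back into later choices. A minimal simulation sketch of that feedback loop; the appeal scores, user counts, and social weight are invented, not the study's data:

```python
import random

def run_world(qualities, n_users=2000, social_weight=5.0, rng=None):
    """One 'world': each arriving user picks a song with odds driven by
    intrinsic appeal plus the download count displayed so far."""
    rng = rng or random.Random()
    downloads = [0] * len(qualities)
    for _ in range(n_users):
        weights = [q + social_weight * d for q, d in zip(qualities, downloads)]
        choice = rng.choices(range(len(qualities)), weights=weights)[0]
        downloads[choice] += 1
    return downloads

# Invented 'true appeal' scores for 10 songs; every world sees the same list.
master = random.Random(42)
qualities = [master.uniform(1, 10) for _ in range(10)]

# Eight worlds start identically, but early random choices get amplified,
# so different songs end up on top in different worlds.
for w in range(8):
    downloads = run_world(qualities, rng=random.Random(w))
    top = max(range(len(qualities)), key=downloads.__getitem__)
    print(f"world {w}: top song #{top} with {downloads[top]} downloads")
```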
Danny, who should we trust to make decisions? Should we trust people who sound intelligent
and can put together coherent rationales for their judgments,
like columnists, pundits on TV, or many CEOs who sound very confident?
We tend to take people's confidence at face value. We shouldn't, but it's very difficult
to resist. So clearly, if there are other cues to competence, those are the cues we should follow. But in many domains of life, people are experts only because other people believe they are. It's not that they have met any criteria. So you can be a pundit and predict the future, but you don't have to be accurate in your predictions. Nobody's keeping score. It's just whether you present your predictions in a way that is charming and
interesting and confident. There are many respect experts around that we follow. Now, these respect
experts, it's not completely random. Those are intelligent people, but in many cases,
they operate in a domain where reality imposes very few constraints. We think of the example of astrologers in the
Middle Ages. Some astrologers, I'm sure, were considered much better than others. They were
more respected than others. And I'm sure they were more intelligent and more articulate and
more eloquent. So there was a reason why they were picked as respect experts. But of course,
they didn't know anything, because astrology is useless. That problem still exists in many situations.
How do we identify people with the best judgment? Are there traits that people with good judgment have?
Yes, some of this is obvious. The first thing is to know what you're talking about. If people are operating in a domain, they'd better know the domain.
In some domains, you can actually evaluate expertise.
How many operations have the surgeon done
and how successful were they?
You don't need to ask more.
So skill is the first thing that you should be looking for.
But often we don't know skill.
And there, the next thing is intelligence and intellectual temperament.
What I mean by intellectual temperament is there is a characteristic that people with very good judgment tend to have.
They're called actively open-minded.
They're eager to have their mind changed.
They're eager to learn, and they're not locked in to their existing positions.
They're less coherent than others.
That makes them much better judges. There are things we know about people with very good judgment.
Danny, you found that there is lots of noise, and it sounds like pretty much everywhere.
You recommend what you call a
noise audit. Can you tell us what a noise audit showed you in forensic science, fingerprint
analysis? Because that seems like a field based on hard science, where we assume that experts would agree.
On fingerprinting, experts do agree most of the time, but they disagree far more than
the public believes. We tend to think that fingerprinting is sort of infallible,
but actually it is fallible. And people disagree with each other to some extent,
and they disagree with themselves to some extent, so that when you show the same print to the same examiner
a few months apart without their knowing and they're seeing it the second time, they're
not necessarily going to say the same thing.
One thing I should add, you know, if we're talking about fingerprint examiners, there
is one kind of mistake they do not make.
They do not say that there is a match when they're not sure, because they know
the possible consequences of a false match. But whether they identify a match or call
their examination inconclusive, on that there is a lot of noise.
And what did you find when you did a noise audit of fingerprint examiners?
The FBI has done it and other people have done it. There's less than 1% mistakes, I think. That's the best estimate. In one sense,
that's wonderful. In the other sense, you would want more. What is sort of interesting is that
people who are in that business, they really prefer to think that they're infallible.
And it's very difficult to convince them otherwise.
How can we improve decision-making?
How do you see what you call decision hygiene?
Most of my professional life, I've studied psychological biases.
Those are systematic errors that people make.
Noise is really a very different thing. It's not an error that people share. When people agree on the same mistake, that's a bias. But noise arises when people are all over the place, so it's a completely different type of error. But it is a type of error. There have been many attempts to de-bias,
to correct for biases,
with middling success, I would say. But the idea of de-biasing is that you know what a bias is,
and you try to avoid it, or you try to eliminate it after the fact. What we are thinking about, what we call decision hygiene, are steps that you can take in making decisions that will make it
less likely that you'll make a mistake without knowing
what the mistake is that you will avoid. And this is what hygiene is. That is, when we wash our hands, we do not know which infection we are avoiding. Maybe during the COVID period we know what we're trying to avoid, but most of the time we just wash our hands and we avoid all kinds of germs that we know nothing about.
To the extent that we're successful, we will never know what infections were avoided.
And that's the idea of looking for procedures in decision making that have a similar kind of effect.
Can you give some examples of what would be included in decision hygiene?
We advise a way of thinking about problems,
which is very similar to what I was saying earlier
about the interview in the Israeli army,
which is to break up a problem into pieces or elements
that you can evaluate in a fact-based way
independently of each other.
So breaking up problems is one general approach to decision hygiene. Keeping parts of problems independent of each other, and keeping individual judges who are cooperating as a team independent of each other, those are hygiene recommendations, so that people don't influence each other.
Delaying intuition is a hygiene recommendation.
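As a concrete illustration of those recommendations, here is a minimal sketch of a structured evaluation in the spirit of the army interview described earlier. The 1-5 scale, the equal-weight average, and the trait names beyond the few mentioned in the conversation are invented for illustration:

```python
from statistics import mean

# Traits rated one at a time from factual questions. "Responsible",
# "reliable", "sociable", and "masculine pride" come up in the conversation;
# the rest are stand-ins to round out the six.
TRAITS = ["responsibility", "reliability", "sociability",
          "punctuality", "independence", "masculine_pride"]

def structured_evaluation(fact_based_ratings, delayed_intuition):
    """Decision hygiene in miniature: each trait is rated independently,
    and a single holistic intuition is recorded only after all the facts
    are in. Mechanical aggregation (a plain average here) replaces an
    early global impression."""
    assert set(fact_based_ratings) == set(TRAITS), "rate every trait first"
    scores = list(fact_based_ratings.values()) + [delayed_intuition]
    return mean(scores)

# Hypothetical ratings for one recruit on a 1-5 scale:
ratings = dict(zip(TRAITS, [4, 5, 3, 4, 2, 3]))
print(structured_evaluation(ratings, delayed_intuition=4))  # -> about 3.57
```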
Most of us live with the unquestioned belief that the world looks as it does because that's the way it is.
And we assume that other people view the world much the way that we do.
But is that true?
We have that belief whether it's true or not.
Quite often, it's an illusion.
We feel much more confident that our perception of the world is correct
than we have any business being.
We feel more confident because we have that sense
that the world is just as it appears to be.
Because of that, we think that other people see
the world as we do, but if they don't, something is wrong with them, because we see the world as
it is. This illusion of understanding the world is basic to the way we live, but in some ways,
it's quite costly and pernicious, too.
How do you see U.S. politics and polarization?
I don't have anything novel to say
about it. It is a very extreme example of the resistance of people to facts that threaten
their prior views. You now have the population in the U.S. divided into camps that have completely different beliefs. You cannot even imagine how you
would change minds on this. Now, each of us is so convinced that we are in the right that our only
question is how to convince the other side. But that exercise of how to convince the other side,
we come up with very little. This is a very extreme example of the search for coherent explanations and coherent stories
sort of gone pathological and influenced by this echo chamber of social media, which means
also that people select the facts to which they're exposed.
This is very different from the way it was when there were three television channels
50 or 60 years ago.
And there was a fairness doctrine, I think, where they were supposed to present several sides of each issue.
This is gone.
Now you can choose to listen to a channel
or to be on media
and to hear what you want to hear. One of the worst things I heard
in that context, I think it was in the film The Social Dilemma, was that when you Google the words climate change in different areas of the country, you're not going to get the same response. There are areas of the U.S. where, when you look for climate change, "hoax" comes up very high. People hear what they want to hear.
Before I ask you for your three takeaways, is there anything else you'd like to discuss that you haven't touched upon already?
No, we've covered so many things.
What are the three takeaways that you'd like to leave the audience with?
Those are things I already said.
I think my first takeaway from the work that I've been doing recently: there is noise.
My second takeaway: there is more of it than people think. It's hard to detect and it's hard to see.
And the third: there are things we can do to improve our judgment and decision-making that we haven't tried before, or that we haven't thought about before in quite the same way as you do when noise is on your mind.
Danny, thank you for our
conversation today and for sharing your discoveries with me on Three Takeaways as your first interview
on noise and one of the few interviews you plan on doing. Your discoveries on noise will change
how I think about myself, how I look at the world,
and how I make decisions. Your book, Noise, is also terrific. I really enjoyed it. Thank you again.
Thank you very much.
If you enjoyed today's episode and would like to receive the show notes or get fresh weekly
episodes, be sure to sign up for our newsletter at 3takeaways.com or follow us on Instagram, Twitter, and Facebook.
Note that 3takeaways.com is with the number 3.
Three is not spelled out.
See you soon at 3takeaways.com.