The Ryan Hanley Show - RHS 165 - AI, Data & When to Ask the Right Questions
Episode Date: December 15, 2022

Become a Master of the Close: https://masteroftheclose.com

In this episode of The Ryan Hanley Show, we're joined by Helen and Dave Edwards, founders of Sonder Studios and authors of Make Better Decisions: How to Improve Your Decision-Making in the Digital Age, an essential guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI.

Do not miss this incredible deep dive into the next generation of artificial intelligence and the insurance industry…

Episode Highlights:

- Dave shares that Sonder Studios has been operating since 2019, and it began with the purpose of truly opening people's minds to the depth of humanity in this digital age. (4:24)
- Helen explains that data has value once people or machines understand it, since humans think in one to four dimensions, but machines can think in infinite dimensions. (10:29)
- Dave discusses that for the value of data to be translated to people, we humans must first understand what it means. (13:37)
- Helen explains how to determine when to ask the right questions when presenting people with a single data item that they disagree with. (18:21)
- Helen explains that they wrote Make Better Decisions because decision-making with data is very nuanced, and one of the first things to look at is how our feelings are processing information. (28:09)
- Helen believes it's important to be able to calibrate your accuracy depending on how well you understand something. (37:37)
- Dave mentions that one of the nudges in their book is about recognizing who the humans are in the data and understanding what the data representation is. (43:25)
- Dave explains that success has several layers, and that's where people get stuck, because they don't know what they're asking of the data or which experience to depend on. (48:14)
- Dave mentions that they named the book Make Better Decisions because there isn't one optimizable solution, heuristic, procedure, or six-step process for making a smart decision; instead, it's a practice. (1:00:07)

Key Quotes:

"We sort of started Sonder Studios with the mission of really wanting to open people's minds of the richness of humans, while we're in this digital age, you know, that it's not us being supplanted. It is actually, where's the beauty? And where are the wonderful parts of being human? And how do we help people understand that?" - Dave Edwards

"It's a good idea to have a good understanding of the state of your own knowledge. And that being able to calibrate your accuracy with how well you understand something is actually a pretty good thing." - Helen Edwards

"Our premise in our book and our premise around decision making is that there isn't one optimizable answer, there isn't one heuristic to follow, there isn't one process to follow, there isn't a six-step process to make a good decision. We believe this is truly a practice, which is why we have 50 nudges that help you get better. That's why we call it Make Better Decisions." - Dave Edwards

Resources Mentioned:

- Helen Edwards LinkedIn
- Dave Edwards LinkedIn
- Sonder Studios
- Book: Make Better Decisions: How to Improve Your Decision-Making in the Digital Age
- Finding Peak
- Reach out to Ryan Hanley

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
In a crude laboratory in the basement of his home.
Hello everyone and welcome back to the show.
Today we have an absolutely tremendous episode for you.
It is a conversation with Dave and Helen Edwards,
the authors of Make Better Decisions, a tremendous
new book with the subtitle of How to Improve Your Decision Making in the Digital Age.
This is really a fantastic conversation. It's one of those conversations that it's why I love
doing these podcasts. You get to meet new people that you didn't know
who are doing awesome things with great ideas. And we talk a lot about how to make great decisions,
how to integrate those decisions in the massive amount of data that we have. What is the value
of data? When should we use data? When should we go with intuition and instinct as leaders?
This is a fantastic conversation. I took quite literally two and a half, three full pages of notes during this conversation.
I could have talked to these guys all day.
And I have the book.
I'm reading the book.
It's wonderful.
It is very much something that is worth picking up.
You can get Make Better Decisions on Amazon or anywhere that books are sold.
You can always go
to the show notes for the page and find the book link there if you want. Wherever you consume books,
you can find this book. I highly recommend it. I really like it. I think you're going to know
exactly what I'm talking about after you have a chance to listen to Dave and Helen and their
thoughts on how to make better decisions. It's a tremendous conversation. Before we get there, guys, make sure that you are subscribed to Finding Peak. Go to
findingpeak.com. It is my new Substack, free content coming out every week around peak
performance in business, in life, and in insurance, specifically tailored to us, the insurance
industry. We do a wide range of topics, everything from personal development to leadership
development, development in business, our relationships, and also deep dives into marketing,
into lead generation, into digital sales, into what we're doing at Rogue Risk to be a human
optimized digital agency. Very much the model that I believe is the future of the
insurance industry, the future of the independent agency. If you want to learn how we're doing it,
go to findingpeak.com, subscribe, get the emails. And if you want the deep dives,
you can pay for that, which is like seven bucks a month. So appreciate you guys for listening to this show. As always, this is a labor of love, and I just love that you guys give me your time,
so I appreciate you.
With all that said, it is time to get on to our conversation with Dave and Helen Edwards,
co-founders of Sonder Studios and the authors of Make Better Decisions.
Awesome. Well, I'm excited to talk to you guys.
Thank you. Same here.
Yeah. I, um,
I went through and looked at a lot of what you're doing and I think that it's
incredibly relevant to particularly the audience who listens to this show,
which is primarily insurance professionals from up and down the spectrum.
So our audience is, you know, everything from one-person
startup agencies in small-town, wherever, America, to executives at the highest level at
corporations in Hartford and all the different places, Des Moines and Columbus and
all the places where insurance companies operate, primarily in the U.S. So just so you know,
who we're talking to today, but normally I like to get right into the show. So I'd love if you guys
maybe start with your origin story. Obviously I'd done some background, but I'd love to hear
kind of, you know, every good superhero duo has an origin story.
And maybe we start there and we dig into some of the stuff that I think is incredibly relevant to what's happening.
Okay.
Do we just launch in?
Yeah.
Yeah, rock and roll.
We're talking.
Sounds good.
Awesome.
Well, thanks for having us.
So I guess we've been working together for more than a decade.
I've lost track of the number of years.
We've started multiple companies.
Some have worked out.
Some haven't, as happens to everyone.
Sonder Studios has been around since 2019-ish.
And it continues work that we've done for several years.
We started off working really closely around thinking about
how do humans and machines come together
and what's happening to us as individuals,
as we are digitized, as our behavior is being monitored,
as our communications are being managed,
as the algorithms are making decisions for us
and pointing things out in the world.
As we've been put into finer and finer grained buckets and being ultra-personalized, but in a way that we can't
interrogate and understand, because we can't see it. And we sort of started Sonder Studios with the
mission of really wanting to open people's minds of the richness of humans while we're in this
digital age, you know, that it's not us being supplanted.
It is actually, where's the beauty and where's the wonderful parts of being human.
And how do we help people understand that? Yeah. I mean, when we kind of got
into this, the zeitgeist, if you like, was very much an either-or: it's
either machines or humans, and the machines are going to rule us all. And the more we looked into it,
the more we did the research, the more we talked to people, the less we were convinced of that
story. And so this is very much a, how do we do both? Yeah. And we spent time with organizations
that sit there and say, we've spent all of this money on these huge data projects and putting all
kinds of AI into organizations to make more predictive analytics. And it's not really working or people aren't really using it or they feel like decisions
are harder than they would have been before with all of this data.
What do we do about this?
And that was the genesis of the book, Make Better Decisions, was helping people really
understand the core of who we are as humans and who we are as decision makers, as individuals,
who we are as team decision makers, and how we think about making decisions with data and with AI. And that book accompanies
workshops that we do with large organizations. And we also have started up working in complex
problem solving, which is an interesting area of thinking about complexity. And how do you think
about solving complex problems in a way that it's quite different from simple or complicated problems?
There's lots to unpack there, which is awesome. That makes my job very easy. So I'll give you
some context to some of the issues that we're facing specifically in the insurance industry.
And then I think it's going to be highly relevant to what you guys do. And I think we'll have an awesome conversation
here. So, you know, I actually own an independent insurance agency called Rogue Risk.
One of the very first things that I wrote down was the term human optimized. And what I meant by that was not necessarily going all the way to an
AI/ML situation. However, what I realized throughout my career, having done this for 17 years and
spending a lot of time on the traditional side is that the all human version of our business was dying. There are plenty of boomers that are holding on
for dear life to the paper file cabinet, very human, all human version of this business.
And they've been highly successful in that method, but we are rapidly changing into
an ecosystem that most industries have already moved into, which is this mixed up,
mashed up, you know, what is the value of data? What do we use? What data actually allows us to
have better outcomes? You know, how do we capture it? Who owns it? You know, I mean, there's all
these crazy decisions happening. And kind of the premise of my agency was that there are moments that add value and there are
moments that don't. And I want the humans only spending time or spending the most time possible
in the moments that add value and have systems, processes, use data, feedbacks. And eventually,
I think we get to a place where we're using some form of AI. I've been playing around with OpenAI a lot lately
to handle those processes that don't add value —
the ones the humans don't add value to, right?
So you waste a lot of time, a lot of energy, resources,
brain cycles throughout the day doing stuff
as a straight human, as a full human,
not necessarily, that wasn't a comment
on sexual orientation.
As just a human, you lose a lot of time and value and energy, just doing all these things
that don't matter. So where do you match those up? Where do you find value,
what actually is value — these are enormous questions in our industry. I mean,
I think something like 80% of the COBOL programmers left in the world are employed in the insurance industry in the United States. So it's this snap forward of technology. And now we're having these types of conversations. And I don't think anybody has an answer, and no one is doing it well. So, you know, kind of unpacking what you said, and maybe one place that I'd like to start,
just because it's an enormous buzz term in most industries, but certainly in ours,
is data.
And frankly, the two questions that I wrote down related to them were: can we have
too much data? And what does that
look like? And does data even have value? And this is a conversation I've had multiple times
in this podcast. Is it the data that has value or is it what you bring out of the data? What
does all that mean? Let's start with the actual nuts and bolts and try to get to how we use it to make better decisions as we go.
Data definitely has value once people understand it.
Or machines understand it.
And let's start with that.
Why do we even want AI in the first place?
And it's because humans think in one, two, three, four, lots and lots,
and machines can think in unlimited dimensions.
And this ability for machines to take data that in some cases is really quite alien
or, you know, seemingly inhuman — collected below our conscious recognition: eye tracking, mouse clicks, and things like that —
the machines can find patterns in that at enormous dimensions.
There's really no practical limit
other than compute power and cost. But the problem with that is that eventually,
for most situations, a human has to be able to justify
how they use that prediction from a machine.
So if the prediction from the machine is decoupled
from the human level decision-making, which is
what you'd expect in most human facing products, then we need to have accountability and
responsibility. And we need some sort of justification, which generally means some
kind of causality — not just a correlation, which is what the machines are good at. And humans
come in because it's only us that can really put the causation in and put the
justification in and say: yeah, the machine says this, but we're going
to do this anyway, or we're going to follow what the machine says, and this is why. And that level
of responsibility and accountability that sits only with humans for now, the danger is, of course,
that we fail to recognize that. And that's really what the sort of AI ethicists work on,
is how do you stitch together hidden bias in data sets and make the right decision on top of that?
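To make the dimensionality point concrete: a machine can happily find clusters in fifty dimensions at once, while a human needs the result projected down to two before any story can be told. A minimal sketch, assuming scikit-learn and synthetic data (nothing here comes from the conversation):

```python
# Sketch: machines find structure in many dimensions; humans need a
# low-dimensional projection before they can tell a story about it.
# Assumes numpy and scikit-learn are installed; data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 1,200 hypothetical "customers", each described by 50 features
# (clicks, dwell times, etc.), drawn from three hidden groups.
centers = rng.normal(size=(3, 50))
X = np.vstack([c + 0.5 * rng.normal(size=(400, 50)) for c in centers])

# The machine clusters comfortably in all 50 dimensions at once.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# A human can't look at 50 dimensions, so project to 2 for the "story".
coords_2d = PCA(n_components=2).fit_transform(X)
print(coords_2d.shape)  # (1200, 2) -- something a person can actually plot
```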
So I can see Dave's itching to jump in here.
I would add that what matters with humans is that data can be utterly overwhelming.
That high-dimensional space is completely beyond us — it's like trying to apply quantum theory to
something. It's just not intuitive. And people make mistakes on that basis and become overwhelmed.
So I think a lot of the dichotomy that we hear around — well, is data of
value? Is it too big? Is it this or that? — is actually really more about how humans tackle overwhelming amounts of data. That's really what our book is about,
is ways to not be overwhelmed and to recognize when human cognitive biases work against you
in your judgment and decision-making. I think that the distinction for me is that data has
value, or at least hypothetical value, right? There can obviously be data that has zero
value, I guess. But the challenge is that in order for that value to translate to us, we
humans have to know what it means. And we generally communicate what one thing means when
we're communicating with each other by telling stories. It could be a simple story. Here's the story for
why this is the primary customer target. Here's the story for why this is the right insurance
product for you. Here's the story of the US dollar, which is essentially a story.
The challenge is that data doesn't tell stories. We have to tell the stories with the data. And that's a gap that is,
I think, misunderstood and easily overlooked because people are used to seeing these
great dashboards. You log in and you look at your Tableau and it's got all kinds of
colors and lines and things. And well, doesn't that tell you everything you need? Well, no,
because it's not telling you any sort of story over time. It's not telling you any cause and effect. It's not
applying any form of context that we understand naturally, because we've evolved to be able to
communicate with each other using stories. But data is a really recent addition into this whole
concept. And we actually just don't look at data,
even when it's presented in a two by two, in a two dimensional space, we don't naturally
know and understand and agree upon the story that's there. So we have to translate it.
That's a difficult thing for a lot of people. One of my most interesting takeaways in the move from being a foot soldier in a company to being
a leader was how differently the same piece of information could be interpreted by a group of
people, right? You present a stat on a screen to a group of 10 leaders in an executive forum,
and the feedback you get from the angles that everyone slices
that singular piece of data up is incredible.
We recently did, so we were acquired back in April by a larger holding company.
And I'm now on the executive leadership team of that holding company.
And, you know, so all the division
leaders get together — there's 17 of us total — and we're walking through different
departments: hey, this result, and we're seeing this, and
our variance is off here, and why do we think that might be happening? And like you said, without a story to the data, the why of something,
I mean, it is just personal context, filters, biases, experience, it gets passed through all
these different things. And what comes out the other side is like, you almost start to think
which one of us is the crazy one?
Like, we're all staring at the same piece of data, yet seemingly seeing completely
different pictures. And I think that's where I sometimes get lost in my own leadership is,
how much do I trust the data? And how much do I trust my gut? And what does that look like?
Where is, you know, one of the things I wrote down during your kind of introduction or origin story was, you know, where's the nuance?
Where do we, how do we understand nuance in a data rich world? Or something that scares me,
mostly because I probably read too much, is I'm a big fan of Nassim Nicholas Taleb. And right now I'm plowing through his epic
Antifragile. I don't know if you've read that book, but he talks about black swans and, you
know, the triad or whatever. And I always think to myself, when I look at data and I think patterns
and I think pattern recognition and all these kinds of things: are we creating fragility in our business? Because pattern recognition essentially starts to carve out black swan events — this massive thing coming that maybe our gut as humans and experience could possibly see,
not always, but that a pattern-recognized data set pushing everything to means and averages
and giving you variances tends to carve off. And is that a concern or something we have to negotiate?
Well, there's a couple of things that you raised there. First, I'll go back to the very first
thing you pulled up, which is essentially analysis versus gut feeling. That's where we start
our leadership workshops, with exactly that question. And we start it from the perspective
of, well, modern neuroscience is telling us that all decisions are emotional,
and here's why.
And we kind of unpack that, and we unpack what heart versus head means these days,
and then we look at the sort of fast and slow thinking, and how to
trigger better ways of thinking.
And what's really going on when it comes to finding that nuance is quite complex.
You know, you've got a mix of a bias for machine learning
or a bias for automation
and taking what a machine says,
giving higher weight to that recommendation
than you would even in the face of evidence to the contrary.
So now the classic example is the people that follow Google Maps into a lake.
But these things happen all the time with
data, because you put
a good dashboard in front of someone and all of a sudden they forget
to ask the good question. So it's like,
how do you step in, how do you intervene, and how do you know when you should ask the right questions — and what are those questions?
And this is a very human process. And sitting around that table, presenting people with one
data point that they see differently, we sort of give people a bit of a release from that,
because that's quite anxiety-provoking. And
because the promise of the analytics movement has bled into how we think about each
other. So the promise of the analytics movement is that there's one single optimizable answer
that can be found best by a machine, not a human. And we forget that all difficult decisions by
definition are difficult because people have different perspectives. So then why do we have
different perspectives? Our cause and effect reasoning causes us to think in quite noisy ways.
This is recent work by Daniel Kahneman and Olivier Sibony. And we have this noisy, undesirable variability in our thinking.
That variability can be desirable.
It's called creativity, right?
We all have a different perspective.
But we'll show people perception illusions, perception pictures.
We'll ask them: what do you see?
And everyone sees something different. It's quite predictable that everyone sees something different. So we shouldn't be
surprised that everyone sees something different in the same data. The question that then becomes:
what do you do about that? And what you do about that is firstly, embrace that diversity,
that diversity in thinking is what's going to get you through a
complex problem. And there's lots of techniques for optimizing and maximizing the human part of
that. Some of it is, well, people do need to have what we call minimum viable math. You really —
especially in something like insurance — should know what a mean is, should know
what a standard deviation is.
You'd be surprised.
Unfortunately, I'm not. Yeah, I get it. That's why
we teach minimum viable math: to give everyone the same common language. And especially
if you're using machine learning or any kind of predictive analytics, you really need to understand what a false positive
is. You really need to understand what a false negative is. You really need to understand how
optimizing for different things in different cohorts of the data can give you unintended outcomes in overall profitability,
for example.
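To make that minimum viable math concrete, here is a small Python sketch — the numbers are invented for illustration, and the fraud-model framing is my assumption, not an example from the conversation:

```python
# Minimum viable math, sketched: mean, standard deviation, and the
# false positive / false negative rates of a simple classifier.
# All numbers are invented for illustration.
import statistics

claims = [1200, 900, 15000, 1100, 980, 45000, 1300]  # hypothetical claim sizes
print(statistics.mean(claims), statistics.stdev(claims))

# Confusion counts for a hypothetical fraud model on one cohort:
tp, fp, fn, tn = 40, 15, 10, 935

false_positive_rate = fp / (fp + tn)  # honest claims flagged as fraud
false_negative_rate = fn / (fn + tp)  # fraud the model waves through
print(f"FPR={false_positive_rate:.1%}  FNR={false_negative_rate:.1%}")

# Tuning a threshold to improve one cohort's rates can quietly worsen
# another's, which is how unintended outcomes in overall profitability
# sneak in.
```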
And the final thing I just want to touch on is what you were talking about there in terms of the black swan.
And this is a new product for us.
But moving from decisions to complexity, a lot of our traditional tools, whether they be analysis tools or processes and
decision-making structures in organizations, are just not fit for purpose when it comes to this
new world of complexity. Whether it's because we are sorting by finer and finer grained cohorts in the data,
or because people on the other end have so much more agency —
20 years ago, you didn't really know what someone thought of you.
And now you know.
Social media will tell you what they think of you.
And these things can be organized.
And with the self-organization and this decentralization of control and this emergent property that
is now humanity on the internet, touching all businesses, normal statistics just flat
out doesn't work.
We have to turn to complexity science, which is coming to the
point that there are new heuristics and new shortcuts that we can take out of complexity
research. And a huge amount happened during COVID, just in terms of understanding epidemiological
models and things like that. But that math is just, I mean, the insurance industry is probably
one of the few industries that's poised to adopt some of that complex math to help with decision
making. But until humans really have access to some of this new science, we have to kind of
glean lessons out of it. And how we deal with the black swan is releasing yourself
from this need to have everything pinned down. It's really a different way of looking at uncertainty.
It's not trying to say that, well, there's a 0.01% chance of X, because we know that that's
just too hard for people to deal with. In fact, actually, that
one's not so bad. I mean, what are the probabilities we understand? One percent, ninety-nine
percent, fifty percent, zero percent, and a hundred percent. Those are the only five probabilities
that humans intuitively understand — I think that came from Richard Thaler. But we try and help people think in a much more dynamic, open, complex, networked way
so that you can be sort of released from this tyranny of having to really sort of grapple
with uncertainty in a way that's just counterintuitive to us, and open up the team
to thinking much more dynamically,
to solving problems as they come at you,
to being much more agile about how you use experimentation.
And you just see the data in a different way.
It really is a totally different way of thinking.
Yeah.
One of the things that you have in your book,
which wasn't a huge topic,
but it was a topic that I was very
interested in, and I just want to bring it up, considering my audience is probably on the
lighter side of familiarity with some of these topics. But I think when we're
looking at, say — and I know you've talked about predictive analytics, but still, those
predictions are based on past experience — you know, one of the,
I don't want to say questions, because it hasn't been presented to me directly, but something
people have framed multiple times in different ways when I'm dealing with big
dashboards and stuff, is the concept of:
how do we know when to step away from the data and trust, say, our gut, right?
And having been in business for 20 plus years now, and I know you guys have been in business
for a long time too. I think it's undeniable to say there's moments where you look at everything
the way that it looks and you're like, nah, we've got to go this way,
right? The answers are here, and you're like, why? I'm not a hundred percent sure. I see this,
and I see this over here, and I feel this, and there's this swelling, and I just can't
explain it a hundred percent, other than I know this is a direction we at least have to try, right? And that's a really
hard call. Those calls are becoming even tougher now that we have so much data behind every
decision, right? You struggle to justify them. One of the things I've seen
in some of the organizations that I've been in is that more data leads to more bureaucracy. People
are less willing to take chances, because those chances aren't necessarily backed up by the data that's given to them.
So how do you — if you're a leader, and, unfortunately, my style tends to be more
wrecking ball than craftsman — how do you know, or what is a good heuristic for, when to take the leap away from the data and when to stick to it?
And I know that's not an easy question, but I know it's a question a lot of people in our industry in particular are dealing with.
I'm sure many more are as well.
But it's a very common question.
Okay, we see this happening.
Feels like we should go this way, really.
But the data is telling us to go
here — how do we manage that divergence?
Yeah, I mean, I think it's a terrific question. I think it's the core of where we all are
right now, because what you highlighted is that the data can make us quite risk-averse.
We need the next data. If so much data is
available, surely the answer is there. So there's a couple of things, and this is really why we wrote
this book, because you can't go head-on into any of this. There's a
subtlety — the real nuance is being able to look at it from lots of different angles:
you know, pick up your wrecking ball and turn it around a few times. And the first one is feelings — there's no question that feelings
come first. If you don't like the way that graph looks, you're going to feel
it. And that feeling is going to impact how you evaluate it. Yeah — we
don't sample from our brains the way a computer does. It's a
probability distribution. Depending on how we feel, we're going to have a different reaction
to data. So the first thing is, how are your feelings actually influencing the way you process
information? Another nudge that I use all the time is ambiguous data. If the data is unclear, if the answer isn't in the data, then we have a natural tendency to use our intuition and our gut. So as a leader, you step back and you say: well, why is the answer not in the data? Is this question actually not able to be answered by the data?
Or is the data not representative in a way that helps me make a
decision? Yeah. So going to that next level of, do I want to use my gut because the data's not clear,
or do I want to use my gut because of some other reason? There's a lot of interesting research out
of — I can't remember who did it now — but founder-led organizations are able to take a lot
more of those step-away-from-the-data moments. And that's because the founder has more scope. They're seen
as more able to take risky bets, and it's because their name's on it. There's an accountability thing. So being able to decouple what the data says
is the right decision from the decision that is actually made by the human — and that's okay.
You know, data is past events. It's relying on a stable world. It's possibly biased. So are people, but there's going to be bias in there. Data is not
imaginative. It has no ability to make transformational creative leaps. It can only be
used in the service of those things. So in the end, it's totally fine that a human makes that
decision. But I think that we have got ourselves in a little bit of a
knot because of this promise of the analytics movement that the answer's in the data. It may not
be. Let me go back to what you started with around feelings. And I think that it's an important one,
especially as you described yourself as saying that, you know, my leadership style is using a
wrecking ball. So my question is, what's your emotional sweet spot for making
a good decision? Because when we're highly charged, when we're highly stressed, we will
lean more on intuition. We've evolved to do that. That's why we run when something is really
stressful, when something is scary. Those kinds of emotions, when we're highly charged, will lead us to use intuition
more. So the question is: what is your emotional sweet spot that allows you to find the places where
your intuition is actually reliable? Intuition is great, by the way. It's cheap, cognitively cheap.
It's generally good enough. It is all based on data, meaning the data of our own individual
experience. But it is something that's
quite useful. The question is though, where's the emotional sweet spot that allows you to say,
I'm going to slow down and I'm going to consider this a little bit more. I'm going to think through
this. And I think the next step would be to really evaluate — one of the nudges in our book is
about experience versus data. So when you look at that data and you go, I don't think so, stop and
query: what is it about your experience that's different from what the data is telling you?
And then thinking about how those two might be, well, you might want to rely on one or the other
experience versus data, or combine the two of them. So for instance, we've recently done some work with
a big retail operation and the data about what happens
in the retail stores can differ from individual experiences working in those stores. That makes
sense, right? Large data, individual experience. Sometimes one is more important than the other,
but sometimes you have to mesh them together in order to make a decision. You
can't just blindly follow one or the other. You have to go into it, and you learn from the extra context of: well, my experience is different
from the data, and here's why. Okay, now that I know why, what do I want to do with that?
Resolving that anomaly is, I think, a really important step, and it's actually really fun to do.
If your gut feeling is telling you something really different than the data, like you said, you explained the process of sort of digging into it more.
But resolving that anomaly can be extremely satisfying.
There's one from Tim Harford's work, about his experience of the London
Underground, where the trains are just packed all the time, but the data collected by Transport for
London suggests that the trains are empty. And he's like, wow, this doesn't make any sense. So he dug
into the data, and he explained how the measurements were
taken — because, you know, really understanding exactly the moment the measurement is taken,
and why, and who's looking at it. And his pithy integration of the story is:
well, Transport for London measures the experience of the trains, whereas he measures
the experience of the people. And that's such a lovely insight,
right? How do you move from measuring the experience of the trains to measuring the
experience of the people? So we sort of nudge, we have these nudges that have you really dig
into it from the perspective of: well, where exactly is that data point taken? Why is it taken?
Who's looking at it? And what kind of processing
happens before you see it as a chart or a graph or a table? And what you find as you step through
that process is you realize, huh, some of this was taken for an entirely different reason. It's
measuring a completely different experience. Yeah. What's up guys. Sorry to take you away from the episode, but as you know, we do not
run ads on this show. And in exchange for that, I need your help. If you're loving this episode,
if you enjoy this podcast, whether you're watching on YouTube or you're listening on
your favorite podcast platform, I would love for you to subscribe, share, comment if you're on YouTube, leave a rating review if
you're on Spotify or Apple iTunes, et cetera. This helps the show grow. It helps me bring more
guests in. We have a tremendous lineup of people coming in, men and women who've done incredible
things, sharing their stories around peak performance, leadership, growth, sales, the things that are going to help you grow as a person and grow your business. But they all check out comments, ratings, reviews.
They check out all this information before they come on. So as I reach out to more and more people
and want to bring them in and share their stories with you, I need your help. Share the show,
subscribe if you're not subscribed. And I'd love for you to leave a comment about the show because
I read all the comments. Or if you're on Apple or Spotify, leave a rating and review of this
show. I love you for listening to this show. And I hope you enjoy it. Listening as much as I do
creating the show for you. All right, I'm out of here. Peace. Let's get back to the episode.
I love that concept of whose experience are you measuring. I love that. So, a couple of things. One, they actually just discussed this concept of founders making more decisions off of judgment — I don't know if you guys listen to the All-In Podcast, which is a big entrepreneurial podcast — but, you know, one of the hosts,
Jason Calacanis, said that — I'm going to forget the stats, I'm not even going to try to quote
it, but there's some statistic that there's a certain percentage of equity at which, once the
founder is below that, they compress down — like they stop taking chances,
they stop stretching,
they stop breaking new boundaries.
They just really start day-to-day
operationally running the company.
But any like innovation slows,
all these things kind of slow
because it has to do with the fact
that at a certain point of equity,
they can be fired.
And as soon as the founder hits
whatever that percentage of equity is
at which they can potentially be fired,
the growth, innovation, and everything just compresses way down
because now they can't step out onto a ledge and come back from it.
And I find that to be incredibly interesting because I have been fired
multiple times and seemingly because I have not learned whatever that trigger
point is that's broken in my brain. So, you know, and again, to the part where you asked the question —
like, you know, my leadership style is to be more of a wrecking ball — oftentimes, I think
the reason that I prefer that method personally is I like to know the actual answer. I really struggle with
armchair — I don't know what to call it, quarterbacking? We're not playing
football — armchair decision-making, you know what I mean? Where we kind of,
if I try something and I get a result, then I know what the answer is versus if I just kind
of sit back and go, well, you know, we think this is what would happen if we tried that thing. So we're not
going to do it. I tend to just be like, okay, let's do it. Let's go try that
thing. And if it works, then, sure enough, we know what the answer is.
And I don't know that that's for everybody or the right way because
it gets you in a lot of trouble. But what I do think you get is very real, tangible data points
on what actually happens and what doesn't.
That's why I definitely think you want to differentiate between throwing stuff at the wall and just trying
things versus a good experiment. Testing everything is a good thing. So if you can — I mean, the discipline is:
write down the hypothesis, design the experiment, go do it. That is the way to not be
overwhelmed by data. It's also the way to be cautious, to be realistic in
what the outcomes are that you're expecting.
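As a concrete illustration of that discipline — my sketch, not the authors' method, with invented numbers — a simple two-proportion z-test is one common way to check whether an experiment's result is more than noise:

```python
# Hypothesis: the new quote page converts better than the old one.
# Design: split traffic, count conversions, test the difference.
# Numbers below are invented for illustration.
from math import sqrt
from statistics import NormalDist

control_n, control_conv = 1000, 80    # old page: 8.0% conversion
variant_n, variant_conv = 1000, 104   # new page: 10.4% conversion

p1, p2 = control_conv / control_n, variant_conv / variant_n
pooled = (control_conv + variant_conv) / (control_n + variant_n)
se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"lift={p2 - p1:.1%}  z={z:.2f}  p={p_value:.3f}")
# Write the hypothesis down *before* looking at this number —
# otherwise it's throwing stuff at the wall, not an experiment.
```

One of the nudges that we use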
an awful lot, and so do people that, you know, we come back and talk to people after a year or so,
and it's become sort of one of their favorites, is called calibrate confidence. And it's the idea
that most people are overconfident most of the time. And that's served
us well as a species, right? You don't go and try stuff if you don't have some degree of
overconfidence. If you knew everything that you were up against over the last decade, how many
different decisions would you have made? Sometimes it's better not to know, you know. So you've got to balance this a bit.
But in general, that it's a good idea
to have a good understanding
of the state of your own knowledge
and that being able to calibrate your accuracy
with how well you understand something
is actually a pretty good thing.
And so being able to put a number on your knowledge,
I'm 100% sure of that, or I'm 90% sure of that, or I'm 75% sure of that. One of the things
that enables is, one, it enables you to think: huh, okay, I have to put a number on it. So
you come up with one, and you realize as you do that, you sort of
generate this curve in your own mind as to where you sit in your state of your own knowledge.
But it also allows other people — or you — to flip it around and say:
80% confident? Why not 100? What's that 20%? What's up with that? And what it does is it forces an explanation, and explanations
are generative. You don't just blurt out something; you actually have to sit and think. And
most people, most of the time, under-explain. So the minute they have to explain — and you can do it
to yourself — it really draws it out, and you generate new knowledge by actually
doing it. You generate a new understanding in yourself and in others. It's a very, very powerful
technique. And it doesn't mean that you become a risk averse sort of institutionalized ops guy.
It means that you are more able to recognize the state of your own knowledge.
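One simple way to score that practice over time — my sketch, not from the book; the calls and confidences below are invented — is a Brier score, which penalizes the gap between stated confidence and what actually happened:

```python
# Calibrate confidence, sketched: log each call with a stated confidence,
# then score yourself once outcomes are known. Data here is invented.
calls = [
    # (stated confidence that it happens, did it actually happen?)
    (0.90, True),
    (0.80, True),
    (0.80, False),
    (0.60, True),
    (1.00, False),  # the "100% sure" call that went wrong hurts the most
]

# Brier score: mean squared gap between confidence and outcome.
# 0.0 is perfect; 0.25 is what always saying "50/50" would get you.
brier = sum((conf - float(hit)) ** 2 for conf, hit in calls) / len(calls)
print(f"Brier score: {brier:.3f}")

# "80% confident? Why not 100? What's that 20%?" -- the score makes
# that conversation concrete over many decisions, not just one.
```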
Because, you know, none of us want to program in regrets or live a life where we're sort
of denying that we regretted a decision.
But a regret's okay.
I mean, you know, you want a few false positives, right?
You want to be able to do a few things that were kind of wrong.
They were the right call statistically to sort of have enough risk taking in your life.
But starting out with at least a knowledge of your own sort of state of knowledge, I
think is really powerful.
And yeah, you might back off a few things that you otherwise would have plowed into,
but you might not.
You might actually have a better perspective on why you're doing something, even though it is risky. So I'm only
60% sure, but this is a real high stakes call. If we win this one, we've cracked this nut.
So you're able to differentiate between wild-ass, un-thought-out risks and a really calculated we're-
doing-this-because-if-we-win-it-we've-won-everything. Yeah. Is it — and I'm
going to butcher whatever I'm trying to explain to you; I always mangle metaphors and analogies, I
was a math major, so words are not always my specialty — but is it fair to say that,
if you're trying to make a decision, data gives you kind of the vector, the direction that you should be looking. And then your intuition, gut experience, the accumulation of what you've had
as a professional gives you the
ability to pinpoint where you actually go inside that range. It's going to
give you a range of a direction — if you have 360 degrees, it's going to say: this data
shows us this is kind of where we want to be pointed. And then, because I've been doing this for
10 years and I've had these seven experiences, here are the three places inside that range where
we want to run tests. Your intuition is the scalpel. Is that right,
or is it the opposite, or is that just a crazy example?
Yeah, well, no, it's a really good
example. The neuroscientists would say it's exactly the opposite. Antonio Damasio, who says feelings
come first, says that this is how it works.
Feelings and intuition will point you to the appropriate space in the decision space to look.
And then data actually allows you to really sort of hone in
on exactly what that analysis outcome is.
Then you would add, again, that your humanity — your sense of accountability, your risk aversion,
your you — enables you to actually grapple with the uncertainty and make the decision.
So it's kind of like a data sandwich is what you're describing.
Yes, that data sandwich.
Data sandwich, I like that.
But it's also, I mean, I'm reflecting back
on what you were saying about the data
about founders and their percentages of ownership
and their risks and so forth.
Because I think about — you know, you expressed one,
I haven't seen the study, so I can't have any reflection
other than just hearing what you said
and going, huh, really?
Like, because my intuition is telling me,
I don't know. I'm questioning that conclusion from that data. And it's because of my lived
experience of which companies have been, I think, the most innovative at different eras in time,
GE under Jack Welch, and Apple after Steve came back, and Disney under Bob Iger — those are all, you
know, remarkable success stories where the leader didn't have a meaningful ownership
percentage.
Does that mean that my experience overrules the data?
No, but it does mean that if I was presented with that and I was thinking about actually
using that for some, you know, for some purpose, I'd want to dig deeper into it and question
it a little bit more to be able to
understand that delta. And one of the nudges we do have in our book is around who are the humans
in the data? So understanding what the data representation is. So who are the humans in
the data? Where is it? And then there's also the question of how are you actually drawing what
story out of it? So there could be an alternate explanation. So we talk about listing
what you'd have to believe to believe the opposite. Is it about the founder's percentage,
or could it be about the size of the company? I don't know. Are there other alternative answers
and other alternative causes of the result? Yeah. I think this is really good, to your point.
And I forget which one of you made it, around what is good data, right? Because — and,
again, now that you say that, I didn't leave this piece of information out on
purpose, I just didn't add it — they were talking specifically about early-stage companies.
So you're going from a founder who owns 100% to when they lose that percentage, which is often because,
one, they just got paid. So they went from usually broke to not broke. And they went from no one can fire me, because I'm, you know, one of three people
or whatever, to now I can be fired and I have something to lose.
By venture capitalists, like the people who you're quoting.
Yeah, like they're showing up and actually firing the CEO.
Yes.
Yeah, exactly.
So, you know, you take that, and it's a really
interesting conversation. Because you take that same individual — they're maybe a co-founder or
the only founder, they own the majority of the company, they're growing it,
they can't be fired, they can also go broke tomorrow — and as they're
growing, they bring in a big investor. They take a smaller cut. Now, you know,
they have some money, they have something to lose. They have a board that can kick them out. Right? Now, all of a sudden, they start to play it a little
safer because you don't want to make that decision against the grain of the data where the board of
directors in your VCs can come in and go, the data told you to do this. You did this. It didn't work.
You're out, right? Where, on the flip side, everyone that you just named — while 100% true, incredibly innovative — were also late-stage, enormous companies, and those guys had big, huge contracts.
And there's a sunk cost fallacy, I would believe, in the people who gave them those contracts: paying millions a year plus a $50 million bonus to run, you know, Disney or whatever, that I'm going to
kind of give him some leeway in making some decisions, because we're paying this person so
much. And again, I'm just spitballing off the top of my head, but it is incredibly
interesting how that one data point — these were early-stage companies versus all companies —
completely changes the reference of what that story can mean. So that actually worked out — I didn't
mean it to — that actually worked out pretty well. I think it worked out quite well.
The serendipity of a good conversation. So, all right, I wanted to go back —
because this is a concept I think is tremendous, and I just want to flesh it out a little more —
to that concept with the train, the subway system, where we were talking about whose experience are we measuring, right?
That feels incredibly powerful to me, because, just as we saw,
a slight miscommunication drastically changed our experience with a comment.
Now, again, he just threw this comment out on a podcast. Who knows how real the study is, right? It seemingly
felt real. Good conversation for what we're talking about. But my point is, how do we know
we're measuring the right experience for our business? How do we know that? What's that filter system or heuristic, I guess.
I'm thinking.
It's a good question.
So I mean, I think the,
how do you know whether you're measuring the right experience?
I think I'm pausing because there's so many different
sort of contextual answers to the question.
Yeah.
Right.
If you think about in the insurance industry, obviously you've got the perspective of the
insurer and the insured, potentially also the reinsurer, right?
Because you've got lots of different layers in the industry and thinking about, let's
say you're trying to, you know, assess whether a new insurance product is successful.
I'm following
your lead and just kind of spitballing here. Yeah, go ahead. You know, I mean, understanding —
the first question, I would say, of whether you're measuring the right
question, the right data, and the right perspective, is to be a little bit more in-depth
in terms of what the question is. So define success more deeply.
Think about what you mean by that. So there's this question of making sure you're starting
with that. I think there's also, when you actually get to the conclusion and you say, yes, this has
been a very successful product, going through the classic five whys to make sure you're really
digging through to the right answer. Have you actually gotten to an answer that you actually think is truly there? Because success could be high revenue, you know, for the company,
success could be low risk for the reinsurer, success could be customer satisfaction for the
insured. There's a lot of different layers of what success might mean, and that's usually where I
think people get caught: they're not sure exactly what they're asking of the data, and so
therefore they're not sure which experience to rely on. Well, data — you know, generally it's
harder to collect the thing you really want to know about than it is to collect the easy thing.
So I think the first answer is it depends on what your goal is, right?
So that's the kind of overarching meta answer.
But if you go down a layer from that, there's a couple of things that can happen. One is that pretty quickly, in a complex situation like insurance and customer
service, you probably pretty quickly find there's some sort of paradox. There's some sort of
dilemma. You can't have the perfect customer service at the same time as keeping costs down,
or you can't have the, I mean, in insurance, there's always this
background of, we want to have great customer service and settle claims and make everyone
happy. But at the same time, everybody knows that they're on the call with some sort of rationing
process, some sort of gotcha kind of process. So I think that being able to very quickly get down to the point
that you know why measuring something is hard. Why is this a hard problem to solve? What's the
dilemma that we're constantly going back and forth on? What are the poles of the dilemma?
So I think that's an important one. And another one — this is
more on the complexity side of our house, but in the decision one — there's a really important
concept that, again, came from Danny Kahneman, which is that we tend to substitute an easy
answer for the right answer. So the simplest example is: the right question is, how happy am I with my life?
The easy question is, how do I feel right now? And that happens all the time when it comes to data,
all the time when it comes to measurement. So use this nudge of right versus easy: what is the right measure — write that down —
what do we really want to know? And then, what's the easy measure? And actually put them
in front of you. Because in this world of data gathering by machine, of unconscious stuff, or of, you know, using a product like Cogito to capture
emotional responses and what have you, and put nudges into, say, a call center
with agents in it, there are so many things that are easy to measure,
not necessarily right measures. But it doesn't mean you don't do them. It just means that you really need to be
much more consciously aware of on one level, what are the proxies, but then on the other,
just what's the right answer? What's the right thing we're trying to get versus what's the easy
thing? Yeah. There's two really incredibly relevant problems that you guys addressed in there. One is, so technically,
insurance agents work for carriers. So all the marketing that you'll see out of insurance agents
is that we work for you. That is technically not accurate. Now, there is a term for that. It's
called a broker. But in the United States, 90 plus percent of the property casualty insurance
agents are not brokers, they're agents, which means that technically they work for the carrier.
While in order to get paid by the carrier, you have to convince your client. So you have large stakeholders who are, you know, in some cases at odds with
each other. Do you want the carrier to be happy? Because if the
carrier is happy, you get faster response time, oftentimes higher and more inclusive compensation.
You get access to additional products. You get access to special programs, special pricing,
right? If the carrier is happy, but if the carrier is happy, that doesn't necessarily mean that the clients are as
happy. Even if they purchased from you, it doesn't mean they're as happy as they could be. And if you
measure straight client happiness, and you're only about the client and all that matters is the client
relationship. Well, oftentimes, and this is very, very common, your relationship with the carriers
starts to actually become at
odds. And now you're at odds with who you've actually signed a contract with and technically are responsible to,
in favor of the client — which sounds good and feels good. And everyone
likes to thump their chest and say, my clients love me. But if your carriers hate you, then your
business is making less money. You oftentimes can't offer as good a product set. You may not get first pass into different beta programs or
specialty programs or specialty lines programs that can ultimately provide either greater access
or just better products to your customers. And that's not even to mention:
do you care how your employees feel, how they're doing, their metrics,
or the vendors that you work with, or any referral partners that come in? So you have — and we're not alone in this — the
principal-agent problem in the insurance industry, which is fairly unique, not wholly unique,
but fairly unique. I'm thinking through just the millions of conversations —
or probably thousands is technically accurate — of conversations that I've had around this particular
problem. Where do you focus your attention, and which relationship is more important
to value? And I think that goes all the way back down to the base of where you began your
answer, which was: how do you define success? Is success maximizing revenue in
every way, shape or form, then probably you need your carriers to be happy and focus on the things
that make them happy. Do you care more about the relationship you have with your clients,
the longevity of those relationships, the ancillary benefits that come out of having a deep,
rich, well-built, solid foundation with your clients — which can also be profitable as well, but
probably not maximizing profit.
And I think that's going to be different for every agency or every individual business
and who those leaders are and who the people are inside them.
And there's no — I get — particularly in our industry, again, and I'm sure this is the case with
others, it's just that I've spent almost two decades of my life in this one — as soon as someone
starts telling me, this is the way you should do it, every bell in my
being starts to go off and say, ah, I don't know.
I've been part of too many different conversations for that to be true. So it does
seem like this is work that very much needs to be done on an individual basis. And this is maybe
where my question is. My next question is coming from being that I want to be cognizant of your
time and respectful to our audience's time. It does vary. It seems very much like we should be
doing this work on an individual basis
versus, and not that we can't look at best practice studies and stuff like that. They're
probably good, good benchmarks, but versus relying solely on the benchmarks or the frameworks passed
down by, by a consultant, we need to be maybe working with a consultant to figure out what
this is individually. This is individual work that we need to do because it oftentimes can be very
unique to us. Is that a fair, is that a fair assessment?
I think that's fair. And I'd actually make it even more individual, in the sense that basically everything is moving toward being more personalized. I'd hazard a guess that, you know, when you started out 20 years ago, these relationships were much, much more one-to-one, and there weren't a lot of machines involved.
Now, what if 50 percent of the value of that relationship is done by machine? And what if, inside of that, there's an artificial intelligence that's making predictions, and sometimes decisions are still coupled with that prediction, because there are policies that are put in, rules that are set at scale across the whole client base, or whatever? But if there's agency in that, if there's variability, if there's agency in the way that you're making your judgments and your decisions, this is a much more complex system. Suddenly we're down into self-organizing, we're down into emergent properties, we're down into adaptation: things that you make decisions on within your own discretion and judgment that are fundamentally different than the decisions you would have made 20 years ago. You've got totally different access to information, and the rules are different. There are either more rules or fewer rules, more decisions or fewer decisions; they're sort of on a spectrum. So I think that's actually the real reason this is so individual and so unique, and why we went to nudges: because in the end, this is about personal practice. This is about getting to know what it is that you value, and being able to understand how you specifically understand context, how your imagination works, how your creativity works. You're clearly a bit of a status quo buster yourself. So that's worked really well for you, right? That wrecking ball's worked well. It worked really well for me until I turned 40, and then it just didn't work. And I don't know what it was about that.
Some sort of transition. It's a little bit of, you know, you can be a kind of young upstart, and a lot of us are contrarians in our younger years, but that doesn't work as you get older. People expect the gray hairs. They expect that wisdom. They're not really as forgiving of those behaviors. And plus, there's a lot of survivorship bias: you're here, it's worked. You're not looking at the people who didn't survive when they were wrecking balls. I won't, but I could tell you some names of people who just didn't survive that process, and they're no longer those kinds of decision makers.
So I think this world of personalization really exists because we can do it. And I think it's quite wise to think about this as an individual decision bubbling up to your organization's decision, whatever size your organization is.
And think about other industries that have gone through perhaps somewhat similar major transitions. Just look at the financial services business and portfolio management. Twenty years ago, it was all about stockbrokers making individual stock calls for their clients. You'd want to work with one of the big banks because they had the flow; they had the trading desk right there. Their optimization was around: how much am I getting in trading commissions versus how much money am I making for my clients? It was that kind of tension. You could go to smaller places, but they wouldn't have the same access to the market timing that you could get at the big banks. Now, fast-forward 20 years, and it's a totally different world.
A lot of the portfolios that you're picking are optimized around ETFs that are all set up through large-scale research operations. If you go to the little brokers, they can pick anything you want in the market. Actually, when you go to the big banks now, what they're allowed to give to their clients is all governed out of central research organizations, because the regulations have changed. Still, it's the same sorts of tensions.
Am I making money for the bank? Am I making money for my client? But the whole profile of who you are, what you do, and what you can offer has changed. I have friends who've lived through that entire timeline, being sort of the high-net-worth folks at big banks, and their jobs are completely different from what they were 20, 25 years ago. But now, do they stay there? Well, that's their own individual, personal decision, and that's fine. And I would echo what Helen said: our premise in our book, and our premise around decision-making, is that there isn't one optimizable answer. There isn't one heuristic to follow. There isn't one process to follow. There isn't a six-step process to make a good decision. We believe this is truly a practice, which is why we have 50 nudges that help you get better. That's why we call it Make Better Decisions. It's more like meditation, in terms of practice and thinking about what works for you as an individual inside the sphere of people that you're making decisions with, than it is about some sort of step-by-step process that you can put into boxes and have a framework. That can be really unsatisfying for people, but when we say it, we truly believe it. We're not playing some sort of easy get-out-of-jail-free card by saying there isn't a six-step process and we just haven't invented it. We truly believe decision-making is way too complex to have a set process. You have to think about: what nudge do I want to use right now, in this situation, with this topic, with this group of people, to make my decisions just a little bit better than they would have been otherwise?
Yeah, I love that. I love it. Guys, I have thoroughly enjoyed this conversation. The book is Make Better Decisions: How to Improve Your Decision-Making in the Digital Age, on Amazon and, I'm assuming, the rest of the places that work.
Local bookshops, wherever you need it.
If people want to connect with you guys in the digital space, what's your spot? Website,
LinkedIn, where do you want people to go to connect with you?
Yeah, our website is getsonder.com, and you can reach us there. Hello at getsonder.com is an easy email address. You can find us both on LinkedIn. We also have a podcast ourselves called Artificiality, which is a combination podcast and newsletter that we host on Substack. So you can find us there.
That's great. And I'm on TikTok. Yes, we do. We do participate in some of the other social medias. How do you like TikTok and Instagram and Facebook? TikTok, I set a time limit. Any more than five minutes and I'm wasting too much time, but it's just too damn addictive. I think the interesting thing about TikTok and Instagram for us is that we wrote this book coming out of working in a corporate setting and running workshops, but it so quickly becomes really applicable to people's personal lives. We got a wonderful comment on a TikTok video where someone said, you just explained why my marriage has been in the shitter for the last five years. Thank you. And so that was quite an eye-opening moment, and quite encouraging, especially since we're obviously a married couple. We work together. We've done this for a long time. It's kind of nice to feel like people find applicability in their personal lives too.
That's good. That's tremendous. I mean, all the concepts we're talking about, while applied obviously to business, I'm sure have a derivative that applies very much to how you live. And I really like the fact that you position it as a practice. In my own life, I very much try to approach things as a practice, whereas when I was younger, I think I was oftentimes shooting for the goal: did I get to this thing or did I not get to this thing? And today, and hopefully maybe it's turning 40, which I did recently, viewing all the changes in our lives as practices, unless it's something very acute, tends to be a more sustainable, predictable, and proactive way of getting stuff done. So I love it. This has been an absolutely wonderful conversation. I wish you guys nothing but success on the book and everything that you're doing. Obviously, I highly recommend it and hope everyone who's listening will check it out. Guys, I appreciate your time, and I hope you have a wonderful day.
Thank you. It's been fun.
In one call.
This is the exact method we use to close 1,200 clients in under three years during the pandemic.
No fluff, no endless follow-ups, just results, fast.
Based on behavioral psychology and battle-tested, the one-call close system eliminates excuses and gets the prospect saying yes more than you ever thought possible.
If you're ready to stop losing opportunities and start winning,
visit MasterOfTheClose.com.
That's MasterOfTheClose.com.
Do it today.