Something You Should Know - The Strange Factors That Influence What You Buy & The Fascinating Story of Data
Episode Date: May 29, 2023
Ever wonder why mosquitoes are attracted to some people more than others? I begin this episode by discussing 5 factors that make you so irresistible to those pesky insects. https://theweek.com/articl...es/462191/5-things-that-make-irresistible-mosquitoes Consumer behavior is a fascinating area of research. While people like to think they make objective decisions about what to buy or not buy, there are a lot of factors that influence those decisions such as description, price, ease of use and many more. And they influence you in ways that are not so obvious. Joining me to explain these factors is Richard Shotton. He is a behavioral scientist and author of the book The Illusion of Choice: 16 ½ Psychological Biases That Influence What We Buy (https://amzn.to/3q2Vne9). What do you think of when you hear the word data? Doesn’t it seem that data has an air of certainty, authority and objectivity? It’s hard to argue with data, right? That’s what concerns Chris Wiggins who is here to take a hard look at the history of data, algorithms and statistics and how they have come to drive so much of our lives. Should we accept data simply because, well, it’s data? Or should we be a bit more skeptical? Chris Wiggins is an associate professor of applied mathematics at Columbia University and he is the New York Times’s chief data scientist and co-author of the book How Data Happened: A History from the Age of Reason to the Age of Algorithms (https://amzn.to/3luS1Pb). Ever pull a green potato chip out of the bag and wonder if it is safe to eat? What about the occasional dark brown chip? What’s the deal with that one? Listen as I explain what to do with these off-color chips. https://www.mentalfloss.com/article/30746/whats-those-green-potato-chips-you-sometimes-find PLEASE SUPPORT OUR SPONSORS! Indeed is the hiring platform where you can Attract, Interview, and Hire all in one place!
Start hiring NOW with a $75 SPONSORED JOB CREDIT to upgrade your job post at https://Indeed.com/SOMETHING Offer good for a limited time. Discover Credit Cards do something pretty awesome. At the end of your first year, they automatically double all the cash back you’ve earned! See terms and check it out for yourself at https://Discover.com/match If you own a small business, you know the value of time. Innovation Refunds does too! They've made it easy to apply for the employee retention credit or ERC by going to https://getrefunds.com to see if your business qualifies in less than 8 minutes! Innovation Refunds has helped small businesses collect over $3 billion in payroll tax refunds! Let’s find “us” again by putting our phones down for five. Five days, five hours, even five minutes. Join U.S. Cellular in the Phones Down For Five challenge! Find out more at https://USCellular.com/findus Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
The search for truth never ends.
Introducing June's Journey, a hidden object mobile game with a captivating story.
Connect with friends, explore the roaring 20s, and enjoy thrilling activities and challenges
while supporting environmental causes.
After seven years, the adventure continues with our immersive travels feature.
Explore distant cultures and engage in exciting experiences.
There's always something new to discover.
Are you ready?
Download June's Journey now on Android or iOS.
Today on Something You Should Know,
why are mosquitoes attracted to some people more than others?
Then, why do you buy the things you buy?
What influences your purchasing choices?
I think a lot of people
have a belief that they weigh up their choices in a very deliberative, reflective manner,
whereas an awful lot of research suggests that there are subtle influences on our choices that
have a much bigger than expected impact. Also, you know that occasional green potato chip you get? Is it safe to eat?
And data.
We rely on data for so much, we've come to revere it.
Data comes with this sort of aura of objectivity and truthiness,
and that's part of, you know, what I've spent a lot of time trying to work through.
It's like, how did it come to pass that when somebody says they have numbers about a thing,
that makes it somehow more true than other ways of knowing the world?
All this today on Something You Should Know.
Mama, look at me!
Vroom, vroom! I'm going really fast!
I just got my license. Can I borrow
the car, please, Mom? Kids go from 0
to 18 in no time. You'll be relieved
they have 24-7 roadside assistance
with Intact Insurance.
Mom, can we go to Nana's house tomorrow?
Go to Jack's place today.
I'll just take the car. Don't wait up, okay?
Kids go from 0 to 18 in no time, don't they?
At Intact Insurance, we insure your car so you can enjoy the ride.
Visit intact.ca or talk to your broker. Conditions apply.
Something you should know.
Fascinating intel.
The world's top experts.
And practical advice you can use in your life.
Today, Something You Should Know with Mike Carruthers.
Hi. Hey, welcome to Something You Should Know.
You know, there are a lot of reasons to like summertime, warmer weather.
I'm sure you can name a bunch of them.
But there is something about summer weather and warm weather that is just not so great.
And that's mosquitoes.
And as I'm sure you've noticed, mosquitoes often seem to be attracted to some people more than others.
Why?
Well, here are five things that can make a person more appealing to a mosquito. First of all, consuming alcohol may make your blood tastier to mosquitoes,
according to a French study in 2011.
Exercise.
According to another study, exercise triggers this trifecta of biological signals that makes your exterior especially delicious to mosquitoes.
Your blood type seems to matter. People with type O blood
are much more susceptible to mosquito bites than any other blood group.
Being male. Lady mosquitoes, which are the only ones that bite
(the males don't bite, it's just the females),
seem to prefer men. In fact, larger people in general
attract more mosquitoes than smaller people.
And pregnancy is an issue. Mosquitoes are attracted to women who are pregnant.
In one study, pregnant women attracted twice as many mosquitoes as those who were not pregnant.
And that is something you should know. When you buy things,
when you make purchasing decisions,
it often seems like you're making that choice
of your own free will.
What you buy is your choice.
Well, maybe. Sometimes.
But there are a lot of other factors coming at you,
often under the radar,
that influence what you buy.
Marketers spend a lot of money and effort figuring out what these influences are and
put them into practice to get you to buy their products or services.
And you are probably unaware of many of them.
But you're about to be made aware of them by my guest, Richard Shotton.
He is a behavioral scientist who has a great book out called The Illusion of Choice, 16 and a half psychological biases that influence what we buy.
Hi, Richard. Welcome to Something You Should Know.
Very good to see you.
So explain, and go a little deeper into, what I was just starting to say there about how there are these other factors that influence our buying decisions and what you call
this illusion of choice. The illusion part is that often our choices aren't made for the reasons that
we expect. I think a lot of people have a belief that they weigh up their choices in a very
deliberative, reflective manner. Whereas an awful lot of research from
behavioral science and psychology suggests that there are subtle influences on our choices that
have a much bigger than expected impact. So let's jump right into an example that
explains what you just said. And maybe a good one is the subtitle of your book. The title of your book is The Illusion of Choice:
16 and a Half Psychological Biases That Influence What We Buy, and that 16 and a half is one of these
principles, so let's start with that.
Now, there is a lovely study by Schindler at Rutgers University
that suggests a degree of precision makes a communicator more credible and more believable.
So in his study, he recruits a group of people, shows them an ad for a deodorant,
and sometimes this ad claims that the deodorant reduces perspiration by 50%.
Other occasions, the ad says it reduces perspiration by 47% or 53%.
And when later on Schindler asks people how accurate is that
deodorant claim, how credible is it, there is a difference in people's responses. So people think
that the precise claim is about five percent more credible, ten percent more accurate. So even though
people see exactly the same content, to a meaningful degree,
introducing this element of precision boosts credibility and believability. What Schindler
argues is, generally in life, people who know what they talk about talk very precisely. People who
don't know what they talk about talk in generalities, and over time people learn and
conflate the two things. And that translates to choice, so make the connection to how that
influences my choice. So the argument there would be if you saw the deodorant offering 50% you might
ignore it, you might not choose that deodorant. That very subtle tweak by the
advertiser introducing 47% or 53%, introducing this almost illusory degree of precision,
that's what influences you. Most people, if they were put on the spot, would say,
oh, I chose the deodorant because of the benefits that were listed. They wouldn't
zone in on the fact that one of the levers of influence is that hidden point of precision.
So I'd like you to talk about what you call the IKEA effect. And it has to do with
this old story in marketing lore about Betty Crocker cake mix.
Oh, yeah.
And how that, because I'd heard this a long time ago, that when they tried to sell cake mix
just as a complete cake mix, it didn't do as well. But when they made consumers add eggs, it made us feel like we were really baking, and
that helped. So explain all that.
Yeah, absolutely. So the story you're referencing is an old anecdote
that's been passed around in marketing, but it was never really certain whether it was true or not. And the original claim was that when Betty Crocker launched their instant cake mix,
this is back in the 1950s when they had realized that there were an increasing number of mothers
going out to work. So two parents out with jobs didn't have time to bake cakes from scratch, so they realized there was a market for
an instant cake mix. They create a cake mix, very, very simple: you buy the mix, tear open the package,
pour it into a tub, whisk it up with water, stick it in the oven, and hey presto, 20 minutes later you've got a
cake. Now, when they launched it like that, there were limited
sales; it wasn't very popular. But then a psychologist who's working with Betty Crocker begins to wonder
if they've just made the whole process too easy. Because a cake, it's not just about getting calories
on board very quickly; it's about expressing your love for the family. And how much love are you really expressing if you've put no effort into it at all?
So what Betty Crocker did was add in an artificial extra step.
Now they change the cake mix.
You buy the cake mix, tear open the packet, pour it into the tub, whisk it up with some water.
And now you add an extra step in, which is they tell you to crack an egg into it.
And it's only when they add that extra step that sales take off. It's only when there's
this degree of effort involved that it boosts the sales. Now, two psychologists,
Dan Ariely and Michael Norton, back in, I think, 2012, begin to wonder whether that anecdote was just a story. So they tested that principle
in more controlled circumstances. They recruit people and they ask them to bid money to take
home an IKEA box. And people will bid a very small sum.
You mean an IKEA box from the store, IKEA, like a box of furniture?
Next group of people, they make the same offer: how much are you prepared to pay for this IKEA box?
But this time, the box hasn't been assembled. The participants have to build the box themselves.
And in that second scenario, people's willingness to pay goes up by the order of about 50%.
Now, that study was repeated
again and again in slightly different scenarios but each time you see essentially the same finding
which is the more effort people put into a product the more they value it. So there is an interesting
opportunity here for marketers which is on some, getting your consumers to go to a bit of effort,
making them put a bit of work in will make them appreciate your product more. So that's another
example of a driver of behavior that people might not be aware of. Let's talk about price,
because price seemingly is an objective thing. You decide whether or not to buy something
based on the price. The price is an objective number. It can't possibly be influenced by
anything else. It is the price. The price is the price. And a lot of people say they're price
shoppers. They're very sensitive to price. So how is that influenced by marketers? So there are a couple of interesting parts there.
The first is the same price can be made to appear very different with a few little tricks.
The second bit, which I think speaks exactly to your point, is sometimes people labor under the assumption that high price equals high quality. There's a lovely study from Babashev at Stanford where he serves five different bottles
of wine and people sample these wines and then they rate how much they like
them and each of the wines has a very prominent price label on it. The twist in
the experiment is even though there are five different bottles there are only four different types of wine. One of them has been repeated. So
people are drinking say a Merlot, they're having a sip of Merlot, they think it comes
from a five dollar bottle and they will give it a mediocre rating. Then a few
minutes later they take a sip of the same Merlot, but from a bottle that says it costs $45. Now, the average rating
in that second setting is 70, 7-0, 70% higher. Shiv's argument is people assume that high-price
items are higher quality because they often are, but they take it too far. So in this setting, when people are drinking
exactly the same liquid, they expect that second wine to taste better because it's more expensive
and that becomes a self-fulfilling prophecy. So our expectations affect our actual experience.
So if you're a business or a brand, you've got to be very careful about heavy discounting.
Because what it will do over time is train your customer to think your product isn't very high quality.
And that expectation will affect their actual experience.
We're talking about all the factors that influence how consumers behave.
And my guest is Richard Shotton.
He is author of the book The Illusion of Choice.
Bumble knows it's hard to start conversations.
Hey. No, too basic.
Hi there.
Still no.
What about hello, handsome?
Who knew you could give yourself the ick?
That's why Bumble is changing how you start conversations.
You can now make the first move or not.
With opening moves, you simply choose a question to be automatically sent to your matches.
Then sit back and let your matches start the chat.
Download Bumble and try it for yourself.
At Wealthsimple, we're built for whatever you're building.
Built for Jane, who wants to break into the housing
market. We're built for Ted, who's obsessed with what's happening in the global markets.
And built for Celine, who just wants to retire and explore the world's flea markets. So take a
moment and think about what you're building for. We've got the financial tools to help make it
happen. Wealthsimple, built for possibilities. Visit
wealthsimple.com slash possibilities. So Richard, you said, or I think you said,
there are ways to manipulate the price, even though the price is the price. But if that's
the case, how do you manipulate the price to influence a consumer? I did a study last year with Michael Aaron Flicker. We recruited 282 people
and we told half of them that Sierra Nevada Pale Ale 12-pack cost $18.99. And when we said to those
people, how good value is this brand? Just over 13% said it was good or great. The other half of the people, we showed
them the same brand, same volume of beer, 12 cans or 12 bottles, same price, but we added on
a couple of extra words. We said that's the same as $1.58 a bottle. And what we saw when we asked
people that question about value, we saw a more than doubling of people thinking it was good or great value.
We're up to 28% now.
The argument here, and this is an idea called unit reframing, is if you draw people's attention to the smaller absolute amount involved in a subunit of the product, it will boost their perception of value. The argument is that people, when they
are presented with a price, don't draw on every relevant bit of information; they put too much emphasis
on the elements of the price that are salient, that the business draws attention to.
So these tiny little tweaks in how you present the price can have a significant change on
behavior.
So if you run a business and you sell, say, broadband, don't go out and say it's $30 a
month, say it's a dollar a day.
A tiny change that can boost the margins and profitability of your business.
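The unit reframing described above is just division, dressed in different words. Here is a minimal Python sketch of the arithmetic (the function name `unit_reframe` and the output wording are my own illustration, not anything from Shotton's study or book):

```python
def unit_reframe(total_price: float, units: int, unit_name: str) -> str:
    """Re-express a pack price as a salient per-unit price (unit reframing)."""
    per_unit = total_price / units
    return f"${total_price:.2f} -- that's the same as ${per_unit:.2f} a {unit_name}"

# The 12-pack example from the conversation: $18.99 over 12 bottles
print(unit_reframe(18.99, 12, "bottle"))  # -> $18.99 -- that's the same as $1.58 a bottle

# The broadband example: $30 a month is about a dollar a day
print(unit_reframe(30.00, 30, "day"))     # -> $30.00 -- that's the same as $1.00 a day
```

Same price either way; only the subunit the buyer's attention lands on changes.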
Framing is another one of these marketing concepts that you talk about.
So explain what that is. Framing is a related idea. It's essentially the idea that people relate
or respond to descriptions of events rather than the events themselves. So there's a classic study from, I think it's 1974, and a psychologist called Elizabeth Loftus.
And she creates a video of two cars crashing together.
And she plays people this video.
And you could Google Loftus framing car video, and you can see the actual video online.
She plays people this video and then
she says to some of them, some of the viewers, how fast were the cars going when they smashed
together? Other people are asked how fast were those cars going when they collided together?
And even though people watch exactly the same video, you get markedly different responses.
The people who heard the collided
question guessed the speeds about, I think it's something like 25 or 27 percent, lower than the people
who heard the smashed phrasing of the question. Her argument is we don't respond to the actual event,
or at least that's not all we respond to. The words that are used to
describe an event act as a lens, and that changes how we respond to the situation.
So the argument here is how you frame an event, the language you use to describe an event,
will affect to a degree people's response. So if you are selling meat and you term that meat 25% fat,
you'll get a very different response.
People will think it will be greasier and fattier
than if you call it 75% lean.
Exactly the same statistic.
How we respond to it relates to the frame that's used.
So I think most people have heard of the halo effect
and know what it is, but how does it apply
to the illusion of choice?
What is that?
Because it steers you to buy something
because you think it's good?
So the broad idea of the halo effect
is if someone or something,
so it could be a person or a product
rather than us evaluating that product and all its different attributes if that product has one
standout ability or that person has one standout ability it affects our judgment of that person on
all the other attributes so if you are phenomenally friendly,
and that's the thing that we notice about you when we first meet you, the halo effect suggests that
we'll also assume that you are kinder, that you are more ethical, that you're more intelligent.
So it's the idea that a strong characteristic in one field bleeds into people's perceptions
of ability in other fields, even if they're unrelated.
That affects people's commercial decisions.
Because what it means is if you as a brand need to project the idea that you are trustworthy
or well-priced, it might be quite hard to prove those things in a
short TV ad or a short internet video but what you can do is show that you are
humorous or likeable. Those are things that are much much easier to convey and
if you convey those attributes in a very powerful way the halo effect suggests
you will subtly
influence people's perception of you on all these other different metrics you
know there's long been this idea in the world of market research that if you
want to know what your customers want just ask them they'll tell you but based
on what you're saying about how consumers behave, what they decide to buy and why
and all the influences, it doesn't seem like asking the consumer directly would give you
much information. I think you're completely right there. That is a longstanding idea and a very
widely researched idea in behavioral science. There's a wonderful phrase from a University
of Virginia psychologist called Timothy Wilson. And he says, we are strangers to ourselves,
that people don't have full introspective insight into their own motivations.
So if you send them a survey, or if you put them in a focus group, they'll give you lots of answers
about why they buy your beer or your trainers. But the problem is most of them are just plausible post-rationalizations.
They don't actually reflect the genuine drives of behavior.
So if as a business you are relying on what I would call claims data, be very careful.
A lot of those statements will not be true reflections of what genuinely influences people.
And what you should do instead is what most psychologists do, which is set up these simple
test and control experiments. Don't ask people; create an A versus B study, which flushes out some
of these drivers of behavior. Lastly, one thing you write about that I found particularly surprising
was the pratfall effect.
So explain the pratfall effect.
Okay, so this is probably my favorite ever study,
and it was conducted by Elliot Aronson, a professor at Harvard.
He gets a colleague to take part in a quiz,
and he gives the colleague all the answers.
So the guy gets 92% of the questions right, looks like an absolute genius, wins the quiz by miles.
But then at the end of the quiz, he spills a cup of coffee down himself.
Aronson has recorded all of this and he takes the recording and he plays it to listeners.
But he splits the listeners into two groups.
One group hear the entire incident, great performance and spillage. The other group
hear an edited version, so they only hear the amazing quiz performance.
Aronson then questions everyone as to how appealing is the contestant. And he sees a
significant difference. The group who heard the spillage and great performance rate the contestant about 40, 45 percent more appealing than the group who just heard the amazing quiz performance.
Aronson calls this the pratfall effect, this idea that we prefer people or products who exhibit a flaw.
Now, that idea, that is an amazingly powerful tactic for a business or
a brand. If you think about some of the greatest ever ads, Avis, we're number two so we try
harder. Guinness, good things come to those who wait. VW, ugly is only skin deep. Listerine,
the taste you hate twice a day. Again and again, some of the best brands ever draw attention to a flaw. Now, why this is
so successful is, firstly, most brands brag, so if you tell people about a flaw
you're being distinctive and therefore you're memorable; we know that
distinctiveness leads to memorability. Secondly, in any persuasion situation,
commercial or face-to-face most people are a little bit cynical about what they're told
because they think the communicator has a vested interest to spin the truth.
The brilliant thing about admitting a flaw
is you have plausibly demonstrated your honesty
and therefore any other claim you make afterwards is that bit more believable.
So you get around this trust gap problem.
And then the third and final reason is, in many cultures, certainly the case in Britain and America, flaws often have
a mirror strength. So if you're Guinness and you go out and say we're slow, well, people assume if
you're slow, well, most times, most situations, things that take ages to make, well,
they're normally higher quality. People assume slowness is associated with high quality. Or if
you're Listerine and you say you taste awful, people assume if it tastes bad it must be pretty potent.
So the best brands apply the pratfall effect very selectively. They spend an awful lot of time
thinking what their core strength is
and then thinking, is there a mirror weakness I could admit
that might emphasize that strength?
Well, I find this so fascinating that we can be influenced
by so many different things, subtle and maybe not so subtle,
because as we said in the beginning,
it seems as if we're making our own choices,
and to some degree we are, but seeing how these things influence us in many ways under the radar is surprising.
I've been talking to Richard Shotton.
He is a behavioral scientist, and the name of his book is The Illusion of Choice,
16 1⁄2 Psychological Biases That Influence What We Buy.
And if you're interested in reading that book, there's a link to it at Amazon in the show notes.
Appreciate it. Thanks for being here, Richard.
You're welcome, Mike. Enjoyed the chat.
This is an ad for better help.
Welcome to the world.
Please read your personal owner's manual thoroughly.
In it, you'll find simple instructions for how to interact with your fellow human beings and how to find happiness and peace of mind. Thank you and have a nice life.
Unfortunately, life doesn't come with an Owner's Manual. That's why there's BetterHelp Online
Therapy. Connect with a credentialed therapist by phone, video, or online chat. Visit betterhelp.com
to learn more. That's BetterHelp.com.
Since I host a podcast, it's pretty common for me to be asked to recommend a podcast.
And I tell people, if you like something you should know, you're going to like The Jordan Harbinger Show.
Every episode is a conversation with a fascinating guest.
Of course, a lot of podcasts are conversations with guests, but Jordan does it better than most. Recently, he had a fascinating conversation with a British woman who was recruited and radicalized by ISIS and went to prison for three years. She now works
to raise awareness on this issue. It's a great conversation. And he spoke with Dr. Sarah Hill
about how taking birth control not only prevents pregnancy,
it can influence a woman's partner preferences, career choices,
and overall behavior due to the hormonal changes it causes.
Apple named The Jordan Harbinger Show one of the best podcasts a few years back,
and in a nutshell, the show is aimed at making you a better, more informed critical thinker.
Check out The Jordan Harbinger Show.
There's so much for you in this podcast.
The Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you get your podcasts.
When I say the word data, what do you think of? Years ago, before computer algorithms and AI, data, as I recall, just meant information.
In school, if you had a paper to write, you had to research and find data to support your argument.
And that data was often found in books in the library.
Data was information.
But those two words, data and information, sure don't mean
the same thing anymore. Someone with data is much better armed than someone who merely has
information. Data is factual, hard to argue with, because it's data. But wait, hold on a minute.
Let's take a closer look at data and how it came to be what it has become.
And joining me is Chris Wiggins. He's an associate professor of applied mathematics at Columbia University and the New York Times chief data scientist.
He's co-author of a book called How Data Happened, A History from the Age of Reason to the Age of Algorithms.
Hey, Chris, welcome to Something You Should Know.
Thanks. Thanks very much for having me.
So what is data? What does that word mean to you?
Yeah, well, my relationship with the word, I would say, has changed quite a bit over the years.
You're right. When people hear data today, they'll think about computers and algorithms. But in the sciences and also sort of in popular press, data comes with this sort of aura of objectivity and truthiness.
And that's part of, you know, what I've spent a lot of time the last couple of decades trying to work through is like, how did it get that way?
How did it come to pass that when somebody says they have numbers about a thing, that makes it somehow more true than other ways of knowing the world?
Yeah.
And did you just say truthiness?
I don't know if that's a real word, but I love that word.
Don't fight it.
I think, you know, the idea of truthiness is, you know, like you read something in the newspaper, or you read something in the palm of your hand
on some newsfeed, and all of those things
come sort of with a different credibility to them, right?
And the sort of, I guess the medium is the message,
as they say, right?
The sort of packaging of the thing comes
with your own expectations
of how much you're gonna believe it,
but also the rhetoric of the thing.
You know, when I tell you most people believe this, that's got a
different rhetoric from when I tell you 87% of the people believe that. And I put that number on it,
and it just turns it up a notch in terms of how much you're supposed to believe it.
And as a scientist, I come from a tradition where that's part of what we're supposed to be aiming
for, which is objectivity. But with numbers in particular, you know, how do they get that way? And how does it come to pass? What sort of subjective design choices are
actually hidden when somebody quantifies something for you? That's part of what I've been really
looking into the last couple of years. Is there a sense when this concept of data, of gathering
information to create data, when that caught on? It's a good question. So, I mean, people have
been using numbers to try to make sense of things for a long time, but we had to choose a starting
point somewhere around the 18th century, the early 19th century. First of all, when the word
statistics enters the English language, right away in the early 19th century, you start seeing people
push back on the idea that there should be numbers in statistics. And that, I think we still see to this present day.
So when Amazon started selling books, they started hiring editors to write reviews of books.
And then eventually the real numerate people said,
well, no, we don't need to hire people who are good at reviewing books.
We can just use the data from other people's reviews. Or when Pandora was a company that people were using for trying to listen to new songs,
part of the way they did it was to hire people who were musical experts. And then eventually,
now we use companies like Spotify that have massive machine learning departments
that are using people's engagement with data to recommend new songs. And when I hear that, the idea of hiring experts
to write reviews of books, for example, versus taking what actual real people think and aggregating
that, that seems more accurate. Because, I mean, how many times have you gone to the movies
and, you know, the movie critics have panned a movie and you really like it or the movie critics love a movie and you really hate it?
So expert reviews in that case don't seem to hold up to me as well as data, the data of gathering lots of people's opinions and running with that.
Nonetheless, there were plenty of people who
were pushing back against it at the time. It also happens in politics. So for example,
one of the examples we talk about is when Obama's re-election campaign of 2012 started advertising
that they were going to be hiring for statisticians. And you could read writings by people
dismissing that as, well, you know, that's politics as done by Martians,
right? That's not real politics. And in fact, in the early 19th century, when people started using
numbers in the field of statistics, it was dismissed as vulgar statistics, as opposed to
the high statistics, which was qualitative statistics and understanding countries in
terms of the greatness of kings and the wisdom of the people running the country.
So you see those sorts of fights all the time when people push back on numbers.
For example, I spent a lot of my time at the New York Times as chief data scientist there.
That is a place, the New York Times, where there's a particular craft, like the craft
of journalism and the craft of being an editor, where it's not really clear whether machine
learning is going to be able
to augment that craft. Now, there are other things at the New York Times where machine learning is
very, very useful, particularly around the business of trying to decide on where to put
the paywall or other innovations on the business side. But there are certain things in life where
it's not really clear that success can even be quantified yet, let alone optimized. So when the New York Times comes out
every day, where the stories go in the paper that gets thrown on somebody's driveway is
very much determined by people. And they're
very good at what they do, right? That's another thing about certain places is, you know, they have people who are really good at whatever that craft is, let's say, being an editor or deciding how to allocate those stories. And there are many ways that you can use machine learning, but that's probably not the way that you want to use machine learning if you already have people who are some of the best people in the world at that particular art. So when you say that, you know, people pushed back against Spotify and Amazon doing what they do and
other companies that do the same thing, but people push back on everything that's new. And there's
always that kind of settling down before things become accepted. But there's always that push
back, the give and take of whether this is a good idea. So why is this different than anything else?
Absolutely.
Well, it comes back to what you were saying earlier, or maybe I said it, that data comes with this aura of objectivity and truthfulness.
It's even there in the word.
If you think about the word data, which comes from a word that means given, it's like somebody just gave you this fact, right?
You're not really supposed to interrogate this fact or where it came from.
Just somebody gave it to you.
So technology, you're right, absolutely.
Technology always comes with some pushback.
But with data, there's an extra form of,
an extra power to having data behind your argument.
It gives this, like I said, this sort of extra truthiness.
Part of what we want to get at is the relationship between data and truth and data and power. Part of that power comes from the fact that when you
have data of something, it gives you extra ability to say that something is true. But part of it,
particularly in present day, is when data is really powering a computational algorithm,
we see all sorts of ways in which people who have access to data are able to do things that people
who don't have access to that data cannot do.
For example?
Well, in present day, I'll tell you the thing that everybody seems to be excited about since last December is large language models, an example of which is ChatGPT.
So a lot of the news media the last month or two has been about the power of computers that generate human-sounding
text or, in addition, images that look like they were created by humans.
So that's been very exciting to a lot of people in the media lately. And there's a lot of companies
that are thinking continuously, how are we going to leverage this new exciting technology of
computers that can generate text that sounds like it was generated by a human being.
So that's a novel use of data that I think is capturing people's imagination now.
And I have to say, it goes back to one of the original uses of the phrase artificial intelligence, right? In the 1950s, when people posited or coined the term artificial intelligence,
one of the first things they were talking about is, well, could there come a day when computers are able to generate words
in a way that sounds like human beings?
And we're here.
It's happening right now, and people are thinking a lot about
what are going to be the implications of computers that,
when they've been trained on lots and lots of data,
can generate new text that sounds just like a human being.
Besides artificial intelligence,
and there have been a lot of concerns voiced lately about that, but besides that, what are the other concerns you have about data?
Another concern, and this has been true, I would say, maybe for the last five to ten years, is people realizing the dangers of having algorithms make decisions that are really important to us, say, as a society. So for example, there's been
a lot of great writing about the risks of having judges look at the outputs of an algorithm to
inform their decisions, say, whether or not to give somebody bail or whether or not to make other
judicial decisions. If a judge is armed with the output of an algorithm that informs those decisions, how might that create more bias in our legal system?
For example, if those algorithms are themselves trained on previous data, we don't know how the biases of people that have made those decisions in the past might be inherited by the algorithmic output, which is then informing future decisions. So over the last decade, there's been a lot of concern about the way that using data and using numbers can obscure the
possibility of extremely biased technologies, which then have reinforcing bias and outputs
in ways that are not what we're aiming for, for our society. Because data, in order to
accumulate data, you have to look to the past, right?
It does. But also, once I embed that in some sort of algorithm, it makes it difficult for people to critique it.
It makes it difficult for people to say, okay, well, I'd like to understand why that algorithm did what it did, or what data it was trained on, or any number of other questions that you'd like to ask.
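The bias-inheritance mechanism described here can be sketched as a toy simulation. Everything in it is invented for illustration: two made-up groups, made-up historical approval rates, and a naive "model" that simply memorizes the per-group rate from past decisions:

```python
import random

# Toy illustration of bias inheritance: an "algorithm" trained on
# past human decisions reproduces whatever bias those decisions
# contained. Groups "A"/"B" and all rates are invented.

random.seed(0)

# Historical decisions: suppose past decision-makers approved
# group "A" 80% of the time but group "B" only 40% of the time,
# for otherwise identical cases.
history = [("A", random.random() < 0.8) for _ in range(1000)] + \
          [("B", random.random() < 0.4) for _ in range(1000)]

def train(data):
    """Learn the historical approval rate per group (a naive 'policy')."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [ok for g, ok in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
print(model)  # the learned policy mirrors the historical bias
```

The point of the sketch is the one made above: nothing in the training step can distinguish "how the world should be" from "how past decision-makers behaved," and once those rates sit inside a deployed system they are much harder to interrogate than a person's reasoning would be.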
It becomes, as they say, a black box,
difficult to question. In your example of a judge using an algorithm to
come up with a decision, well, he's going to come up with a decision with or without
the algorithm. That's his job. So what harm would it be to have another tool to help him make his
decision? You're absolutely right that in any case where an
algorithm is being used, say, instead of a human, using an algorithm that has bias doesn't mean that
there wasn't already bias in the human. I think part of what is dangerous about using automated
decision systems for really high stakes problems like this is the lack of an ability to query how is the algorithm performing?
Do we understand it? Do we understand its limits? Do we understand its biases? For example,
is the algorithm more accurate on certain groups than the algorithm is on other groups?
It's very difficult to query that. But you're absolutely right that the same questions should
be asked of the humans who are executing these decisions
in the absence of algorithms. So explain what you're trying to do, what your hope is, because
data is data, an algorithm is an algorithm, and it may be flawed, there may be biases in it.
What do you hope to change? Part of what we're trying to argue for is that just the presence
of an algorithm alone, a computational algorithm, doesn't necessarily mean that it's better or that it's true.
So part of what we want to do is we want to help people understand where these algorithms come from, how do they get developed, what are the subjective design choices that go into the algorithmic development,
and in cases where people feel like there is an algorithm that is being used in ways that are not in our best interest, what are the powers that are available to all of us for pushing back, for querying against the use of an automated decision system in some sort of high-stakes scenario?
But who's to determine whether it's in our best interest?
Who makes that judgment?
An excellent question.
So, for example,
let's say an example from the popular press. So Amazon several years ago used a hiring algorithm
or developed its own hiring algorithm. And the idea was that Amazon would train this algorithm
on previous hiring decisions. And it was realized after auditing the algorithm for a while, once it was live, that it was biased and was rejecting women candidates. So that's a case where certainly if you're a woman candidate
applying to Amazon, that's not in your best interest. But also, at some point, Amazon
realized it was not in their best interest and they weren't creating the workforce that they
wanted to create. As far as who pushes back on an algorithm, that can be very different depending on whether we're talking about state, corporate, or people power.
So is it the case that a state, like the US government, has as its job regulating the
way a particular company functions?
Is it the case where a company, say Apple, doesn't want certain apps
on the App Store, and needs to audit the performance of another
company's algorithm in order to figure out whether or not it should be included?
Or is it the case where individuals are interested in pushing back on state control, where state is
using an algorithm, or on individual companies where they may not want to work for that company,
they may not want to give their money to that company? All of those things are examples of
centers of power where different people are investigating the use of algorithms.
As you look back at the history of data and algorithms and how this all developed,
what's another pivotal story in that development?
Another story, I would say, was the way that people started trying to make sense
of artificial intelligence itself.
So in the 1950s all the way through the 1980s, it was pretty clear to all the people who were leading the field of artificial
intelligence that it wasn't a data problem at all, that artificial
intelligence was going to be solved by understanding the way real human beings
solved really hard problems like proving theorems or working within a company. And
once we could just figure out what those rules are and program those rules into a
computer, then we would be able to have a computer execute all of those functions at the level of a human being.
It's been an exciting story to see how, let's say, 1980s through present day, and particularly in the last 20 years, people have realized that artificial intelligence should be solved using data, which today seems obvious. In fact, we use the terms artificial intelligence and statistics and big data almost interchangeably. But for the first half of the life of artificial
intelligence, people were pretty convinced that it wasn't a data problem at all. It was actually
really about understanding rules and human behaviors and then making a computer perform that.
So it's not exactly a disaster story, but it's a story that gives some sort of clarity to why we have these separate phrases for these separate things.
Machine learning is a term of art, mostly from academia, which has now become part of industry.
Artificial intelligence is a separate term. Statistics is a separate term.
Each one of these terms has its own history and its own communities.
And one of the things we wanted to make clear is how did it come to pass that we have these different words for things that seem to be very similar today. But if you look over
the last decades or even centuries, you'll see that it was ideas that were being created by
very different communities with very different interests. Well, it does seem that statistics
and algorithms, those two words, really have kind of meshed together. In many people's minds, algorithms are statistics, or at least statistics are
the ingredients of the recipe: you take statistics, you put them in the mixer, and
somehow you come out with an algorithm.
That's the way a lot of people treat it today.
Most of our lives today, when we're experiencing an algorithm, it is an algorithm that's informed by some sort of statistics.
But an algorithm itself could be anything, like the algorithm for tying your shoes or for any other sort of process.
Moreover, artificial intelligence, for most of the life of the phrase artificial intelligence, had nothing to do with data, even chatbots.
Some of the most fun chatbots from the 1970s were really just a handful of rules. They weren't informed by
data at all. So trying to understand why those terms are related to each other, even though we
do use them interchangeably today, really benefits from a historical look. For example, as I was
saying earlier, the idea that statistics, when it first entered our language, had nothing to do with data whatsoever, let alone with algorithms.
All of these ideas were created by different communities of people with different goals,
different aspirations, and different ideas about what was the right method for getting
to those aspirations.
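The rule-based chatbots mentioned above can be sketched in the spirit of the 1970s systems: just a handful of pattern rules, no data and no statistics involved. The specific rules below are invented for illustration, not taken from any historical program:

```python
import re

# A minimal ELIZA-style, rule-based chatbot: a few hand-written
# pattern/response rules, applied in order. No training data anywhere.

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def respond(message):
    """Return the response for the first rule that matches, else a default."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel tired today"))   # -> "Why do you feel tired today?"
print(respond("Nice weather"))         # -> "Please, go on."
```

Everything the program "knows" is visible in its rule table, which is exactly the contrast with today's data-driven systems: there is nothing here to train, and nothing statistical to audit.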
I think for most of us, I mean, we know about algorithms in the sense that, you know, Google uses algorithms to give us results or Amazon uses an algorithm to give us ideas of what we might want to buy.
And I think we have a sense of that, but it's kind of beyond my reach.
I know it's there.
And so and it must work.
Right.
I mean, these algorithms must work because we use them.
Companies use them. Governments use them.
They must work or we wouldn't use them.
So what's the big so what here?
Part of it is when people choose whether or not to use, say, an algorithm that feeds them the news, for example.
So when you're using an app and the app is recommending to you different movies, that doesn't seem particularly malignant or benign.
It's simply recommending movies to you.
And it works well if you find movies that you like.
When algorithms are recommending to you, let's say, which news to consume or maybe which health care providers to use, those are decisions with more consequence.
And I think it's useful for people to understand what are the forces at play? How did those algorithms come to be? What data are they trained on? What are the subjective design choices? What
are the interests of the private companies that are shaping these algorithms or designing these
products? And it gives people the opportunity to think through it and to choose
whether or not they want to use one algorithm or a different algorithm or no algorithm or a product
that doesn't use algorithms at all. Part of, I think, what people benefit from is an understanding
that the algorithms, yes, they just seem to be there and they seem to work, but I think it benefits
people not to just assume that algorithms need to be part of our lives, but to think about what are the moments
in our lives where an algorithm is recommending to us a certain thing? And is that recommendation
in our interest or in the interest of a private company? Is it optimizing for what we want? Is
it optimizing for what the company wants, for example? Well, it's a lot to think about, because as we said in the beginning, data and algorithms
have this sense of objectivity, of, as you call it, truthiness.
It's hard to argue with.
But in fact, there are arguments to make about data that make us take a closer look, and
I think it's important we do that.
I've been talking to Chris Wiggins.
He's an associate professor of applied mathematics at Columbia University
and the New York Times chief data scientist.
The name of his book is How Data Happened: A History from the Age of Reason to the Age of Algorithms.
And there is a link to that book in the show notes.
Appreciate it. Thanks, Chris.
Likewise, likewise. Thanks so much for having me.
Have you ever been eating potato chips and you reach into the bag and you pull out a green one,
or it has green edges around it? Is it okay to eat? Yeah, probably so. If you've heard that
green potatoes have turned poisonous, that's true, but you'd have to eat about two pounds of whole green potatoes in order to feel
the effects of that poison. The toxic solanine in green potatoes is mostly near the surface of the
skin, and once peeled and processed, the fraction of that toxin that remains is not really enough
to do us any harm. If the potato chip in question is really dark brown, that one you might want to skip.
The sugar levels and the amino acids in that chip are off, and so will be the taste.
And that is something you should know.
I know that you know someone who would love listening to this podcast,
so please do us a favor and share this with a friend.
I'm Mike Carruthers. Thanks for listening today to Something You Should Know.
Hey, hey, are you ready for some real talk
and some fantastic laughs?
Join me, Megan Rinks.
And me, Melissa D. Montz, for Don't Blame Me,
But Am I Wrong?
We're serving up four hilarious shows every week
designed to entertain and engage
and, you know, possibly enrage you.
In Don't Blame Me, we dive deep into listeners' questions,
offering advice
that's funny, relatable, and real. Whether you're dealing with relationship drama or you just need
a friend's perspective, we've got you. Then switch gears with But Am I Wrong?, which is for listeners
who didn't take our advice and want to know if they are the villains in the situation. Plus,
we share our hot takes on current events and present situations where we might even be wrong
in our lives. Spoiler
alert, we are actually quite literally never wrong. But wait, there's more. Check out See You Next
Tuesday, where we reveal the juicy results from our listener polls from But Am I Wrong? And don't
miss Fisting Friday, where we catch up, chat about pop culture, TV and movies. It's the perfect way
to kick off your weekend. So if you're looking for a podcast that feels like a chat with your besties, listen to Don't Blame Me, But Am I Wrong on Apple Podcasts, Spotify, or wherever you get
your podcasts. New episodes every Monday, Tuesday, Thursday, and Friday.
Contained herein are the heresies of Rudolf Puntwine, erstwhile monk turned traveling medical investigator.
Join me as I study the secrets of the divine plagues and uncover the blasphemous truth
that ours is not a loving God, and we are not its favored children.
The Heresies of Rudolf Puntwine, wherever podcasts are available.