Something You Should Know - How to Change Anyone’s Mind & Could Machines Really Take Over the World?
Episode Date: March 19, 2020. You have probably been eating peanut butter since you were a kid. And that turns out to be a really good thing. This episode begins with a look at the amazing and little known health benefits of eating peanut butter – as long as it is the right kind of peanut butter. https://www.medicalnewstoday.com/articles/323781#health-benefits Changing someone’s mind is difficult if not impossible - or so it seems. However, minds do change so clearly it can be done. Jonah Berger joins me to explain how. Jonah is a marketing professor at the Wharton School at the University of Pennsylvania and his latest book is called The Catalyst: How to Change Anyone’s Mind (https://amzn.to/33hpVJE) . Listen as he explains the fascinating research on how to get people to agree with you. The experts are saying that one of the ways to prevent the spread of coronavirus is to NOT touch your face. Good luck with that! Listen as I explain why trying to not touch your face is almost certainly going to make you touch it even more. https://www.wired.com/story/cant-stop-touching-your-face-science-has-some-theories-why/ Could machines really get so smart they could take over the world – or is that just in the movies? Some scientists have expressed real concern that we could create machines that actually become self-aware and could in fact become smarter than we are. Joining me to discuss whether that is a real possibility or just science fiction is John Markoff, a science writer for the New York Times and author of the book Machines of Loving Grace. (http://amzn.to/2j55XgN) Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
As a listener to Something You Should Know, I can only assume that you are someone who likes to learn about new and interesting things
and bring more knowledge to work for you in your everyday life.
I mean, that's kind of what Something You Should Know is all about.
And so I want to invite you to listen to another podcast called TED Talks Daily.
Now, you know about TED Talks, right? Many of the guests on Something You Should Know have done TED Talks.
Well, you see, TED Talks Daily is a podcast that brings you a new TED Talk
every weekday in less than 15 minutes.
Join host Elise Hu.
She goes beyond the headlines so you can hear about the big ideas shaping our future.
Learn about things like sustainable fashion,
embracing your entrepreneurial spirit, the future of robotics, and so much more. Like I said,
if you like this podcast, Something You Should Know, I'm pretty sure you're going to like
TED Talks Daily. And you get TED Talks Daily wherever you get your podcasts.
Today on Something You Should Know, whether you're in a meeting or talking to your spouse and they ask what you want to do this weekend and you say, let's go to a movie.
When you give people one option, they think about all the reasons they don't like that
option.
And so what smart people do, they don't give people just one option.
They give people at least two.
Also, you know not to touch your face to stop the spread of germs.
But knowing it doesn't do you any good.
And like it or not, machines are getting smarter and becoming a bigger part of our lives.
And ordinary devices, whether it's our television or our lampshade or what have you,
will talk to us and they'll listen to us.
And we'll think of that, you know, as Star Trek normal.
I believe that that's going to happen.
I mean, it is happening.
All this today on Something You Should Know.
Since I host a podcast, it's pretty common for me to be asked to recommend a podcast.
And I tell people, if you like Something You Should Know, you're going to like The Jordan Harbinger Show.
Every episode is a conversation with a fascinating guest.
Of course, a lot of podcasts are conversations with guests, but Jordan does it better than most.
Recently, he had a fascinating conversation with a British woman who was recruited and radicalized by ISIS
and went to prison for three years.
She now works to raise awareness on this issue. It's a great conversation.
And he spoke with Dr. Sarah Hill about how taking birth control not only prevents pregnancy, it can
influence a woman's partner preferences, career choices, and overall behavior due to the hormonal
changes it causes. Apple named The Jordan Harbinger Show one of the best podcasts a few years back,
and in a nutshell, the show is aimed at making you a better, more informed, critical thinker.
Check out The Jordan Harbinger Show.
There's so much for you in this podcast.
The Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you get your podcasts.
Something you should know.
Fascinating intel.
The world's top experts.
And practical advice you can use in your life.
Today, Something You Should Know with Mike Carruthers.
Hi there. Welcome to Something You Should Know.
I've been asked a couple of times if we're going to talk about coronavirus on this podcast or why haven't we?
And we're probably not, and there are really two reasons.
First of all, it's not like it's hard to get information.
It's everywhere. There are a whole bunch of podcasts about nothing but the coronavirus.
So we're kind of a haven away from that. And secondly, because we produce this podcast a
couple of days before it publishes, and because news about the coronavirus and what's going on
changes so quickly, I don't want to have out-of-date information.
So that's why I figured it's best that we just stay away from it.
First up today, I want to talk about peanut butter.
Yeah, peanut butter turns out to be a very healthy food.
Eating it will do wonderful things for you.
For example, it lowers your risk of diabetes. One study found that consuming one ounce of peanut butter per day
can lower the risk of diabetes by almost 30%.
It makes you feel full.
Peanut butter's monounsaturated fat and protein
can prevent you from overeating and help you lose weight.
It can lower your stress level.
Peanut butter contains a compound that can regulate
stress hormones. If you eat peanut butter while you're pregnant, you may help prevent nut allergies
in your child. You'll also burn off fat. There is something in peanut butter that reduces your
body's ability to store fat. Just remember to buy peanut butter that has peanuts as its only ingredient, with maybe some salt added.
But avoid peanut butters that are labeled light or low-fat.
They almost always have added sugar.
And that is something you should know.
How do you change someone's mind?
Well, usually you don't.
When was the last time you had a political debate and changed the other person's mind?
Or they changed yours?
Or when did you last convince someone to do something they really didn't want to do?
Changing people's minds is hard, often seemingly impossible.
Yet it does happen sometimes.
Sometimes a company can convince you to try their product
instead of the one you've always used.
So it does happen.
And when it does happen, how does it happen?
How did that company get you to try that new thing?
Jonah Berger is a marketing professor at the Wharton School
at the University of Pennsylvania,
and he's really dug into the research on this for his latest book called The Catalyst,
How to Change Anyone's Mind.
Hi, Jonah. Welcome.
Thanks so much for having me.
So why, in a nutshell, why is it so hard to change someone's mind?
You know, I think we have this notion, if we think about a chair, for example,
if we push a chair, chair goes in a certain direction. And we think people are the same way.
If I just give them more facts, more figures, more reasons, if I just tell them more about why
I think they should do what I want them to do, they'll come around. But unfortunately,
people aren't like chairs. When we push chairs, chairs go in the direction we want them to. When we push people, they often go in the exact opposite direction.
They don't just go along, they push back. And so rather than saying, well, how could I get
someone to change? We need to ask a slightly different question. Why hasn't that person
changed already? What are those barriers? What are the things preventing them from changing?
And how can I mitigate them making change much more likely as a result?
And probably, I would imagine, there are times when no matter what, that's not going to work.
That people hold very fundamental beliefs about a lot of things in life that no one's going to change. Yes?
That's an interesting question. And that was sort of a
journey for me in writing this book. You know, I started with salespeople changing clients' minds,
leaders transforming organizations. And people gave me some feedback like you did. They said,
yeah, of course, you know, that'll work. But, you know, Democrats don't become Republicans,
right? Or, you know, it's really impossible to change people's minds about prejudice, right? Or it's really
impossible to get someone who used to be a member of the KKK to renounce the KKK, right?
And so I ended up doing a lot of really unusual interviews for me. I talked to hostage negotiators
who figured out how to get people to come out with their hands up. I talked to substance abuse
counselors who get people to seek help. I talked to a rabbi slash cantor who got someone to renounce the KKK.
I talk to people who switch political parties.
And you're right.
It's not easy.
Not all change is easy.
And not all change is quick.
But I think often any change is possible if we give it enough time and we understand enough
about why that person hasn't changed.
Because so often we're focused on ourselves, right?
Take politics, for example.
We want people to switch to our side,
but we don't take enough time to understand,
well, why haven't they done that?
And if we take the time to understand why,
often we can figure out a way
to get them to at least come some,
if not most of the way.
Well, I think of times that I've changed my mind
or I've changed my position about,
you know, various political things over the years.
And it's not because somebody changed my mind. No one was lobbying. No one was deliberately trying to change it. I changed it because I took the time to change it, not because somebody prodded or pushed me to change it.
Yeah. I was talking to someone who I think said what you said, very similar to the
way you said it. They said, it's not about selling. It's about getting people to buy in.
And I think that's exactly right. One thing I talk a lot about in the book is the idea of reactance.
People like to feel like they're in control or they're in charge. And when we try to push them
or prod them, we take that ability away. Suddenly now, they're not in control. We're in control. And of course, no one wants to do what we want them to do. So they often push
back. And so the question then is, how can we give them back some of that sense of control?
How can we allow for autonomy and really allow them to persuade themselves?
So tell the story about the Tide detergent pods and what happened and how it relates to this topic.
So people have this sort of anti-persuasion radar, or almost like this spidey sense. And I think
there's no clearer example of it than with Tide Pods. And so some of your listeners may know of
Tide Pods. They may use Tide Pods. They're very, very popular. They're what you stick in your laundry to
do laundry; it makes it faster and easier. You don't have to measure all these things. But a few years ago, there was a problem. It was a simple problem, though an unusual one,
which is people were eating them. And if you're sitting there going, people are eating Tide Pods,
what are you talking about? Well, they were. It was called the Tide Pod Challenge. Young people
were essentially challenging each other to eat Tide Pods. So there was a funny video and a funny
article. And then suddenly, lo and behold, kids online were challenging one another to eat these Tide Pods.
And so imagine you're Procter & Gamble in this situation, right?
You're sitting there going, well, who would eat chemicals to begin with?
We shouldn't need to tell anyone anything.
But just to be safe, right, they issued an announcement saying, you know, don't eat Tide Pods.
And in case that wasn't enough, they hired a couple celebrities to post some videos on social media saying, don't eat Tide Pods. They thought that
would be the end of it. And that's exactly when all hell broke loose. So, you know, searches for
Tide Pods jumped up by over 400%. Visits to Poison Control went up as well. Said very simply,
a warning became a recommendation. Telling people not to do something actually made them more likely to do it.
And this is true in a variety of different domains.
In juries, telling people certain testimony is inadmissible often makes them pay more attention to it.
Telling kids not to do something makes them more likely to do it.
Same in the political sphere.
But the opposite is also true.
Asking people to do something often has the same backfire effect.
Because again, when you tell people to do something, now they're not in control.
Now they're not the one making the choice.
You are.
And if they feel like you're in control, they don't want that to happen and they push back.
This anti-persuasion radar is super powerful.
We ignore sales calls.
We avoid emails that are trying to push us to do one thing or another.
But the most damaging is counter-arguing, right?
We may be presenting in a meeting.
Everyone's listening.
They're shaking their heads yes, but really what they're doing is sitting there thinking
about all the reasons why what we're suggesting is wrong.
They might seem like they're listening, but they're not, right?
And so that's the most damaging part, right?
They've got that anti-persuasion radar up, and if we just push them, it's not going to work.
And yet, some people are able to persuade.
So what is it they do differently that gets by that radar or goes under it or over it or whatever and gets people to do what they want?
Yeah.
So I talk about a few tips.
And one that I love is called providing a menu.
So imagine you're in that meeting, right? You're presenting something to an audience. Everyone's shaking their head yes,
and they're sitting there thinking about all the reasons why what you're suggesting is a bad idea
and why it costs so much and all those different things. What you need to do is shift their role.
When you give people one option, whether you're in a meeting or talking to your spouse and they
ask what you want to do this weekend and you say, let's go to a movie. When you give people one
option, they think about all the reasons they don't like that option. Oh, we went
to a movie last week. Oh, it's such a nice weekend. Let's do something else. Oh, your plan is too
expensive. And so what smart people do, whether they're presenting to an audience or trying to
convince a spouse, they don't give people just one option. They give people at least two. They
give them multiple options. They provide, in a sense, a menu. And what that does is it subtly
shifts the role of the listener.
Because now, rather than sitting there thinking about all the reasons why they don't like
what you're suggesting, instead they're making comparisons.
Which of these two options do I like better?
Which one of these is a better fit to me, which is going to make them much more likely
to go along at the end of the day?
You're not giving them 50 options.
You're not giving them 75.
But you're giving them a small set of guided choices, a small set of options that allows them to feel like they
have some volitional choice, but you're guiding that journey to encourage them to go in the
direction that you want. But I've also heard, especially in the world of advertising and
marketing, that if you give people lots of options, or if you give people more than one or two options,
they're more likely to do nothing. And so that's, again, why I'd say it's not an infinite number of
choices, right? We're not giving people 75 options. We give them two, three, maybe even four,
a limited choice set. You're certainly right. There's work on too much choice,
saying if I give you 25 different options, you're going to sit there, your spouse is going to go,
I don't want to do any of them. No, thanks. It's overwhelming. Indeed, there's lots of research
saying too many options is bad, but some options, at least some aspect of choice is a great way to
make people feel like they're in control. Another way I talk about is asking rather than telling.
Rather than telling people what you want them to do, asking them some questions. Asking them
questions that again guide that journey.
I was talking to a leader of an organization that wanted people to work harder.
He wanted them to stay after work.
It was a startup.
He wanted to put more hours in.
Now, of course, when the boss tells you to put more hours in, you say, no thanks, even
if that was something you might have done in the first place.
So instead what he did is he called a meeting and he said, hey, what type of organization
do we want to be?
And you know what people answer when you say, do you want to be a good organization or a great organization? No one says, oh, we want to be a good organization. They all say, we want to be a great organization.
Then he said, okay, what do we need to do to get there? People started throwing out different solutions, different ideas. Some of them were, oh, we need to work longer hours. We need to do different things.
And then later when he raises those solutions back to people, well, now it's much harder for them not to go along because they came up with the idea in the first place, right? Allowing for autonomy, as you nicely said, sort of getting them to persuade themselves.
If they're participating, they're committing to that conclusion. If they said, oh, we need to put
in longer hours, well, then they're much less likely later when you say, okay, well, you guys
said this, so we need to do it. They're much more likely to go along and less likely to push back. We're talking about how to change somebody's mind,
and my guest is Jonah Berger. He is a marketing professor at the Wharton School at the University
of Pennsylvania and author of the book, The Catalyst, How to Change Anyone's Mind.
Contained herein are the heresies of Redolph Bantwine, erstwhile monk turned
traveling medical investigator. Join me as I study the secrets of the divine plagues
and uncover the blasphemous truth that ours is not a loving God and we are not its favored children.
The heresies of Redolph Bantwine,
wherever podcasts are available.
People who listen to Something You Should Know are curious about the world, looking to hear new ideas and perspectives. So I want to tell you about a podcast that is full of new ideas and perspectives, and one I've started
listening to called Intelligence Squared. It's the podcast where great minds meet. Listen in for some
great talks on science, tech, politics, creativity, wellness, and a lot more. A couple of recent
examples, Mustafa Suleyman, the CEO of Microsoft AI, discussing the future of technology.
That's pretty cool.
And writer, podcaster, and filmmaker Jon Ronson, discussing the rise of conspiracies and culture wars.
Intelligence Squared is the kind of podcast that gets you thinking a little more openly about the important conversations going on today.
Being curious, you're probably just the type of person
Intelligence Squared is meant for.
Check out Intelligence Squared wherever you get your podcasts.
So, Jonah, it would seem that a lot of effort
in getting people to change their mind
would seem like a total waste of time, like political advertising.
Is an ad on TV or on the radio or in a podcast
really going to change somebody's mind to vote differently?
It seems like a long shot.
It seems like somebody would have to be very vulnerable
or very on the fence to go, oh, well, I'll vote for them.
Yeah, I mean, I think part of what you're saying,
so in the book, I talk about five barriers.
We talked a little about reactance.
That's this idea that when you push people, they push back.
Then I talk about endowment, distance, uncertainty,
and corroborating evidence.
Those five together actually spell the word reduce,
which is exactly what catalysts do.
They sort of reduce barriers.
But I think the one you're talking about right now
is the idea of distance. Sometimes when we ask for so much, too much from where people are
at the moment, they say, no way, I'm not going to go along. And indeed, you're right. In political
ads, often when people try to get the other side to change their mind, they get Democrats to become
Republicans, Republicans become Democrats. It often isn't very effective. But in primaries,
getting people to switch among candidates often
actually works. A good way to think about decisions, politics in particular, decisions in
general, is almost like a football field. If you think about a football field, two end zones,
you can think about politics with Democrats on one end, Republicans on the other. If you try to get
one side to switch to the complete opposite, it's too far away. Psychologists call that area the region
of rejection. Sure, there's a region around where you are at the moment where you're willing to
consider. You're not only willing to consider your own viewpoint, but maybe viewpoints
near yours, five or 10 yards on the field in either direction from where you stand.
But a completely other side of the field, 60 yards away, probably not. But what really good
change agents do is rather than asking for so
much, instead what they do is they ask for less. In some sense, they shrink that change down into
a more manageable amount. So I was talking to a doctor who had a great, great version of this. So
she was trying to get a trucker to be healthier. This was an obese guy who was drinking three
liters of Mountain Dew a day, way overweight. And the tendency in that situation, like in politics,
is to ask for big change right away. Don't drink any soda. Great idea in theory, much harder for
people to actually operationalize, much harder for people to actually do. So what she did instead,
she didn't tell the person to quit soda completely. She said, hey, just go from three liters to two
liters a day. Now the guy grumbled. He didn't want to do it, but eventually he was able to do it.
Then when he came back, she said, okay, now go from two to one, and then from one to zero.
And eventually he's drinking no more Mountain Dew.
It took a while.
It took a few months to do.
But the guy's lost over 25, 30 pounds.
And he's been much more likely to go along with that change because she didn't just ask for less.
She asked for less and then asked for more.
Essentially what she did is she took a big change and broke it down into smaller chunks.
And so we can think about the same thing in politics.
I interviewed some people for the book that switched from Democrats to Republicans or vice versa.
It wasn't like overnight they just woke up the next morning.
They completely changed their perspective.
They moved five or ten yards at a time, but eventually over time went to the completely different other side of the field.
And so I think a really good analogy is almost thinking about stepping stones.
If you want someone to ford a really big river or stream, they might say, no, it's too far away.
I might get wet.
The water's too deep.
I might not make it.
They're not going to go for that big change.
But if you instead, you throw a couple stepping stones along the way, so they take one step,
and then they take another step, and then they take another, now it's going to feel a lot safer and they're going to be much more likely
to ford that river. And so in any change, whether it's politics, whether it's a doctor
trying to get someone to drink Mountain Dew or just get a client to go along, how can we break
that big change down into smaller, more manageable chunks, make them more actionable and make it
easier for people to at least start moving in the right direction.
Talk about uncertainty and how that works into this.
We often forget how risky change can feel. Change, anything new has some risk associated with it.
You might not love the old thing that you're doing, but at least it feels safe. New things
often feel risky and they're often uncertain. You don't know how good a new product or service is going to be.
You don't know how a new initiative is going to perform.
And if you think about it, there's always a cost to change.
You may be familiar with the term switching cost, but sometimes it's a monetary cost.
You buy a new product, it costs some money.
Sometimes it's a time or an effort cost.
You install a new software, you start a new program.
It takes some time or effort to do
that. And the problem is that the costs are often upfront and the benefits are often later. Sure,
a new program might be beneficial for the firm, but it's going to take a while for us to figure
out whether it's actually going to be better. We have to pay all those upfront costs before we get
to the potential benefits. It's something I call the cost-benefit timing gap. Costs are now and
they're certain; benefits are later, and they're uncertain. And so one question is, well, how can we reduce
that uncertainty? How can we make people feel more comfortable about doing something new,
something different from what you're doing already? And so one thing I talk about, a few
ways to reduce uncertainty. One in particular is to do what I'll call lower the barrier to trial.
And a good way to think about this is to think about a company like Dropbox, for example.
So right now, Dropbox is a billion-dollar business, a file storage company, but they
weren't always that way.
Originally, they started out as a small business.
They had a lot of trouble getting traction.
People weren't used to storing files online.
They wanted to keep them on their computer. And so how do they get people to adopt this new thing? Well, they could
say their product is good, but of course they would say their product is good. No one says their
product or service isn't good. And so one thing they dealt with was how can we get people to
convince themselves? Again, how can we get people to persuade rather than us doing the work for
them? And so they did something interesting. What they did is they gave away their product for free. They gave away their service for free. And
you might think, how can you make money giving away something for free? But they gave away two
gigabytes of storage. And what that did, very interestingly, is it allowed people to experience
the offering themselves. Rather than Dropbox saying, hey, Dropbox is great. Here's why it's
better than what you're doing already. What this did is allow people to experience it themselves. And if they liked it, right,
if they were using it, then eventually they moved through two gigabytes of storage. They
needed to upgrade to a more premium version. And so Dropbox leveraged something we know today as
freemium, a business model where you lower the barrier to trial, you get people to come in,
try something at a lower cost or a lower effort, and then work
them up to a more expensive version. But the principle behind freemium is a lot larger. You
think about test drives of a car, same idea. There's no free version and a premium version,
but a test drive allows you to experience the offering without having to pay money up front.
Think about samples in a grocery store. It does the same thing. And so the key idea of uncertainty
is really how can we make it easier for someone to experience the value of what we're suggesting? Not by telling
them it's great, but allow them to experience it themselves so they can see if it actually is great,
if it's actually going to work for them. And if they like it, they'll stick with it. And if not,
they won't. But particularly if we have a good product, a good service, a good idea,
the question is just how can we get people to experience it themselves, lower that barrier, and then they'll be more likely to come around.
You talked about how we think that we should be able to give people the facts as we see them and that they should just, oh, okay, yeah, all right, I'll agree with you now, even though they didn't before.
Giving people evidence, if that doesn't work, then what should you be giving them instead when you're actually trying to get someone to see, you know, like a political viewpoint or something?
What works? If evidence doesn't, what does?
Yeah, I mean, I think part of the reason evidence doesn't always work, and it's not just evidence,
by the way, it's the type of evidence we provide.
So if we think back to that political context we talked about, there was a great study that was done recently by a sociologist out of Duke who was trying to sort of bridge the
partisan divide.
Everyone says, oh, you know, part of the issue is just filter bubbles.
People are just caught up in their filter bubbles.
If they just talked to people on the other side, if they just knew what it was like to be a member of the other
party, they'd come around. And so he did this great study where he did exactly that. He got
Republicans and Democrats to get information from the other side on Twitter. So if you're a Republican,
you got information about Democratic views. If you're a Democrat, you got information about
Republican views, sort of bridging, reaching across the aisle, great sort of quick public policy intervention, which would hopefully have a big effect.
He analyzed the data.
He hoped it would bring people closer together.
It didn't.
It wasn't that it had no effect.
It actually had the exact opposite effect.
Democrats who got information about Republicans became more liberal, and Republicans became
even more conservative after getting information about liberals. And it goes back to that idea of distance that we talked about. Yes, it was information, but it was too far away. Instead, we need to think about where people are on that field and give them information that's just a little bit removed from where they are
at the moment, five or 10 yards in the right direction. So we move them a little bit and move
their zone of acceptance with them. So now when we give them a second appeal, they're more likely
to move in that direction further. It's not information itself that's bad. It's that
confirmation bias that we engage in when we see information that's so far from where we are that we don't want to believe it.
Well, you're right, because depending on the subject matter, what you believe to be true
is probably based a lot on your belief system, not just objective truth.
And a lot of this goes back to this idea of the confirmation bias. I tell this story of this great paper that was done
many years ago where they had both Princeton and Dartmouth students watch a football game.
Okay, so it's a Princeton game versus Dartmouth. Both sides were rough. It was a very physical
game. Lots of people got injured. At the end of the game, they asked people, hey, you know,
who started the fight? And what you find is that even though they watched exactly the same game,
everyone thinks the other side started the fight. And I think that's a great analogy for today's
political sphere, right? Where we think, look, just because you're on the other side, even though
we're looking at the same quote-unquote facts, we're really not seeing them the same way. And so
I think part of the challenge is distance, as we talked about, but also part of the challenge is
what we start with when we have these conversations.
If you start with areas where you disagree, you start with areas where you're far apart on the field, it's unlikely you're going to see eye to eye.
And so it's something called switching the field, which is really starting with areas of common ground, finding areas or places where you agree, and using that common ground to then bend around to places where you disagree.
Start by seeing that person as a human.
Start by seeing that person as someone else who has things in common with you.
And then when you get to political stuff, you might not completely agree,
but at least you're not going to dehumanize them
and you're more likely to have a real conversation.
Well, it certainly makes sense if you're going to change somebody's mind
or at least attempt to, that it's better to try to move them a little bit
at first rather than try to get them to completely do a 180 on whatever it is you're talking about
And there's so much to this; there's a lot of nuance to this. I appreciate you sharing your insight.
Jonah Berger has been my guest. He's a marketing professor at the Wharton School at the
University of Pennsylvania, and his book is called The Catalyst: How to Change Anyone's Mind.
And you will find a link to that book at Amazon in the show notes for this episode.
Thank you, Jonah. Appreciate you being here.
No problem. Thanks so much for having me.
Do you love Disney? Then you are going to love our hit podcast, Disney Countdown.
I'm Megan, the Magical Millennial.
And I'm the Dapper Danielle.
On every episode of our fun and family-friendly show,
we count down our top 10 lists of all things Disney.
There is nothing we don't cover.
We are famous for rabbit holes, Disney-themed games,
and fun facts you didn't know you needed,
but you definitely need in your life.
So if you're looking for a healthy dose of Disney magic,
check out Disney Countdown wherever you get your podcasts.
Hey, everyone. Join me, Megan Rinks.
And me, Melissa Demonts, for Don't Blame Me, But Am I Wrong?
Each week, we deliver four fun-filled shows.
In Don't Blame Me, we tackle our listeners' dilemmas with hilariously honest advice.
Then we have But Am I Wrong?, which is for the listeners that didn't take our advice.
Plus, we share our hot takes on current events.
Then tune in to See You Next Tuesday for our listener poll results from But Am I Wrong?
And finally, wrap up your week with Fisting Friday, where we catch up and talk all things pop culture.
Listen to Don't Blame Me, But Am I Wrong on Apple Podcasts, Spotify, or wherever you get your podcasts.
New episodes every Monday, Tuesday, Thursday, and Friday.
For a long time now, people have thought about and been concerned about the idea of machines, robots becoming smart, maybe too smart. Of course, machines have slowly been creeping into the workplace and taking over jobs for some time now,
particularly task-oriented jobs that don't require a lot of thinking and judgment.
But concern continues to grow about smart machines being able to get smarter and smarter,
and maybe even becoming self-aware.
So is this reality, or is it still science fiction or what? Well,
here to discuss that is John Markoff. He is a journalist who has researched this topic thoroughly.
John is a science writer for the New York Times and author of the book Machines of Loving Grace.
So John, I understand that some people are concerned about artificial intelligence,
but is this like a future fear, you know, that one day this might be a problem,
or is this something that people are really worried about right now?
I think we are in a period where there's actual anxiety and concern beyond the wonks and the designers.
I think this happens periodically in American society.
It happened in the early 1950s. It happened in the 1960s. And, you know, machines are beginning to move into the workplace,
and so people are thinking about it. But machines have been in the workplace for a long time,
kind of slowly creeping in and doing things that humans used to do. And, you know, there haven't
been any big catastrophic events as a result, or have there?
I agree with you.
Machines have been taking jobs from humans going back into the 17th century, the Luddites,
and this is a perpetual state of affairs.
And I think, you know, why now?
For the first time, machines are starting to displace workers not in manual work but in intellectual work. So it's not just white-collar clerks; for the first
time, machines are starting to do jobs that have been done by $75-an-hour
paralegals or $400-an-hour attorneys or physicians. And that creates sort
of a new context, and that's why we're thinking about it again.
And if you look not too far off into the future, I mean, what's the concern? What's the danger
other than losing jobs? Is that the concern? Well, there are a range of concerns about
interacting with machines from, you know,
from machines that arrange marriages to machines that replace us in the workplace
to machines that make decisions in warfare.
So it's across the entire range of human activities.
And the difference now is that AI technologies,
which have failed to sort of meet their promise since AI research began in the early 1950s,
are now making great strides.
And they're clearly going to be interacting with humans in a wide variety of ranges of things,
and people are thinking deeply about it.
Can you give me an example or two of some of these great strides
of what machines are doing now that they couldn't do before that might surprise people?
Yeah, and I think people are already familiar with them,
but the rate of advance has been striking.
Machines are listening to us.
If you think about Siri or Cortana or Google Now,
for the first time you can speak naturally,
and a machine will do a pretty good job of understanding what you're saying.
And that's an entirely new reality just over the last half decade.
For the first time, machines are seeing things and understanding what they're seeing,
and that's really having a big impact in the workplace.
A machine can recognize an object, and machines are just beginning to understand scenes,
which is something we do without thinking as humans.
For example, you can train a machine to look at a picture and say,
oh, that's a woman, and she's handing a pizza to that person.
That's the holy grail in machine vision, scene understanding.
That's
happening for the first time. Even more interesting to me is that, you know, and we've had robots in
the past. We've had robots forever, but they've been in cages and they do very repetitive tasks
very quickly and very precisely. For the first time, robots are beginning to come out of their
cages and move around in the environment, which means they need a whole set of skills that are human skills.
And to be honest, they're not doing a great job yet,
but you can see the first steps out into the world.
So because a machine can look at a picture and see a woman and see a scene, so what?
What's next with knowing that? What's next?
The so what is just all over the place. The so what is, for example, in Amazon's warehouses,
where when machines can recognize boxes or packages, they can pick them up and they can
place them in places. And so you don't need warehouse workers anymore. The so what is the cameras that are all around us already will begin to have intelligence,
and so you will no longer need human beings to watch the cameras.
The cameras will watch us intelligently, which is very Orwellian.
In any number of places in the workplace, when you put intelligence into
a visual system, you dramatically increase what the machine can do.
The intelligence part of this, is it just ones and zeros kind of intelligence, or is
it deeper than that?
Well, that's a very rich debate, and I come down on your side. It's just ones and zeros right now. We do not have self-aware machines. And I would argue we don't know how to get there. However, having said that, so that's the question that Elon Musk and others have been raising. You know, are we summoning the demon? Will these machines become self-aware? And that's been a consistent refrain going back long before computing even.
And my argument is that we still don't completely know what human self-awareness, human thought is.
And so until we have some idea about what it is, it's going to be very difficult for us to recreate it. That said, you can increasingly simulate the
kinds of things that humans do, and we as a species have the propensity to anthropomorphize
anything we interact with. And so we will behave as if these machines are intelligent,
and that's the more interesting question. Well, the idea of a machine being self-aware,
that's kind of hard to wrap your head around,
because how could a machine have its own intelligence?
But I guess that's what the concern was.
I remember Stephen Hawking used to talk about this,
that if machines become self-aware,
you know, they could become evil and take over the world. Is that the concern?
Yep. Yep, that is. And, you know, Marvin Minsky, who's a well-known AI researcher,
was fond of saying, you know, if we're lucky, they'll treat us as pets.
Well, that might not be so bad.
No, pets seem to have a great life. Yeah, what's so wrong with that? My dog's got a pretty good life, so I'm thinking, okay, well, that may not be bad.
Yeah, but, you know, I have a friend in Silicon Valley who likes to say, never mistake a clear view for
a short distance. And I think that's the kind of situation we're in. You know, Silicon Valley likes
to say that, you know, the future is going to arrive tomorrow. And some of these things are
going to take a long time, and they probably won't happen in your or my lifetime.
So, John, this idea of a self-aware machine, I mean, is that total fiction right now?
Yes. Yes. I think that we don't know what self-awareness is. You could
increasingly create machines that give the illusion of intelligence, but that's not the same
thing as intelligent machines. An example, I covered the first Turing test in 1991. Turing
was the mathematician who sort of came up with a way to determine whether you had a machine that had human-level intelligence.
And it involved basically typing a series of questions to a machine or to a human on the other side of a keyboard.
You didn't know which was which.
And if you couldn't tell the difference after a satisfactory period of time, you could say the machine was intelligent.
That was his idea.
So in 1991, the very first year they had the contest, I reported on it,
and there were two groups of judges.
One group were computer scientists, and the other group were people they grabbed off the street.
And from my observation, even in 1991 when the programs were not very good,
for the sort of non-technical observer, we'd already passed the Turing test.
It wasn't very hard to fool the humans, and I think that's the significant point.
It says nothing about the machine, it says a lot about us.
So this is really interesting, but as you have said, we don't even know what human self-awareness is,
so it would be hard for it to be engineered into a machine.
This still seems to me like a lot of science fiction
that maybe someday might be a problem,
but why is this important to the average person?
Other than an academic discussion and a concern about the future,
which, you know, valid though they
may be, why are we talking about this? So increasingly, we're going to be surrounded
by these systems that are, quote, intelligent, unquote. You'll be interacting with them. I mean,
you know, where I live and work in San Francisco, if you're downtown, half the population, I swear
to you, is walking around looking down at
their palm at their smartphone. I mean, they're just everywhere. And so, you know, that can't be
the final stage of human evolution. The technology is going to evolve so that we have these things
that are, as their designers, the computer scientists, call them, conversational interfaces. We'll get
away from the personal computer and ordinary devices, whether it's our television or our lampshade or what have you,
will talk to us, and they'll listen to us.
And we'll think of that as Star Trek normal.
I believe that that's going to happen.
I mean, it is happening.
You know how many people use Siri and Cortana?
It's just an efficient way to get things done.
But it raises that question of what happens when we begin to treat inanimate objects as having human-like qualities.
And I don't think we have good answers to that yet, and I worry a little bit about it.
Well, that is interesting when you think about it, that people are walking around, in some cases,
putting their life in peril to look at their smartphones while they're crossing a busy street in San Francisco or New York City.
And you're right, that can't be it.
I mean, in fact, you know, it's very easy to assume that our grandchildren will look back and go,
you did what? You what?
Yeah, it's like a crank start.
Right, exactly.
But what is the norm then that we can't envision now that would...
What replaces it?
So here's my bet, and I hate doing this because one of the best things about being a reporter
is you don't have to be a so-called visionary, because the visionaries are always wrong.
But I used to be very skeptical about this technology called augmented reality.
Imagine having a pair of glasses that sort of allowed you to overlay computing information on top of the world around you.
And then I went and saw the technology being developed by this Florida company called Magic Leap.
And I'd read the science fiction books like Vernor Vinge's Rainbows End,
which is just really a cool sort of exploration of what happens when you can use this technology.
And I was very skeptical that we'd ever be able to do it.
And seeing Magic Leap and since then seeing some of the stuff being done by Microsoft and others has really changed my mind.
I actually think that at some point that we'll wear glasses that will overlay intelligent information around us everywhere.
I don't think that is as crazy as it seems.
And the question then becomes when.
And I think it's a little bit like the invention of the mouse.
Doug Engelbart invented the computer mouse in 1964,
and it wasn't used by everybody, a consumer product, until 1989.
And I think, sadly for you and I, that it's going to take longer for these augmented reality technologies to show up
and be useful and be affordable than we would want.
Well, wasn't that sort of the idea of the Google Glass that has seemed to have disappeared?
Yeah, we have this term in San Francisco; we call the wearers glassholes.
It engendered an incredibly interesting conversation about the use of technology
and sort of putting it between two humans.
People just hated them in San Francisco.
And I think that's going to be an interesting process.
My sense is, I mean,
the Google Glass was not even really augmented reality.
You could call it annotated reality.
It was something up and to the left.
And so to see what the machine was saying to you, you'd have to look in a very sort of antisocial manner.
And at some point, I think these big things become transparent.
Maybe it starts that you use it first in the office where you don't interact with other people or something like that.
Or maybe, you know, it's used in elder care first. Maybe it's, you know,
used for your grandparents first, and it gives them a way to get out in the world when they
can't move around. I mean, hard to figure out, but it just seems to me that everyday devices,
as you put computing into them, become magic. And it's hard for me to see that it won't happen with glasses, too.
But like everything, there will be pushback and resistance,
and there already has been, because, like you say, the glassholes.
I mean, I remember seeing somebody, it was hysterical,
talking to some of these people and saying, you know,
they said, yeah, but I have all my contacts, right?
Yeah, but you have them already in your phone.
Yeah, but I've also got them up here.
Well, yeah, but they're in your phone.
If you need them, they're in your phone.
You don't need them up there.
And there's that resistance of, like, you know, why? What's wrong with the old way?
That's, I guess, maybe what drives a lot of this, too:
I don't want a pair of glasses between me and...
I would have been in your camp, but then I went and saw the Magic Leap demonstration,
which at that point was just on a bench, and it was like going to the optometrist's office.
And I looked through it, and in the distance, about three feet away from me,
was this four-armed creature that was walking in circles.
And I have to tell you, I mean, I look at a lot of this technology, and you see HDTV.
This was a better three-dimensional image, clearer than I can see anywhere on any HDTV.
It was a strikingly clear image, and it was just wandering around.
And then something weird happened.
My host ran his thumb through the image, and his thumb went transparent, not the image.
So something was wrong.
It was fooling my brain in some really compelling way that I didn't get,
and that they can't completely explain yet.
And so think about the ability.
I mean, these people want to get rid of the entire Asian display manufacturing industry.
In their view, you'll wear the glasses,
there will be no computer displays. You'll simply take your fingers and draw a square in the air,
and there will be a high-resolution display hanging in space. And if you want another one,
you'll just do the same thing over again. And, you know, I was kind of seduced by that idea,
you know, if they could make it work. But once again, it's that 1964-to-1989
kind of time frame. But we've seen that in TV shows and movies where they, you know, draw
those images. And no doubt they look very cool. And wouldn't it be cool to be able to do that?
But, or would it? I don't know. I don't know. Who knows? Yeah. Well, you don't have to worry just
yet. No, I guess not. But that's interesting because you're right.
It would completely put out of business the whole display industry,
which I guess is pretty big.
Yeah, and where is it written that it's a natural thing to sit at a desk in front of a monitor?
I mean, why is that a natural state of affairs?
Look at the horrible things it's doing to people physically.
So if we could get past that, I think it'd probably be a good thing,
at least from the point of view of ergonomics.
Right, but same thing with people walking down the street in San Francisco
looking at their phones.
I mean, look what that's doing to people.
So, you know, it is fascinating, and there's really no way to tell,
but it's fun to listen to somebody who's looked at this
and you've got some interesting ideas that, you know, probably are as good as anybody else's guess,
maybe better, as to what could come from this.
I can guarantee you we'll be surprised.
Right. As you say, the visionaries are always wrong anyway, so...
Exactly.
Well, thanks, John. I appreciate your time.
John Markoff is a science writer for the New York Times and author of the book Machines of Loving Grace.
You have heard a lot lately about the importance of not touching your face,
because touching your face can be part of the process that spreads germs and gets you sick.
The problem is that just being aware that you shouldn't touch your face can actually
cause you to touch your face even more.
Estimates are that you probably touch your face maybe 16 times an hour or more.
And when I tell you that, and you become more conscious of it, you may do it more, much
in the same way that if I tell you not to scratch an itch,
it makes you more aware of where you might be itching and makes you want to scratch it even more.
There are several theories as to why we touch our face.
It could be a form of self-grooming, the way we see other animals do,
or it could serve some other evolutionary purpose.
But it does seem clear
that just telling yourself not to do it is not very effective. What can work is to keep your
hands busy, holding something like a stress ball maybe, or play with a rubber band, or ask someone
to tell you every time you do touch your face to increase your awareness. But the bottom line is it's very hard to stop touching your face.
So it's very important to wash your hands and follow all the other advice
that cuts down on the spread of germs and not count on
not touching your face. And that is something you should know.
If you like this podcast, and you must if you listen
this long because here we are at the end of the episode,
feel free to leave a review wherever you listen.
On Apple Podcasts, TuneIn, Stitcher, wherever you listen,
there's a way to leave a rating and review, and we read them all.
I'm Mike Carruthers. Thanks for listening today to Something You Should Know.
Welcome to the small town of Chinook, where faith runs deep and secrets run deeper.
In this new thriller, religion and crime collide when a gruesome murder rocks the isolated Montana community.
Everyone is quick to point their fingers at a drug-addicted teenager, but local deputy Ruth Vogel isn't convinced.
She suspects connections to a powerful religious group.
Enter federal agent V.B. Loro,
who has been investigating a local church
for possible criminal activity.
The pair form an unlikely partnership to catch the killer,
unearthing secrets that leave Ruth torn
between her duty to the law,
her religious convictions,
and her very own family.
But something more sinister than murder is afoot,
and someone is watching Ruth.
Chinook.
Starring Kelly Marie Tran and Sanaa Lathan.
Listen to Chinook wherever you get your podcasts.
Hi, this is Rob Benedict.
And I am Richard Speight Jr.
We were both on a little show you might know called Supernatural.
It had a pretty good run. 15 seasons, 327 episodes.
And though we have seen, of course, every episode many times,
we figured, hey, now that we're wrapped, let's watch it all again.
And we can't do that alone.
So we're inviting the cast and crew that made the show along for the ride.
We've got writers, producers, composers, directors,
and we'll of course have
some actors on as well, including some certain guys that played some certain pretty iconic
brothers. It was kind of a little bit of a left field choice in the best way possible. The note
from Kripke was, he's great, we love him, but we're looking for like a really intelligent
Duchovny type. With 15 seasons to explore, it's going to be the road trip of several lifetimes, so
please join us and subscribe to Supernatural then and now.