Something You Should Know - SYSK Choice: The Best Way to Change Minds & The Relationship of Humans and Technology

Episode Date: March 5, 2022

Just how healthy is peanut butter? This episode begins with a look at some amazing and little-known health facts about eating peanut butter – just as long as it is the right kind of peanut butter. https://www.medicalnewstoday.com/articles/323781#health-benefits Changing someone else's mind is usually difficult if not impossible. Still, people do change their minds, so clearly it can be done. Jonah Berger joins me to explain how you can get someone to change their mind and agree with you. Jonah is a marketing professor at the Wharton School at the University of Pennsylvania and author of the book The Catalyst: How to Change Anyone's Mind (https://amzn.to/33hpVJE). Listen and you will hear him explain the fascinating research on how to get people to agree with you. We’ve been told by experts that one of the best ways to NOT get sick is to stop touching your face. Well, good luck with that! Trying not to touch your face is really hard. Listen as I explain why. https://www.wired.com/story/cant-stop-touching-your-face-science-has-some-theories-why/ Could machines really take over the world someday, or is that science fiction? There is concern among scientists that we could create machines that might actually become self-aware and end up being smarter than we are. Joining me to discuss whether or not this could happen is John Markoff, a science writer for the New York Times and author of the book Machines of Loving Grace. (http://amzn.to/2j55XgN) PLEASE SUPPORT OUR SPONSORS! We really like The Jordan Harbinger Show! Check out https://jordanharbinger.com/start OR search for it on Apple Podcasts, Spotify or wherever you listen! Go to https://Indeed.com/Something to claim your $75 credit before March 31st! Masterworks gives everyone the opportunity to invest in blue-chip artwork. To receive exclusive access to their latest offerings go to https://Masterworks.art/SYSK LEVEL UP will give you the confidence, know-how, and savvy to grow your business and thrive. LEVEL UP, by Stacey Abrams and Lara Hodgson, is now available everywhere audiobooks are sold. Discover matches all the cash back you’ve earned at the end of your first year! Learn more at https://discover.com/match M1 Finance is a sleek, fully integrated financial platform that lets you manage your cash flow with a few taps and it's free to start. Head to https://m1finance.com/something to get started! To TurboTax Live Experts an interesting life can mean an even greater refund! Visit https://TurboTax.com to learn more. To see the all new Lexus NX and to discover everything it was designed to do for you, visit https://Lexus.com/NX Use SheetzGo on the Sheetz app! Just open the app, scan your snacks, tap your payment method and go! Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 As a listener to Something You Should Know, I can only assume that you are someone who likes to learn about new and interesting things and bring more knowledge to work for you in your everyday life. I mean, that's kind of what Something You Should Know is all about. And so I want to invite you to listen to another podcast called TED Talks Daily. Now, you know about TED Talks, right? Many of the guests on Something You Should Know have done TED Talks. Well, you see, TED Talks Daily is a podcast that brings you a new TED Talk every weekday in less than 15 minutes. Join host Elise Hu. She goes beyond the headlines so you can hear about the big ideas shaping our future.
Starting point is 00:00:42 Learn about things like sustainable fashion, embracing your entrepreneurial spirit, the future of robotics, and so much more. Like I said, if you like this podcast, Something You Should Know, I'm pretty sure you're going to like TED Talks Daily. And you get TED Talks Daily wherever you get your podcasts. Today on Something You Should Know, is peanut butter
Starting point is 00:01:07 junk food or health food? I'll explain exactly why it is so good for you. Then, proven strategies you can use to change people's minds. For example, when you give people one option, whether you're in a meeting or talking to your spouse and they ask what you want to do this weekend and you say, let's go to a movie. When you give people one option, whether you're in a meeting or talking to your spouse, and they ask what you want to do this weekend, and you say, let's go to a movie. When you give people one option, they think about all the reasons they don't like that option. And so what smart people do, they don't give people just one option. They give people at least two. Also, you know not to touch your face to stop the spread of germs.
Starting point is 00:01:38 But knowing it doesn't do you any good. And like it or not, machines are getting smarter and becoming a bigger part of our lives. And ordinary devices, whether it's our television or our lampshade or what have you, will talk to us and they'll listen to us. And we'll think of that, you know, as Star Trek normal. I believe that that's going to happen. I mean, it is happening. All this today on Something You Should Know.
Starting point is 00:02:04 Since I host a podcast, it's pretty common for me to be asked to recommend a podcast. And I tell people, if you like Something You Should Know, you're going to like The Jordan Harbinger Show. Every episode is a conversation with a fascinating guest. Of course, a lot of podcasts are conversations with guests, but Jordan does it better than most. Recently, he had a fascinating conversation with a British woman who was recruited and radicalized by ISIS and went to prison for three years. She now works to raise awareness on this issue. It's a great conversation. spoke with Dr. Sarah Hill about how taking birth control not only prevents pregnancy, it can influence a woman's partner preferences, career choices, and overall behavior due to the hormonal changes it causes. Apple named The Jordan Harbinger Show one of the best podcasts a few
Starting point is 00:02:57 years back, and in a nutshell, the show is aimed at making you a better, more informed, critical thinker. Check out The Jordan Harbinger Show. There's so much for you in this podcast. The Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you get your podcasts. Something you should know. Fascinating intel. The world's top experts. And practical advice you can use in your life.
Starting point is 00:03:24 Today, Something You Should Know with Mike Carruthers. Hi, welcome. Here we are together again for another episode of Something You Should Know, and I sincerely appreciate you taking the time today. First up today, I want to talk about peanut butter. Yeah, peanut butter turns out to be a very healthy food. Eating it will do wonderful things for you. For example, it lowers your risk of diabetes. One study found that consuming one ounce of peanut butter per day can lower the risk of diabetes by almost 30%. It makes you feel full.
Starting point is 00:04:01 Peanut butter's monounsaturated fat and protein can prevent you from overeating and help you lose weight. It can lower your stress level. Peanut butter contains a compound that can regulate stress hormones. If you eat peanut butter while you're pregnant, you may help prevent nut allergies in your child. You'll also burn off fat. There is something in peanut butter that reduces your body's ability to store fat. Just remember to buy peanut butter that has peanuts as its only ingredient with maybe some salt added, but avoid peanut butters that are light or low-fat peanut
Starting point is 00:04:39 butters. They almost always have added sugar. And that is something you should know. How do you change someone's mind? Well, usually you don't. I mean, when was the last time you had a political debate and changed the other person's mind? Or they changed yours? Or when did you last convince someone to do something they really didn't want to do?
Starting point is 00:05:06 Changing people's minds is hard, often seemingly impossible, yet it does happen sometimes. Sometimes a company can convince you to try their product instead of the one you've always used. So it does happen. And when it does happen, how does it happen? How did that company get you to try that new thing? Jonah Berger is a marketing professor at the Wharton School at the University of Pennsylvania, and he's really dug into the research on this for his latest book called The Catalyst, How to Change Anyone's Mind. Hi, Jonah. Welcome. Thanks so much for having me. So why, in a nutshell, why is it so hard to change someone's mind?
Starting point is 00:05:52 You know, I think we have this notion, if we think about a chair, for example, if we push a chair, chair goes in a certain direction. And we think people are the same way. If I just give them more facts, more figures, more reasons, if I just tell them more about why I think they should do what I want them to do, they'll come around. But unfortunately, people aren't like chairs. When we push chairs, chairs go in the direction we want them to. When we push people, they often go in the exact opposite direction. They don't just go along. They push back.
Starting point is 00:06:19 And so rather than saying, well, how could I get someone to change, we need to ask a slightly different question. Why hasn't that person changed already? What are those barriers? Are those things preventing them from changing? And how can I mitigate them making change much more likely as a result? And probably I would imagine there are times when you're not, that's no matter what, that's not going to work. That, that people hold very fundamental beliefs about a lot of things in life that no one's going to change. Yes? That's an interesting question. And that was sort of a journey for me in writing this book. You know, I started with salespeople changing clients' minds, leaders transforming organizations, and people gave me some feedback like you did. They said, yeah, of course, you know, that'll work. But, you know, Democrats don't become Republicans, right? Or, you know, it's really impossible to change people's minds about prejudice, right? Or, you know, it's really
Starting point is 00:07:14 impossible to get someone who used to be a member of the KKK to, you know, renounce the KKK, right? And so I ended up doing a lot of really unusual interviews for me. I talked to hostage negotiators who figured out how to get people to come out with their hands up. I talked to substance abuse counselors who get people to seek help. I talked to a rabbi slash cantor who got someone to announce the KKK. I talked to people who switched political parties. And you're right. It's not easy. Not all change is easy.
Starting point is 00:07:40 And not all change is quick. But I think often any change is possible if we give it enough time and we understand enough about why that person hasn't changed. Because so often we're focused on ourselves, right? Take politics, for example. We want people to switch to our side, but we don't take enough time to understand, well, why haven't they done that? And if we take to understand why, often we can figure out a way to get them to at least come some, if not most of the way. Well, I think of times that I've changed my mind. I've changed my position about various political things.
Starting point is 00:08:12 And it's not because somebody changed my mind. No one was lobbying. No one was deliberately trying to change it. I changed it because I took the time to change it, not because somebody prodded or pushed me to change it. I changed it because I took the time to change it, not because somebody prodded or pushed me to change it. Yeah. You know, I was talking to someone who I think said what you said, very similar to the way you said it. They said, you know, it's not about selling. It's about getting people to buy in. And I think that's exactly right. You know, one thing I talk a lot about in the book is the idea of reactance. You know, people like to feel like they're in control or they're in charge.
Starting point is 00:08:46 And when we try to push them or prod them, we take that ability away. Suddenly now they're not in control, we're in control. And of course, no one wants to do what we want them to do, so they often push back. And so the question then is, how can we give them back some of that sense of control? How can we allow for autonomy and really allow them to persuade themselves? So tell the story about the Tide detergent pods and what happened and how it relates to this topic. So people have this sort of anti-persuasion rate or almost like this spidey sense. And I think there's no clearer example of it than with Tide pods. And so some of your listeners may know of Tide pods.. They may use Tide Pods.
Starting point is 00:09:25 They're very, very popular. And what you stick in your laundry to do laundry makes it faster and easier. You don't have to measure all these things. But a few years ago, there was a problem. It was a simple problem, though an unusual one, which is people were eating them. And if you're sitting there going,
Starting point is 00:09:39 people are eating Tide Pods, what are you talking about? Well, they were. It was called the Tide Pod Challenge. Young people were essentially challenging each other to eat Tide Pods. What are you talking about? Well, they were. It was called the Tide Pod Challenge. Young people were essentially challenging each other to eat Tide Pods. So there was a funny video and a funny article. And then suddenly, lo and behold, kids online were challenging one another to eat these Tide Pods. And so imagine you're Procter & Gamble in this situation, right? You're sitting there going, well, who would eat chemicals to begin with? We shouldn't need to tell anyone anything.
Starting point is 00:10:06 But just to be safe, they issued an announcement saying, don't eat Tide Pods. And in case that wasn't enough, they hired a couple of celebrities to post some videos on social media saying, don't eat Tide Pods. They thought that would be the end of it. And that's exactly when all hell broke loose. So searches for Tide Pods jumped up by over 400%. Visits to Poison Control went up as well. Said very simply, a warning became a recommendation. Telling people not to do something actually made them more likely to do it. And this is true in a variety of different domains. You know, in juries, telling people certain testimonies inadmissible often makes them pay more attention
Starting point is 00:10:43 to it. Telling kids not to do something makes them more likely to do it. Same in the political sphere. But the opposite is also true. Asking people to do something often has the same backfire effect. Because again, when you tell people to do something, now they're not in control. Now they're not the one making the choice. You are. And if they feel like you're in control, they don't want that to happen.
Starting point is 00:11:03 And they push back. This anti-persuasion radar is super powerful. We ignore sales calls. We avoid emails that are trying to push us to do one thing or another. But the most damaging is counter-arguing. We may be presenting in a meeting. Everyone's listening. They're shaking their heads yes. But really what they're doing is sitting there thinking about all the reasons why what we're suggesting is wrong. They might seem like they're listening, but they're not, right? And so that's the most damaging part, right? They've got that anti-persuasion radar up, and if we just push them, it's not going to work. And yet, some people are able to persuade. So what is it they do differently that gets by that radar or goes under it or over it or whatever and gets people to do what they want.
Starting point is 00:11:47 Yeah. So I talk about a few tips and one that I love is called providing a menu. So imagine you're in that meeting, right? You're presenting something to an audience and everyone's shaking their head yes. And they're sitting there thinking about all the reasons why what you're suggesting is a bad idea and why it costs so much and all those different things. What you need to do is shift their role. When you give people one option, whether you're in a meeting or talking to your spouse and they ask what you want to do this weekend and you say, let's go to a movie. When you give people one option, they think about all the reasons they don't like that option. Oh, we went to a movie last week. Oh, it's such a nice weekend. Let's do something else. Oh,
Starting point is 00:12:18 your plan is too expensive. And so what smart people do, whether they're presenting to an audience or trying to convince a spouse, they don't give people just one option. They give people at least two. They give them multiple options. They provide, in a sense, a menu. And what that does is it subtly shifts the role of the listener. Because now, rather than sitting there thinking about all the reasons why they don't like what you're suggesting, instead they're making comparisons. Which of these two options do I like better?
Starting point is 00:12:43 Which one of these is a better fit to me, which is going to make them much more likely to go along at the end of the day. You're not giving them 50 options. You're not giving them 75, but you're giving them a small set of guided choices, a small set of options that allows them to feel like they have some volitional choice, but you're guiding that journey to encourage them to go in the direction that you want. But I've also heard, especially in the world of advertising and marketing, that if you give people lots of options, or if you give people more than one or two options, they're more likely to do nothing. And so that's, again, why I'd say it's not an infinite number of choices, right?
Starting point is 00:13:19 We're not giving people 75 options. We give them two, three, maybe even four, a limited choice set. You're certainly right. There's work on too much choice, saying if I give you 25 different options, you're going to sit there. Your spouse is going to go, I don't want to do any of them. No thanks. It's overwhelming. Indeed, there's lots of research saying too many options is bad.
Starting point is 00:13:37 But some options, at least some aspect of choice, is a great way to make people feel like they're in control. Another way I talk about is asking rather than telling, right? Rather than telling people what you want them to do, asking them some questions. Asking them questions that, again, guide that journey. I was talking to a leader of an organization that wanted people to work harder. He wanted them to stay after work. It was a startup. He wanted to put more hours in. Now, of course, when the boss tells you to put more hours in, you say, no thanks, even if that was something you might have done in the first place. So instead what he did is he called a meeting and he said, hey, what type of organization do we want
Starting point is 00:14:12 to be? And you know what people answer when you say, do you want to be a good organization or a great organization? No one says, oh, we want to be a good organization. I'll say, we want to be a great organization. Then he said, okay, what do we need to do to get there? People started throwing out different solutions, different ideas. Some of them were, oh, we need to do to get there? People started throwing out different solutions, different ideas. Some of them were, oh, we need to work longer hours. We need to do different things. And then later when he raises those solutions back to people, well, now it's much harder for them not to go along because they came up with the idea in the first place, right? Allowing for autonomy, as you nicely said, sort of getting them to persuade themselves.
Starting point is 00:14:41 If they're participating, they're committing to that conclusion. If they said, oh, we need to put in longer hours, well, then they're much less likely later when you say, okay, well, you guys said this, so we need to do it. They're much more likely to go along and less likely to push back. We're talking about how to change somebody's mind. And my guest is Jonah Berger. He is a marketing professor at the Wharton School at the University of Pennsylvania and author of the book, The Catalyst, How to Change Anyone's Mind. Hi, this is Rob Benedict. And I am Richard Spate. We were both on a little show you might know called Supernatural.
Starting point is 00:15:18 It had a pretty good run, 15 seasons, 327 episodes. And though we have seen, of course, every episode many times, we figured, hey, now that we're wrapped, let's watch it all again. And we can't do that alone. So we're inviting the cast and crew that made the show along for the ride. We've got writers, producers, composers, directors, and we'll, of course, have some actors on as well, including some certain guys that played some certain pretty iconic
Starting point is 00:15:45 brothers. It was kind of a little bit of a left field choice in the best way possible. The note from Kripke was, he's great, we love him, but we're looking for like a really intelligent Duchovny type. With 15 seasons to explore, it's going to be the road trip of several lifetimes. So please join us and subscribe to Supernatural then and now. perspectives, and one I've started listening to called Intelligence Squared. It's the podcast where great minds meet. Listen in for some great talks on science, tech, politics, creativity, wellness, and a lot more. A couple of recent examples, Mustafa Suleiman, the CEO of Microsoft AI, discussing the future of technology. That's pretty cool. And writer, podcaster, and filmmaker John Ronson,
Starting point is 00:16:52 discussing the rise of conspiracies and culture wars. Intelligence Squared is the kind of podcast that gets you thinking a little more openly about the important conversations going on today. Being curious, you're probably just the type of person Intelligence Squared is meant for. Check out Intelligence Squared wherever you get your podcasts. So Jonah, it would seem that a lot of effort in getting people to change their mind would seem like a total waste of time, like political advertising. Is an ad on TV or on the radio or in a podcast really going to change somebody's mind to vote differently? It seems like a long shot. It seems like somebody would have to be very vulnerable or very on the fence to go, oh, oh, oh, well, I'll vote for them.
Starting point is 00:17:41 Yeah, I mean, I think part of what you're saying, so in the book, I talk about five barriers. We talked a little about reactance. That's this idea that when you push people, they push back. Then I talk from where people are at the moment, they say, no way, I'm not going to go along. And indeed, you're right. In political ads, often when people try to get the other side to change their mind, they get Democrats to become Republicans, Republicans to become Democrats, it often isn't very effective. But in primaries, getting people to switch among candidates often actually works. A good way to think about decisions, politics in particular, decisions in general, is almost like a football field. If you think about a football field, two end zones, you can think about politics with Democrats on one end, Republicans on the other. If you try to get one side to switch to the complete opposite, it's too far away. Psychologists call that area the region of rejection.
Starting point is 00:18:42 Sure, there's a region around where you are at the moment where you're willing to consider. You're not only willing to consider your own viewpoint, but maybe the viewpoint's near yours, five or ten yards on the field in either direction from where you stand. But a completely other side of the field, 60 yards away, probably not. But what really good change agents do is rather than asking for so much, instead what they do is they ask for less. In some sense, they shrink that change down into a more manageable amount. So I was talking to a doctor, had a great, great version of this. So she was trying to get a trucker to be healthier. This was an obese guy who was drinking three liters of Mountain Dew a day, way overweight. And the
Starting point is 00:19:19 tendency in that situation, like in politics, is to ask for big change right away. Don't drink any soda. Great idea in theory, much harder for people to actually operationalize, much harder for people to actually do. So what she did instead, she didn't tell the person to quit soda completely. She said, hey, just go from three liters to two liters a day. Now the guy grumbled. He didn't want to do it, but eventually he was able to do it. And then when he came back, he said, okay, now go from two to one and one to zero. And eventually he's drinking no more Mountain Dew. It took a while, took a few months to do, but the guy's lost over 25, 30 pounds. And he's been much more likely to go along with that change because she didn't just ask for less. She asked for less and then asked for more. Essentially what she did
Starting point is 00:19:59 is she took a big change and broke it down into smaller chunks. And so we can think about the same thing in politics. I interviewed some people for the book that switched from Democrats to Republicans or vice versa. It wasn't like overnight they just woke up the next morning, they completely changed their perspective. They moved five or 10 yards at a time, but eventually over time went to the completely different other side of the field. And so I think a really good analogy is almost thinking about stepping stones, right? If you want someone to forward a really big river or stream, they might say, no, it's too far away. I might get wet. The water's too deep. I might not make it. They're not going to go for that big change. But if you instead, you throw a couple of stepping
Starting point is 00:20:37 stones along the way. So they take one step and then they take another step and then they take another. Now it's going to feel a lot safer and they're going to be much more likely to forward that river. And so in any change, whether it's politics, whether it's a doctor trying to get someone to drink Mountain Dew, or just get a client to go along, how can we break that big change down into smaller, more manageable chunks, make it more actionable, and make it easier for people to at least start moving in the right direction? Talk about uncertainty and how that works into this. We often forget how risky change can feel. Change, anything new has some risk associated with it. You might not love the old thing that you're doing, but at least it feels safe. New things often feel risky and they're
Starting point is 00:21:18 often uncertain. You don't know how good a new product or service is going to be. You don't know how a new initiative is going to perform. And if you think about it, there's always a cost to change. You may be familiar with the term switching costs, but sometimes it's a monetary cost. You buy a new product, it costs some money. Sometimes it's a time or an effort cost. You install a new software, you start a new program. It takes some time or effort to do that. And the problem is that the costs are often upfront and the benefits are often later. Sure, a new program might be beneficial for the firm, but it's going to take a while for us to figure out whether it's actually going to be better.
Starting point is 00:21:53 We have to pay all those upfront costs before we get to the potential benefits. It's something I call the cost-benefit timing gap. Costs are now and they're certain. Benefits are later and they're uncertain. And so one question is, well, how can we reduce that uncertainty? How can we make people feel more comfortable about doing something new, something different from what you're doing already? And so one thing I talk about, a few ways to reduce uncertainty. One in particular is to do what I'll call lower the barrier to trial.
Starting point is 00:22:23 And a good way to think about this is to think about a company like Dropbox, for example. So right now, Dropbox is a billion dollar business, a file storage company, but they weren't always that way. Originally, they started out as a small business. They had a lot of trouble getting traction. People weren't used to storing files online. They wanted to keep them on their computer. And so how do they get people to adopt this new thing? Well, they could say their product is good, but of course they would say their product is good. No one says their product or service isn't good. And so one thing they dealt with was how can we get people to convince themselves? Again, how can we get people to persuade rather than us doing the work for them? And so they did something interesting. What they did is they gave away their product for free. They gave away their service for free. And you might think, how can you make money giving away something for free? But they gave away two gigabytes of storage. And what that did, very interestingly, is it allowed people to experience the offering themselves. Rather than Dropbox saying, hey, Dropbox is great, here's why it's better than what you're doing already. What this did is allows people to
Starting point is 00:23:22 experience it themselves. And if they liked it, right, if they were using it, then eventually they moved through two gigabytes of storage, they needed to upgrade to a more premium version. And so Dropbox leveraged something we know today as Framium, a business model where you lower the barrier to trial, you get people to come in, try something at a lower cost or a lower effort, and then work them up to a more expensive version. But the principle behind Framium is a lot larger. You think work them up to a more expensive version. But the principle behind freemium is a lot larger. You think about test drives of a car, same idea. There's no free version or premium version, but a test drive allows you to experience the offering without
Starting point is 00:23:54 having to pay money up front. Think about samples in a grocery store. It does the same thing. And so the key idea of uncertainty is really how can we make it easier for someone to experience the value of what we're suggesting? Not by telling them it's great, but allow them to experience it themselves so they can see if it actually is great, if it's actually going to work for them. And if they like it, they'll stick with it. And if not, they won't. But particularly if we have a good product, a good service, a good idea, the question is just how can we get people to experience it themselves, lower that barrier, and then they'll be more likely to come around. You talked about how we think that we should be able to give people the facts as we see them and that they should just, oh, okay, yeah, all right, I'll agree with you now, even though they didn't before. Giving people evidence, if that doesn't work, then what should you be giving
Starting point is 00:24:47 them instead when you're actually trying to get someone to see, you know, like a political viewpoint or something? What works? If evidence doesn't, what does? Yeah, I mean, I think part of the reason evidence doesn't always work, and it's not just evidence, by the way, it's the type of evidence we provide. So if we think back to that political context we talked about, there was a great study that was done recently by a sociologist at a Duke who was trying to sort of bridge the partisan divide. Everyone says, oh, you know, part of the issue is just filter bubbles. People are just caught up in their filter bubble. If they just talked to people on the other side, if they just knew what it was like to be a member of the other party.
Starting point is 00:25:25 They'd come around. And so he did this great study where he did exactly that. He got Republicans and Democrats to get information from the other side on Twitter. So if you're a Republican, you got information about Democratic views. If you're a Democrat, you got information about Republican views, sort of bridging, reaching across the aisle, great sort of quick public policy intervention, which would hopefully have a big effect. He analyzed the data. He hoped it would bring people closer together.
Starting point is 00:25:49 It didn't. It didn't have no effect. It actually had the exact opposite effect. Democrats who got information about Republicans became more liberal, and Republicans became even more conservative after getting information about liberals. And it goes back to that idea of distance that we talked about. Yes, it was information, but it wasn't just any information.
Starting point is 00:26:07 It was information that was really far from where they were currently. And so it's not information itself that's bad. It's about picking the right information. If we're going to give people information, that's fine, but we have to think about where they are in that field and give them information that's just a little bit removed from where they are at the moment, five or 10 yards in the right direction. So we move them a
Starting point is 00:26:28 little bit and move their zone of acceptance with them. So now when we give them a second appeal, they're more likely to move in that direction further. It's not information itself that's bad. It's that confirmation bias that we engage in when we see information that's so far from where we are that we don't want to believe it. Well, you're right, because depending on the subject matter, what you believe to be true is probably based a lot on your belief system, not just objective truth. And a lot of this, you know, goes back to this idea of the confirmation bias. I tell this story of this great paper that was done many years ago where they had both Princeton and Dartmouth students watch a football game. So it's a Princeton game versus Dartmouth.
Starting point is 00:27:10 Both sides were rough. It was a very physical game. Lots of people got injured. At the end of the game, they asked people, hey, who started the fight? And what you find is that even though they watched exactly the same game, everyone thinks the other side started the fight. And I think that's a great analogy for today's political sphere, right? Where we think, look, you know, just because you're on the other side, even though we're looking at the same quote unquote facts, we're really not seeing them the same way. And so I think part of the challenge is distance, as we talked about, but also part of the challenge is what we start with when we have these
Starting point is 00:27:42 conversations. If you start with areas where you disagree, you start with areas where you're far apart on the field, it's unlikely you're going to see eye to eye. And so it's something called switching the field, which is really starting with areas of common ground, finding areas or places where you agree, and using that common ground to then bend around to places where you disagree. Start by seeing that person as a human. Start by seeing that person as someone else who has things in common with you. And then when you get to political stuff, you might not completely agree, but at least you're not going to dehumanize them and you're more likely to have a real conversation. Well, it certainly makes sense if you're going to change somebody's mind or at least attempt to, that it's better to try to move them a little bit at first rather than try to get them to completely do a 180 on whatever it is you're talking about.
Starting point is 00:28:27 And there's so much to this. There's a lot of nuance to this. I appreciate you sharing your insight. Jonah Berger has been my guest. He's a marketing professor at the Wharton School at the University of Pennsylvania, and his book is called The Catalyst, How to Change Anyone's Mind. And you will find a link to that book at Amazon in the show notes for this episode. Thank you, Jonah.
Starting point is 00:28:50 Appreciate you being here. No problem. Thanks so much for having me. Hey, everyone. Join me, Megan Rinks. And me, Melissa Demonts, for Don't Blame Me, But Am I Wrong? Each week, we deliver four fun-filled shows. In Don't Blame Me, we tackle our listeners' dilemmas with hilariously honest advice.
Starting point is 00:29:07 Then we have But Am I Wrong, which is for the listeners that didn't take our advice. Plus, we share our hot takes on current events. Then tune in to see you next Tuesday for our Lister poll results from But Am I Wrong.
Starting point is 00:29:18 And finally, wrap up your week with Fisting Friday, where we catch up and talk all things pop culture. Listen to Don't Blame Me, But Am I Wrong on Apple Podcasts, Spotify, or wherever you get your podcasts. New episodes every Monday, Tuesday, Thursday, and Friday. Do you love Disney? Then you are going to love our hit podcast, Disney Countdown. I'm Megan, the Magical Millennial. And I'm the Dapper Danielle.
Starting point is 00:29:43 On every episode of our fun and family-friendly show, we count down our top 10 lists of all things Disney. There is nothing we don't cover. We are famous for rabbit holes, Disney-themed games, and fun facts you didn't know you needed, but you definitely need in your life. So if you're looking for a healthy dose of Disney magic, check out Disney Countdown wherever you get your podcasts.
Starting point is 00:30:14 For a long time now, people have thought about and been concerned about the idea of machines, robots becoming smart, maybe too smart. Of course, machines have slowly been creeping into the workplace and taking over jobs for some time now, particularly task-oriented jobs that don't require a lot of thinking and judgment. But concern continues to grow about smart machines being able to get smarter and smarter, and maybe even becoming self-aware. So is this reality, or is it still science fiction? Or what? Well, here to discuss that is John Markoff. He is a journalist who has researched this topic thoroughly. John is a science writer for the New York Times and author of the book Machines of Loving Grace. So, John, I understand that some
Starting point is 00:30:57 people are concerned about artificial intelligence, but is this like a future fear, you know, that one day this might be a problem? Or is this something that people are really worried about right now? I think we are in a period where there's actual anxiety and concern beyond the wonks and the designers. I think this happens periodically in American society. It happened in the early 1950s. It happened in the 1960s. And, you know, machines
Starting point is 00:31:26 are beginning to move into the workplace, and so people are thinking about it. But machines have been in the workplace for a long time, kind of slowly creeping in and doing things that humans used to do. And, you know, there hasn't been any big catastrophic events as a result, or have there? I agree with you. Machines have been taking jobs from humans going back into the, you know, 17th century, the Luddites. And this is a perpetual state of affairs. And I think, you know, why now? For the first time, machines are starting to displace workers, not in manual work, but in intellectual work. And so it's not just white-collar clerks, but for the first time, machines are starting to do the job that have been done by $75 an hour
Starting point is 00:32:12 paralegals or $400 an hour attorneys or physicians. And that creates sort of a new context, and that's why we're thinking about it again. And if you look not too far off into the future, I mean, what's the concern? What's the danger other than losing jobs? Is that the concern? Well, there are a range of concerns about interacting with machines from, you know, from machines that arrange marriages to machines that replace us in the workplace to machines that make decisions in warfare.
Starting point is 00:32:50 So it's across the entire range of human activities. And the difference now is that AI technologies, which have failed to sort of meet their promise since AI research began in the early 1950s, are now making great strides. And they're clearly going to be interacting with humans in a wide variety of ranges of things, and people are thinking deeply about it. Can you give me an example or two of some of these great strides of what machines are doing now that they couldn't do before that might surprise people.
Starting point is 00:33:28 Yeah, and I think people are already familiar with them, but the rate of advance has been striking. Machines are listening to us. If you think about Siri or Cortana or Google Now, for the first time you can speak naturally, and a machine will do a pretty good job of understanding what you're saying. And that's an entirely new reality just over the last half decade. For the first time, machines are, they're seeing things and understanding what they're seeing. And that's
Starting point is 00:33:57 really having a big impact in the workplace. A machine can recognize an object, and machines are just beginning to understand scenes, which is something we do without thinking as humans. For example, you can train a machine to look at a picture and say, oh, that's a woman, and she's handing a pizza to that person. That's the holy grail in machine vision, scene understanding. That's happening for the first time. Even more interesting to me is that we've had robots in the past. We've had robots forever, but they've been in cages and they do very repetitive tasks very quickly and very precisely. For the first time, robots are beginning to come out of their cages and move around in the environment, which means
Starting point is 00:34:41 they need a whole set of skills that are human skills. And to be honest, they're not doing a great job yet, but you can see the first steps out into the world. So because a machine can look at a picture and see a woman and see a scenery, so what? What's next with knowing that? What's next? The so what is just all over the place. The so what is, for example, in Amazon's warehouses, where when machines can recognize boxes or packages, they can pick them up and they can place them in places,
Starting point is 00:35:18 and so you don't need warehouse workers anymore. The so what is the cameras that are all around us already will begin to have intelligence. And so you will no longer need human beings to watch the cameras. The cameras will watch us intelligently, which is very Orwellian. In any number of places in the workplace, when you put intelligence into a visual system, you dramatically increase what the machine can do. The intelligence part of this, is it just ones and zeros kind of intelligence, or is it deeper than that? Well, that's a very rich debate, and I come down on your side.
Starting point is 00:36:01 It's just ones and zeros right now. We do not have self-aware machines, and I would argue we don't know how to get there. However, having said that, so that's the question that Elon Musk and others have been raising, you know, are we summoning the demon? Will these machines become self-aware? And that's been a consistent refrain going back long before computing even. And my argument is that we still don't completely know what human self-awareness, human thought is. And so until we have some idea about what it is,
Starting point is 00:36:36 it's going to be very difficult for us to recreate it. That said, you can increasingly simulate the kinds of things that humans do, and we as a species have the propensity to anthropomorphize anything we interact with. And so we will behave as if these machines are intelligent, and that's the more interesting question. Well, the idea of a machine being self-aware, that, you know, that's kind of hard to wrap your head around, because how could a machine have its own intelligence? But I guess that that's what the concern was. I remember Stephen Hawking used to talk about this, that,
Starting point is 00:37:18 you know, that if machines become self-aware, you know, they could become evil and take over the world. Is that the concern? Yep. Yep, that is. And, you know, Marvin Minsky, who's a well-known AI researcher, was fond of saying, you know, if we're lucky, they'll treat us as pets. Well, that might not be so bad. No, pets seem to have great life. What's so wrong with that? My dog's got a pretty good life, so I'm thinking, okay, well, that may not be bad.
Starting point is 00:37:51 Yeah, but, you know, I have a friend in Silicon Valley who likes to say, never mistake a clear view for a short distance. And I think that's the kind of situation we're in. You know, Silicon Valley likes to say that, you know, the future is going to arrive tomorrow. And some of these things are going to take a long time and they probably won't happen in yours or my lifetime. So, John, this idea of a self-aware machine, I mean, is that total fiction right now? Yes. Yes. I think that we don't know what self-awareness is. You could you can increasingly create machines that give the illusion of intelligence,
Starting point is 00:38:27 but that's not the same thing as intelligent machines. An example, I covered the first Turing test in 1991. Turing was the mathematician who sort of came up with a way to determine whether you had a machine that had human-level intelligence. And it involved basically typing a series of questions to a machine or to a human on the other side of a keyboard. You didn't know which was which.
Starting point is 00:38:57 And if you couldn't tell the difference after a satisfactory period of time, you could say the machine was intelligent. That was his idea. So in 1991, the very first year they had the contest, I reported on it, and there were two groups of judges. One group were computer scientists, and the other group were people they grabbed off the street. And from my observation, even in 1991, when the programs were not very good, for the sort of non-technical observer, we'd already passed the Turing test. It wasn't very hard to fool the humans.
Starting point is 00:39:28 And I think that's the significant point. It says nothing about the machine. It says a lot about us. So although this is really interesting, but as you have said, we don't even know what human self-awareness is. So it would be hard for it to be engineered into a machine. This still seems to me like a lot of science fiction that maybe someday might be a problem, but why is this important to the average person? Other than an academic discussion and a concern about the future, which, you know, valid though they may be, why are we talking about this?
Starting point is 00:40:11 So increasingly, we're going to be surrounded by these systems that are, quote, intelligent, unquote. You'll be interacting with them. I mean, you know, where I live and work in San Francisco, if you're downtown, half the population, I swear to you, is walking around looking down at their palm at their smartphone. I mean, they're just everywhere. And so, you know, that can't be the final stage of human evolution. The technology is going to evolve so that we have these things that are, their designers, the computer scientists, call them conversational interfaces. We'll get away from the personal computer and ordinary devices, whether it's our television or our lampshade or what have you,
Starting point is 00:40:45 will talk to us and they'll listen to us. And we'll think of that, you know, as Star Trek normal. I believe that that's going to happen. I mean, it is happening. You know, how many people use Siri and Cortana and it's just an efficient way to get things done. But it, you know, it raises that question of what happens when we begin to treat inanimate objects as having, you know, human-like qualities. And I don't think we have good answers to that yet, and I worry a little bit about it. Well, that is interesting when you think about it, that people are walking around, you know, in some cases putting their life in peril to look at their smartphones while they're crossing a busy street in San Francisco or New York City. And you're right, that can't be it. I mean, in fact, you know, it's very easy to assume that our grandchildren will look back and go,
Starting point is 00:41:33 you did what? You what? Like crank start. Right, exactly. But what is the norm then that we can't envision now that would... What replaces it? Yeah. So here's my bet, and I hate doing this because one of the best things about being a reporter is you don't have to be a so-called visionary, because the visionaries are always wrong.
Starting point is 00:41:55 But I used to be very skeptical about this technology called augmented reality. Imagine having a pair of glasses that sort of allowed you to overlay computing information on top of the world around you. And then I went and saw the technology being developed by this Florida company called Magic Leap. And, you know, I'd read the science fiction books like Werner Wenge's Rainbow's End, which is just really a cool sort of exploration of what happens when you can use this technology. And I was very skeptical that we'd ever be able to do it. And seeing Magic Leap and since then seeing some of the stuff being done by Microsoft and others has really changed my mind. I actually think that at some point that we'll wear glasses that will overlay, you know, intelligent information around us everywhere.
Starting point is 00:42:40 I don't think that is as crazy as it seems. And the question then becomes when. And I think it's a little bit like the invention of the mouse. Doug Engelbart invented the computer mouse in 1964, and it wasn't used by everybody, a consumer product, until 1989. And I think, sadly for you and I, that it's going to take longer for these augmented reality technologies to show up and be useful and be affordable than we would want. Well, wasn't that sort of the idea of the Google Glass that has seemed to have disappeared? Yeah, we have this term in San Francisco we call the where's glass holes. It engendered an incredibly interesting conversation about the use of technology and sort of putting it between two humans.
Starting point is 00:43:31 People just hated them in San Francisco. And I think that's going to be an interesting process. My sense is, I mean, the Google Glass was not even really augmented reality. You call it annotated reality. It was something up and to the left. And so to see what the machine was saying to you, you'd have to look in a very sort of antisocial manner. And at some point, I think these big things become transparent. Maybe it starts that you use it first in the office where you don't interact with other people or something like that.
Starting point is 00:44:01 Or maybe, you know, it's used in elder care first. Maybe it's, you know, used for your grandparents first. and it gives them a way to get out in the world when they can't move around. I mean, hard to figure out, but it just seems to me that everyday devices, as you put computing into them, become magic. And it's hard for me to see that it won't happen with glasses, too. But like everything, there will be pushback and resistance, and it has already been, because like you say, the glass holes. I mean, I remember seeing somebody, it was hysterical, talking to some of these people and saying, you know, they said, yeah, but I have all my contacts, right? I don't have them. Yeah, but you have them already in your phone. Yeah, but I've also got them up here.
Starting point is 00:44:44 Well, yeah, but they're in your phone. If you need them, they're in your phone. You don't need them up there. And that there's that resistance to, like, you know, why? What's wrong with the old way is, I guess, maybe what I guess drives a lot of this, too, is I don't want a pair of glasses between me and, you know. I would have been in your camp, but then I went and saw the Magic Leap demonstration, which at that point was just on a bench, and it was like going to the optometrist doctor's office.
Starting point is 00:45:14 And I looked through it, and in the distance, about three feet away from me, was this forearm creature that was walking in circles. And I have to tell you, I mean, I look at a lot of this technology and I, you know, you see HDTV, this was a better three-dimensional image, clearer than I can see anywhere on any HDTV. It was a strikingly clear image and it was just wandering around. And then something weird happened. My host ran his thumb through the image and his thumb went transparent, not the image. So something was wrong. It was fooling my brain in some really compelling way that I didn't get and that they can't completely explain yet. And so think about the ability. I mean, these people want to get rid of the entire
Starting point is 00:45:58 Asian display manufacturing industry. In their view, you'll wear the glasses. There will be no computer displays. You'll simply take your fingers and draw a square in the air, and there will be a high-resolution display hanging in space. And if you want another one, you'll just do the same thing over again. And, you know, I was kind of seduced by that idea, you know, if they could make it work. But once again, it's that 64 to 89 kind of time frame. But we've seen that in TV shows and movies of where they, you know, draw those images. And no doubt they look very cool, and wouldn't it be cool to be able to do that? Or would it? I don't know. I don't know. Who knows?
Starting point is 00:46:37 Yeah. Well, you don't have to worry just yet. No, I guess not. But that's interesting because you're right. It would completely put out of business the whole display industry, which I guess is a pretty big industry. Yeah, and where is it written that it's a natural thing to sit at a desk in front of a monitor? You know, I mean, why is that a natural state of affairs? Look at the horrible things it's doing to people physically.
Starting point is 00:47:03 So if we could get past that, I think it would probably be a good thing, at least from the point of view of ergonomics. Right, but same thing with people walking down the street in San Francisco looking at their phones. I mean, look what that's doing to people. So, you know, it is fascinating, and there's really no way to tell, but it's fun to listen to somebody who's looked at this, and you've got some interesting ideas that, you know,
Starting point is 00:47:24 probably are as good as anybody else's guess, maybe better, as to what could come from this. I can guarantee you we'll be surprised. Right. As you say, the visionaries are always wrong anyway, so. Exactly. Well, thanks, John. I appreciate your time. John Markoff is a science writer for The New York Times and author of the book Machines of Loving Grace. You have heard a lot lately about the importance of
Starting point is 00:47:52 not touching your face because touching your face can be part of the process that spreads germs and gets you sick. The problem is that just being aware that you shouldn't touch your face can actually cause you to touch your face even more. Estimates are that you probably touch your face maybe 16 times
Starting point is 00:48:13 an hour or more. And when I tell you that and you become more conscious of it, you may do it more, much in the same way that if I tell you not to scratch and itch, it makes you more aware of where you might be itching and makes you want to scratch it even more. There are several theories as to why we touch our face. It could be a form of self-grooming, the way we see other animals do. Or it could serve some other evolutionary purpose. But it does seem clear that just telling yourself not to do it is not very effective. What can work is to keep your hands busy, holding something like a stress ball maybe, or play with
Starting point is 00:48:52 a rubber band, or ask someone to tell you every time you do touch your face to increase your awareness. But the bottom line is, it's very hard to stop touching your face. So it's very important to wash your hands and follow all the other advice that cuts down on the spread of germs and not count on not touching your face. And that is something you should know. If you like this podcast, and you must if you listen this long because here we are at the end of the episode, you probably have people in your life that would like this podcast just as much as you do. So tell them about it and show them how to listen, and then we get a new listener. I'm Mike Carruthers. Thanks for listening today to Something You Should Know. Welcome to the small town of Chinook, where faith runs deep and secrets run deeper.
Starting point is 00:49:44 In this new thriller, religion and crime collide Welcome to the small town of Chinook, where faith runs deep and secrets run deeper. In this new thriller, religion and crime collide when a gruesome murder rocks the isolated Montana community. Everyone is quick to point their fingers at a drug-addicted teenager, but local deputy Ruth Vogel isn't convinced. She suspects connections to a powerful religious group. Enter federal agent V.B. Loro, who has been investigating a local church for possible criminal activity. The pair form an unlikely partnership to catch the killer, unearthing secrets that leave Ruth torn between her duty to the law, her religious convictions, and her very own family. But something more sinister than murder is afoot, and someone is watching Ruth. Chinook, starring Kelly Marie Tran and Sanaa Lathan.
Starting point is 00:50:27 Listen to Chinook wherever you get your podcasts. Hi, I'm Jennifer, a founder of the Go Kid Go Network. At Go Kid Go, putting kids first is at the heart of every show that we produce. That's why we're so excited to introduce a brand new show to our network called The Search for the Silver Lining, a fantasy adventure series about a spirited young girl named Isla who time travels to the mythical land of Camelot.
Starting point is 00:50:54 Look for The Search for the Silver Lining on Spotify, Apple, or wherever you get your podcasts.
