Something You Should Know - The Surprising Ways Algorithms Steer Your Life & How to Make Your Ideas Stick

Episode Date: April 1, 2019

Do you ever long for the good old days? Nostalgia can be such a wonderful thing. We begin this episode with a look at why the past can seem so idyllic and wonderful and what the positive effects of looking back are. http://science.howstuffworks.com/life/inside-the-mind/human-brain/nostalgia4.htm

Do you really understand what algorithms are and how they work? You probably should, because algorithms are so often used to influence our decisions about what to buy, what movies to watch, or who to date. Kartik Hosanagar is a professor of technology, digital business and marketing at the Wharton School at the University of Pennsylvania and author of the book A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control (https://amzn.to/2JuStcy). Listen and hear the good and the bad about how algorithms have become part of everyday life.

“Sit up straight and don’t slouch!” How many times have you heard that or said that to someone else? It turns out that posture is really important, and I’ll explain why. http://www.dailymail.co.uk/health/article-2295420/Stand-straight-stay-fightingfit-From-raised-blood-pressure-bloated-stomach-surprising-effectsbad-posture.html#ixzz2OBJ72FZQ

Some ideas really stick and others quickly fade away. Why? What makes an idea sticky? Chip Heath, author of the book Made to Stick: Why Some Ideas Thrive and Others Die (https://amzn.to/2Y9YaQu), joins me to talk about his research into what makes a really good idea resonate with people. The advice he gives is very practical and will help you create ideas that people will fall in love with.

This Week's Sponsors
-Skillshare. For 2 months free access to over 25,000 classes go to www.Skillshare.com/something
-ADT. For more information on smart home security go to www.ADT.com
-Capital One. Visit www.CapitalOne.com What's in your wallet?

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:01 Today on Something You Should Know, are you nostalgic? Do you long for the good old days? I've got some interesting reasons why you might be. Then, understanding algorithms, because they are a big part of your life. Over a third of purchases at Amazon are driven by algorithms. Over 80% of our viewing activity on Netflix is driven by Netflix's algorithms. Almost all the dating matches on apps like Tinder and Match.com, they're driven by algorithms. Plus, check your posture, because bad posture could be causing you a lot of problems. And what makes a really good idea stick?
Starting point is 00:00:41 What we find is that successful ideas of all kinds, ranging from urban legends to important religious ideas, have six things in common. They're simple, they're unexpected, they're concrete, they're credible, they tap into emotion, and many come in the form of stories. All this today on Something You Should Know. As a listener to Something You Should Know, I can only assume that you are someone who likes to learn about new and interesting things and bring more knowledge to work for you in your everyday life. I mean, that's kind of what Something You Should Know is all about. And so I want to invite you to listen to another podcast called TED Talks Daily. Now, you know about TED Talks, right? Many of the guests on Something You Should Know have done TED Talks.
Starting point is 00:01:28 Well, you see, TED Talks Daily is a podcast that brings you a new TED Talk every weekday in less than 15 minutes. Join host Elise Hu. She goes beyond the headlines so you can hear about the big ideas shaping our future. Learn about things like sustainable fashion, embracing your entrepreneurial spirit, the future of robotics, and so much more. Like I said, if you like this podcast, Something You Should Know,
Starting point is 00:01:56 I'm pretty sure you're going to like TED Talks Daily. And you get TED Talks Daily wherever you get your podcasts. Something you should know. Fascinating intel. The world's top experts. And practical advice you can use in your life. Today, Something You Should Know with Mike Carruthers. Hi, welcome. I would describe myself as nostalgic, a bit sentimental.
Starting point is 00:02:26 I like the good old days. And it turns out that this whole idea of nostalgia and longing for the good old days is a complex thing. It's so complex that it was once thought of as a disease. It was the official cause of death. Nostalgia was the official cause of death for 74 Civil War soldiers. Our understanding of nostalgia has evolved a lot since then. No one dies of nostalgia anymore. While some dictionaries equate nostalgia with homesickness, it is not.
Starting point is 00:03:02 Homesickness is about a place. Nostalgia is about a time. Homesickness feels bad. Nostalgia feels good, or at least bittersweet. We also tend to idealize the memory and edit out the bad parts, so it is almost always positive and pleasant. Most people get nostalgic at least once a week. Music, smells, tastes, old photos can all trigger nostalgia. And so can loneliness, a bad mood, and being cold can also trigger nostalgia.
Starting point is 00:03:35 Research shows that nostalgia promotes a laundry list of positive mental states and behaviors, such as higher self-esteem, optimism, and creativity. So nostalgia is really a coping mechanism. It's a tool for picking us up when we're feeling lost or bored or lonely. And that is something you should know. You've probably heard a lot about algorithms and how they are controlling a lot of things and deciding a lot of things. For example, algorithms determine some of what you see or don't see on Facebook or what Netflix recommends you watch next. But it's more important than that. Algorithms can determine which people get called in for job interviews and which people don't,
Starting point is 00:04:25 in many cases. Or who gets a mortgage or credit card approval and who gets declined. So what are these algorithms? Where do they come from and how do they work? Well, the guy to pose these questions to is Kartik Hasinger. He is a professor of technology and digital business and marketing at the Wharton School at the University of Pennsylvania. He's co-founded four different business ventures, and he is author of a book called A Human's Guide to Machine Intelligence, How Algorithms Are Shaping Our Lives and How We Can Stay in Control. Welcome, Professor. Thanks for being here. Well, thanks for having me, Mike.
Starting point is 00:05:05 Of course. So, we hear so much about algorithms and how they determine things and control things, but I suspect that a lot of people are a little fuzzy on exactly what an algorithm is. I'll admit to not understanding it fully. So, what exactly is an algorithm? Yeah, that's a good place to start. Algorithms quite simply are just a series of steps one follows to get something done. For example, when I make an omelet, there's a series of steps I follow. Now that set of steps, you could call it an omelet recipe, but the computer scientist in me calls it an omelet algorithm. And almost every software application you use follows a certain set of steps, and that is the algorithm in the software. So for example, Amazon's recommendation algorithm that says people who bought this also bought these. It has a certain set of steps it follows. It looks at the product that you are
Starting point is 00:06:05 currently viewing. It identifies who else has bought that product. It looks at what else did they buy and then comes up with a count of everything those people have purchased and recommends the most popular items to you. Now, that's the series of steps it follows. That is the recommendation algorithm of Amazon. And similarly, any software application you use has a set of steps it follows. And that's the algorithm within the software. To the casual observer, your explanation sounds great. It's great that Amazon knows what other people who bought what I just bought, bought, and are now recommending some other things that I might like. What could possibly be wrong with that? Look, I think this is great. Algorithms,
Starting point is 00:06:52 whether it's Amazon's algorithm or Netflix's algorithm, or you look at algorithms for driving cars, all of these are great. They actually have a huge impact, usually a very positive impact on our lives. In fact, studies show that over a third of purchases at Amazon are driven by algorithms. Over 80% of our viewing activity on Netflix is driven by Netflix's algorithms. Almost all the dating matches on apps like Tinder and Match.com, they're driven by algorithms. They're even at the workplace. All these loan approval decisions, they're done by algorithms. Recruiting algorithms figure out which of 100,000 applications are worth inviting for a job interview. They're used in courtrooms to guide judges on whether a criminal is likely to
Starting point is 00:07:43 re-offend, and they guide sentencing decisions. They guide doctors in terms of treatment options. So they're all over, but the fact that they're all over and they drive so many of our decisions also means that if they have biases or they have problems, then it could be very problematic. And over the last few years, we have started to see that these algorithms can be prone to human-like limitations and biases sometimes. So for example, a study showed that
Starting point is 00:08:15 algorithms used in courtrooms in the U.S. have a race bias. Another study, and actually many have shown that some of these recruiting algorithms used by companies, they have a gender bias. Another study, and actually many have shown that some of these recruiting algorithms used by companies, they have a gender bias. We've already seen that algorithms used on Facebook to curate which news stories we should read, they had this limitation that they were not able to detect fake news stories, and they actually actively circulated many fake news stories. So given how pervasive they are and how much impact they have on our decisions, not many of us recognize that these algorithms have limitations as well. And so we're starting to see that these biases and other limitations exist,
Starting point is 00:09:00 and that's what is potentially problematic and what we should be careful about. It would seem, theoretically speaking, that you could engineer the bias out of an algorithm, if it is in there in the first place, that algorithms should be more objective than people, that they should work more objectively and therefore more accurately than people do. But they can't really match human intelligence, right? If I were to ask you, Mike, give me all the rules you use for driving. We can spend hours discussing it and we can come up with thousands of rules, but if we unleash that car and say, go drive on the road, it might take 15 minutes for it to have an accident. And what engineers have realized is that, you know, if we actually, instead of programming in every rule, we say learn from data, these algorithms can be actually very effective.
Starting point is 00:09:57 So instead of coding all the rules for driving, we say, here are videos of thousands of people driving over a year, you know, observe and learn the patterns and learn how to drive. Then the algorithm essentially is now not being given the series of steps, but it's looking at videos of how people are driving and it learns how to drive. It's the same thing with screening resumes. Go observe, you know, the million applications we received over the last three years. Look at who we invited for job interviews. Who did we give an offer to? Who got promoted at the workplace?
Starting point is 00:10:29 And learn who are the kinds of people we want at our workplace. So the algorithm looks at the data and learns from that. And it works quite well. But the challenge is if there are biases within the organization where, let's say, women are not getting promoted, the algorithm picks up those biases as well. So the algorithm is basically doing the best it can given the information it has.
Starting point is 00:10:52 That's right. I mean, the algorithm is very much, you know, these are called machine learning algorithms. So the idea is how do you get machines to learn? And their approach is very much like the way humans learn. You know, we, you know, let's say you look at a child, the child first looks at, you know, this animal, a pet at a home, four-legged animal and says, okay, that's a cat. And now it looks at another animal and says, hey, that's a cat. And somebody says, no, no, no, that's not a cat, that's a dog. And now the child learns, okay, so there's a difference between cats and dogs. The whisker tells me, the face shape tells me what's a cat, what's a dog. And next time the child looks at a tiger
Starting point is 00:11:34 and says, hey, that's a cat. And we say, no, no, no, that's not a cat. That's at least not a domestic cat. It's a wild tiger. And so now the child realizes, okay, now there's a difference between these. So as we are seeing more data and we observe patterns, we learn from it. These algorithms are very similar. You give them lots of data and they learn patterns in the data. You give them lots of data, they say, okay, every time there's a stop sign, the car seems to be stopping. So I know when I see something like that, I should stop. And then it learns, hey, if the car in front of me slows down and it's coming closer, then I brake as well. So I don't hit it. And so it's essentially learning these patterns from
Starting point is 00:12:17 data. Nobody's explicitly teaching it, but it's learning much like a child learning. Now, again, if the child is learning in an environment that's biased or prejudiced, that child could pick up those biases and prejudice. And it's the same with these algorithms. So if the data from which it's learning is not ideal, it learns from it. And it's very hard to prevent some of these things because, you know, we want them to learn from data. And when you take large amounts of data, like, you know, a million job applications at a company or, you know,
Starting point is 00:12:53 again, videos of thousands of people driving, even tens of thousands of people driving over a year, that's a lot of data it can learn to drive. But then, hey, did somebody make a mistake while driving? Could it pick up those mistakes? All of those things are issues one has to worry about. And it has a social consequence because, you know, hey, if you're using it in a socially important setting, like deciding which mod cage applications to approve, you know, biases are problematic.
Starting point is 00:13:21 If you're trying to figure out who gets a job, again, biases are problematic. And so that's where some of these challenges arise. We're talking about algorithms, and my guest is Kartik Hasinger. He's a professor of technology, digital business, and marketing at the Wharton School at the University of Pennsylvania, and he's author of the book, A Human's Guide to Machine Intelligence, How Algorithms Are Shaping Our Lives and How We Can Stay in Control. Have you ever heard a topic discussed on this podcast or anywhere else and thought, hey, I'd really like to learn more about that?
Starting point is 00:13:57 Well, a great place to learn almost anything is Skillshare. Skillshare is an online learning community with more than 25,000 classes in just about everything. Today on the podcast, we're talking about how algorithms work, and there are classes in Skillshare on algorithms, coding, software. There's even a class specifically on understanding Facebook's algorithm. And there are classes in so many areas. Mobile photography, social media marketing, entrepreneurship, just about anything you can imagine. I'm just starting the class on mobile photography because I want to take better pictures with my phone. And the class is great. If you're a Something You Should Know listener, you must like learning.
Starting point is 00:14:39 So I have a great offer for you. Two months of Skillshare for free. That's right. Skillshare is offering something you should know, listeners. Two months of unlimited access to over 25,000 classes for free. To sign up, go to Skillshare.com slash something. Again, that's Skillshare.com slash something to start your two months now. Join me and millions of other students already learning on Skillshare
Starting point is 00:15:08 and get two months for free. That's Skillshare.com slash something. Since I host a podcast, it's pretty common for me to be asked to recommend a podcast. And I tell people, if you like something you should know, you're going to like The Jordan Harbinger Show. Every episode is a conversation with a fascinating guest. Of course, a lot of podcasts are conversations with guests, but Jordan does it better than most. Recently, he had a fascinating conversation with a British woman who was recruited and radicalized by ISIS
Starting point is 00:15:42 and went to prison for three years. She now works to raise awareness on this issue. It's a great conversation. And he spoke with Dr. Sarah Hill about how taking birth control not only prevents pregnancy, it can influence a woman's partner preferences, career choices, and overall behavior due to the hormonal changes it causes.
Starting point is 00:16:04 Apple named The Jordan Harbinger Show one of the best podcasts a few years back, and in a nutshell, the show is aimed at making you a better, more informed critical thinker. Check out The Jordan Harbinger Show. There's so much for you in this podcast. The Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you get your podcasts. Contained herein are the heresies of Rudolf Buntwine, erstwhile monk turned traveling medical investigator. Join me as I study the secrets of the divine plagues and uncover the blasphemous truth that ours is not a loving God, and we are not its favored children.
Starting point is 00:16:52 The Heresies of Randolph Bantwine, wherever podcasts are available. So, Kartik, even if there is bias that works its way into the algorithm, isn't it still likely going to be better than leaving it up to humans who are just chock full of biases in so many different directions? That's a great, great point. And I'm glad you brought it up. I think it's really important to ask, hey, if algorithms have problems, then what is the alternative? And the alternative is humans. And we know humans have their own set of biases, as you suggest.
Starting point is 00:17:30 And my message is not so much that, hey, we need to get scared about algorithms and run away from them. And there is, by the way, a lot of fear mongering about algorithms. People use terms that suggest that, hey, these things are going to destroy society and so on. And I'm not one of those. I'm not an algorithm skeptic. I think, first of all, we should note that algorithms, on average, are less biased than human beings. Furthermore, I believe, while there isn't yet evidence for it, because we are just understanding the problem. But I believe in the long run, we will have more success in fixing algorithm biases than fixing human biases.
Starting point is 00:18:10 But I think the algorithms have a different kind of problem. With human biases, they don't scale the same way that algorithm biases do. And what I mean by that is that, let's say you have a biased judge. That judge affects the lives of, say, 500 people. But you have a biased algorithm that is used in courtrooms all over the U.S. that could affect hundreds of thousands of lives, maybe even millions.
Starting point is 00:18:36 If you have a biased recruiter, again, or a biased banker who's making mortgage approval decisions, again, they affect a few hundred people, but an algorithm that can scale, meaning it can make decisions for millions of people, you know, the stakes are higher. And so we should be a bit more careful about these biases. But it would seem that a machine could be more objective and come up, maybe not a perfectly objective, but more objective decision than a human? No, that's not necessary, actually. You could, if you're not careful, if you're not formally testing for biases, if you're not asking all the right questions, and we'll come to that in a second, you know, how do we address this? It is not necessary that a machine is less biased.
Starting point is 00:19:22 In fact, a machine could even amplify biases. So it observes a pattern saying that men are more likely to be promoted, and it says, okay, the data is pretty clear that men are the ones who succeed at the workplace, so let's become more aggressive in selecting men and rejecting women. And that's just an example, but you can apply it to anything, mortgage approval, any of these. Those biases are very easy to amplify. Aren't they very easy to eliminate then? If you don't put gender on the application and the
Starting point is 00:19:57 machine doesn't know if the applicant is male or female, the bias is gone. Yeah, it's not that simple, Mike, because what happens is, in fact, every one of these algorithms that have been shown to be biased, they don't actually even have access to the data on which they're biased, meaning the algorithm that was used in courtrooms that had a race bias, that algorithm did not have access to race as a variable. The algorithms that are shown to have a gender bias, they do not have access to the gender of the person. They actually pick up other things that are correlated with these. And so, for example, you know, you have the zip code of a person or you have the name of a person. It starts to figure out these patterns.
Starting point is 00:20:44 So just saying that we're going to hide these variables from the algorithms, which is what a or you have the name of a person, it starts to figure out these patterns. So just saying that we're going to hide these variables from the algorithms, which is what a lot of companies do, and that's not sufficient is what we're finding. Because, you know, there are so many ways at getting at people's gender and race, and we all can do that even if they don't explicitly identify, this is my gender, or this is my race. And, you know, zip codes tell you that, the name of the person tells you that, or there's so many other things. Can't you just take an algorithm and say, and the best way I can think to say this is, lower the standards on everything so that it isn't just the algorithm doing it, but the algorithm is weeding out the obvious weeds. And then a person takes the rest,
Starting point is 00:21:33 and along with the algorithm, they get better results. Yes, I propose an algorithm bill of rights. And basically, I kind of say, here are a few things we should expect and even demand from our algorithms. I mentioned a couple of those, like transparency and audits. Another one I mentioned is of user control. And I kind of say, you know, engineers are going in the direction of autonomous algorithms, because there is this approach that is used by a lot of product designers and engineers that, you know, if we don't involve the user in the decision-making process, we're simplifying it so much for the users.
Starting point is 00:22:10 They don't have to think about it. So the emphasis has all gone in the direction of completely autonomous algorithms. So the problem with complete autonomy is that it's sometimes even hard to detect these, and even when you observe it as a user, it's hard to correct this. And so I usually say that it's great that algorithms provide so much value, but let's use them. Let's keep a human in the loop. Are there things going on with algorithms that affect me that I probably don't know about or wouldn't suspect are going on? I've often run surveys with people asking, you know, to what extent are these algorithms driving your choice?
Starting point is 00:22:52 And usually I find that people underestimate the impact of algorithms on their decisions. Most people think that, hey, the algorithms are giving me some recommendations. I nod politely and do what I want. But the data suggests that they're actually having a huge impact on our choices. Like I mentioned, 80% of choices on Netflix, over a third of the choices on Amazon, and so on. The second issue that is somewhat, I would say, misunderstood is socially consequential settings. Like I talked about courtroom decisions. Sometimes it's life and death decisions. You know, medicine is moving towards, you know, using a lot of algorithms. One of the big trends in medicine is personalized
Starting point is 00:23:40 medicine. So make decisions based on individuals' DNA profiles. And so algorithms will play a huge role there. You know, which schools your kid is assigned to, an algorithm is often assigning kids to, you know, which is the right public school. And so there's a lot of decisions, you know, which policemen go to which precincts. You know, we talked about mortgage approval, recruiting, and so on. So I think the scale of their impact is not as well understood. When I just think about the word algorithms, I remember, you know, at one time thinking, well, you know, that's an algorithm is a pretty cool idea. But it seems to have more of an image problem now.
Starting point is 00:24:27 There's something that almost seems inherently evil. When you hear, well, they're using algorithms for that, like, oh, wait a minute. Don't you think that the sky is falling? And the people who are saying that, I wonder, well, what's the alternative? Is it still not better than nothing? You know, you hear terms like algorithms of oppression or algorithms, you know, being the cannibal algorithms or algorithms that are destroying society. And I think those are very problematic ways to describe them. And I think it's creating a fear which is unnecessary and even misleading. You know, people understand that there's a lot of potential value here, that
Starting point is 00:25:15 we could eliminate huge biases and limitations in human decision-making and create so much value by using computers to objectively analyze the data. And yes, we have seen many instances of algorithm failures lately, but there is a greater chance that we can correct them than correcting human biases. And furthermore, there's so many other settings in which they create so much value. So let's not have this fear-based conversation. Let's not wallow in self-pity. Let's talk about solutions and move forward. Because you could imagine people hearing that 80% of things people watch on Netflix are algorithm-driven and think, you know, Netflix is manipulating us. On the other hand, maybe Netflix is just hitting it out of the park and doing a great job giving you recommendations that you might not otherwise have found, and kudos to them. Exactly, and it's very hard to say sometimes which
Starting point is 00:26:20 of the two it is, and as long as users are savvy about this, we understand what these algorithms are doing. As long as we are a bit deliberate about our decisions and not saying I'll use it passively and do whatever it says, but actively engaged, you know, I think it's all fine. I think if algorithms create so much value where instead of spending hours making decisions, you spend seconds making those decisions because they can show you what's relevant and what's not for your decision-making, that's great. But as long as we are deliberate about it
Starting point is 00:26:53 and not kind of just blindly following them and understand how these algorithms work at a high level, again, not the details, but at a high level, understand their limitations and ensure they're looking at the right kinds of data. They have, you know, again, transparency helps. If you say, what's the data the algorithm looked at? What were the factors it considered most, you know, first?
Starting point is 00:27:16 And what were the factors that were less important for it? If we understand that, we could say, okay, I'm now happy making this decision in a minute where it would have taken me hours. And boy, can we use, you know, some help in all the complex decisions we make every day. Well, I think I've got a better handle now on what algorithms are, what they do, and what they don't do. And I appreciate you sharing your knowledge. My guest has been Kartik Hasinger. He is a professor of technology, digital business, and marketing at the Wharton School at the University of Pennsylvania. And he's author of the book, A Human's Guide to Machine Intelligence, How Algorithms Are Shaping Our Lives and How We Can Stay in Control. There's a link to his book in the show notes. Thank you, Kartik. Thanks very much, Mike. People who listen to Something You
Starting point is 00:28:06 Should Know are curious about the world, looking to hear new ideas and perspectives. So I want to tell you about a podcast that is full of new ideas and perspectives, and one I've started listening to called Intelligence Squared. It's the podcast where great minds meet. Listen in for some great talks on science, tech, politics, creativity, wellness, and a lot more. A couple of recent examples, Mustafa Suleiman, the CEO of Microsoft
Starting point is 00:28:36 AI, discussing the future of technology. That's pretty cool. And writer, podcaster, and filmmaker, John Ronson, discussing the rise of conspiracies and culture wars. Intelligence Squared is the kind of podcast that gets you thinking a little more openly about the important conversations going on today. Being curious, you're probably just the type of person Intelligence Squared is meant for. Check out Intelligence Squared wherever you get your podcasts.
Starting point is 00:29:07 Do you love Disney? Then you are going to love our hit podcast, Disney Countdown. I'm Megan, the Magical Millennial. And I'm the Dapper Danielle. On every episode of our fun and family-friendly show, we count down our top 10 lists of all things Disney. There is nothing we don't cover. We are famous for rabbit holes, Disney themed games, and fun facts you didn't know you needed, but you definitely need in your life. So if you're looking for a healthy dose of Disney magic, check out Disney Countdown wherever you get your podcasts. Doesn't it just fascinate you how ideas work and how some ideas make it, other ideas just fade away or die on the vine? I love the idea of ideas, and so does Chip Heath and his brother Dan.
Starting point is 00:29:56 Dan was a guest a few years ago here talking about special moments in life, and today Chip is here to talk about ideas. The two brothers have co-authored several books, including Made to Stick, Why Some Ideas Thrive and Others Die. Hey Chip, welcome. So how did you and your brother Dan, how did you decide to work together on the idea of ideas? We're 10 years apart, and we discovered that we had this common interest in what makes some ideas stick with people. And so both of us had had experience at trying to teach and get our ideas across and watching other people succeed and some fail. So explain what you mean and maybe some examples would help of ideas that stuck and maybe some ideas that didn't stick. Ideas that stick, JFK's man on the moon speech, the boy who cried wolf, Aesop's fable that
Starting point is 00:30:52 has stuck for 2,500 years, the this is your brain on drugs campaign from the 80s. There are lots of ideas that stick. And if you want to look for ideas that didn't stick, think back to what you remember about the last presentation you saw or the last memo that you read. Probably zero. And so understanding that ideas are very different, very diverse, is there some sort of common thread that applies to ideas that make it and ideas that don't? There are some common principles underlying ideas. So one of the most common that we see is very concrete, tangible images
Starting point is 00:31:30 that you can see in your mind or imagine. So when John F. Kennedy talks about putting a man on the moon, that puts an image in your mind. When they used the egg in the This Is Your Brain on Drugs campaign, you saw it drop into the skillet and you heard the sizzle. That's a tangible, concrete image. But unfortunately, most of us, when we try to communicate our ideas, we talk in abstractions, and that prevents our ideas from sticking. Think of the average Joe CEO talking about needing to increase shareholder value versus something more concrete, like Herb Kelleher when he founded Southwest Airlines.
Starting point is 00:32:13 His big thing was that we are the low fare airline. Yeah, so there is a principle called the curse of knowledge that we talk about in the book. When we become experts, we think about the world in abstract ways. So if you've ever had a conversation with the IT guy about what's wrong with your computer, he knows what he's talking about, but he talks about it at such an abstract level. And what you're wanting him to do is to tell you which button to press to fix the problem. And so the CEO who's talking about maximizing shareholder value is hearing a song playing in his mind that's not coming across to the rest of us.
Starting point is 00:32:45 What we need is somebody like Herb Kelleher to say, you know, we are about being the low-cost airline. I remember hearing some time ago, and you talk about it in the book too, the problem of giving people choices. One of the things that we found in researching the book is that if people have two good choices, they're actually less likely to choose either than if they have one good choice. And so many times in life we're confronted with, you know, eight core values for our organizations or a 13-point policy plan by a politician. How are we going to make choices about priorities when we're confronting that many options? Yeah, and you just pointed out that it's not like you need a lot of decisions to screw you up, just two screws you up.
Starting point is 00:33:29 Yeah, even two does it. Because choice brings on this paralysis, right? Yeah, we talk about decision paralysis, and there's just lots of research that says that even two good options make us much less likely to choose either. And I know you talk about it, and I've heard other people talk about the idea that when you're trying to get your point across, when you're trying to convince people to pay attention, simple is better, and yet when we do it, we tend to explain things to death. Yeah, and that's the curse of knowledge kicking in again. As an expert, we know so much. So when we're talking to our kids about why it's important to keep an honest name, or we're coaching youth sports leagues,
Starting point is 00:34:14 the reason we got to be a coach is because we know a lot about the sport. But as a beginner, what you need to do is focus on one principle that we need to learn this week or this month. Yeah, and I've seen that, like when coaches coach Little League or they coach soccer, they sometimes overwhelm the kids with so much information that it's hard to really grasp what they need to learn. And you talk about, you know, finding your core message. So if you remember the Jared Subway sandwich campaign, the core of their message is we have really healthy fast food. And the campaign before
Starting point is 00:34:52 Jared (the story about this guy who dieted down from 425 pounds to 190 pounds) was a slogan: seven sandwiches under six grams of fat. Subway had found the core of their message, but they implemented it in very different ways. And the story of Jared, with that concrete image of this guy holding out these enormous pants, that stuck with people. The seven sandwiches under six grams of fat didn't stick with anyone. Because no one can relate to six grams of fat. I mean, I don't even know what that is or what it looks like, really.
Starting point is 00:35:24 Exactly. And when we're doing our presentations at work, very often we're more in the seven sandwiches under six grams of fat arena. We marshal our facts, we get lots of details together, when what's really going to stick with our audience and lead them to take action is a story or a concrete example. There is something that happens that has really fascinated me for a long time, and it can happen whether you're writing a report for work or a paper for school, or it even applies to podcasting, where people get so into it, so close to it, they make it hard for people to really understand what they're trying to say. And this is an old newspaper problem that you talk about. Yeah, there's an occupational hazard that reporters face,
Starting point is 00:36:10 and it's called burying the lead. And this happens a lot with reporters who have done lots of research for a story. So you're a Washington journalist, and you've done a lot of research for a story about politics. Very often, the thing that is most relevant for the reader ends up way down in the story, and editors call that burying the lead. The trick as a journalist, and the trick for all of us, is to get the most important new piece of information, the most central core idea, in that first sentence, in that first paragraph of the story. And reporters become very good at that, but most of us don't have the experience of prioritizing when we write an
Starting point is 00:36:51 email to somebody or when we give a speech. We should adopt that same discipline that journalists have. Tell the story of Nora Ephron, the writer, the journalist, the filmmaker. Nora Ephron was a high school journalist, and her first day in class, this is her first experience as a journalist, she walks in, and the teacher immediately gives them an assignment. He gives them a set of facts: that next Thursday the principal of Beverly Hills High School has announced
Starting point is 00:37:18 that the faculty will travel to Sacramento for a colloquium about new teaching methods. Margaret Mead is going to be there; the California governor, Pat Brown, is going to be there. And what he asked them to do with this list of facts is to create a lead for a story. And so the students worked away. Most of them just reordered the facts: High school principal Ernest Brown has announced that the faculty will be traveling next Thursday to see a colloquium by Margaret Mead and Governor Pat Brown. Now, he collects all those, and he riffles through them at the front of the class,
Starting point is 00:37:50 and he looks at the class and says, the lead of the story is: there is no school next Thursday. They had all missed it. You know, they were getting bogged down in the facts, and they hadn't thought about the implications of those facts and conveyed that to the reader. That was her first lesson as a journalist, and it's a good first lesson for the rest of us. As a teacher, if I had one moment in a decade of teaching that is as good as that exercise,
Starting point is 00:38:15 I would hang up my hat and call it a day. Great. And so you say that good ideas have some things in common. What we find is that successful ideas of all kinds, ranging from urban legends to important religious ideas to sticky political ideas, have six things in common. They're simple, they're unexpected, they're concrete, they're credible, they tap into emotions, and many come in the form of stories. And imagine if when people were trying to convey their idea, tell their story, if they kept those six things in mind, how much better it would be
Starting point is 00:38:52 than this kind of usual abstract way we talk. And so when we're working with our kids, when we're working with our co-workers, very often they don't share all the knowledge that we have. I mean, we've spent weeks or months getting the right idea at work, or we've spent years accumulating the experience we're trying to pass along as parents. And so very often we tend to talk in abstract ways. We tell our kids, you know, having a good name and a good reputation is really important. And what they're hearing is blah, blah, blah, blah, blah. Now, Aesop's fable about the boy who cried wolf
Starting point is 00:39:25 has been conveying that sentiment for 2,500 years or more, and it's probably much more effective than our very abstract pieces of advice. Yeah, and even things like a bird in the hand is worth two in the bush, or the golden rule, these things have been around for a long time because they follow the rules that you're talking about. And they cut across cultural boundaries. Very often we think, you know, we've been taught by marketers that you have to segment your message and know your target audience. But there are things that all of us have in common. And something like a bird in the hand that's worth two in the
Starting point is 00:39:58 bush is in 53 different languages. It's an idea that resonates with people because it's concrete and it talks about a trade-off that we make in life: do we take the sure bird in the hand, or do we take a risk on trying to catch the two in the bush? And it's very visual. You can picture the bird in the hand and the bush. Exactly. It's a little bit unexpected. So it has at least three of the properties that we've talked about. And the golden rule? And the golden rule is a classic, important piece of advice that has changed people's behavior for a long period of time. And it's easy for us to picture how we would want to be treated.
Starting point is 00:40:35 And so if we can use that to treat other people, we're going to be way ahead. Have you ever come across an idea that disobeyed the rules that you're talking about here and still succeeded? I haven't come across an idea that violates them. It's certainly true that false ideas can very often have many of these properties and succeed wildly. So one of my favorite false ideas is that you only use 10% of your brain. Now, who did that research study? Yet all of us have heard that, and I've talked to people in Indonesia and Japan and Turkey who have heard it in their own culture. But if 90% of the stuff up there was cushioning,
Starting point is 00:41:15 football players wouldn't need helmets. If we think about why that idea succeeds, it's simple. It's got a little bit of credibility, because that 10% sounds really specific; somebody must have done research on that. And, more importantly, it's really, really unexpected. We all think of the brain as an important organ, and so the idea that we're not using 90% of it really sticks with us. And yet it's total baloney. It's total baloney. I wonder where it did start. Actually, there are folklorists that have studied this kind of thing,
Starting point is 00:41:45 and here's the earliest account of this that has been found. We might think that this goes back to the 80s, when we started becoming interested in the brain and brain imaging, but actually, folklorists have traced that idea in our culture back to 1924. So this is an idea that's been circulating for 80 years at least, with no advertising dollars, no public relations assistance, and yet it survives and spreads on its own. And that's kind of the definition of an urban legend.
Starting point is 00:42:16 Exactly. So urban legends, rumors, but also on the positive side, the proverbs that we were talking about earlier that provide useful advice, or every religious tradition has a set of stories that help people live a better, more moral life. You know, what's interesting is when I listen to you speak, when anybody would listen to you speak and look at the book, you have to come to the conclusion that this sounds right, this makes a lot of sense, but I haven't heard anybody put it all together this way before, although you do give a tip of the hat to Malcolm Gladwell. Yeah, Malcolm Gladwell did a
Starting point is 00:42:51 great job in The Tipping Point at talking about the idea of stickiness, that social epidemics become epidemic because they stick with people. And what Dan and I, because of our background, have been interested in is really that question: how do we get our ideas across? And in surveying the greatest hits of humanity on sticky ideas, ranging from the Bible to Aesop to modern ideas like JFK's man on the moon speech, what we found was that we kept seeing the same principles. We kept seeing the concrete imagery. We kept seeing the emotional tie.
Starting point is 00:43:23 We kept seeing that many of these come in the form of stories. And eventually we struggled with it enough to realize that there is this deep underlying consistency. And I enjoyed the example that you gave about TV remotes. The curse of knowledge is really well exemplified by the engineers who create TV remote controls. I mean, who can use that thing other than the engineer that initially designed it? And the reason is because the experts are trying to pack as much as they can into their product. And what the rest of us need is a really simple device. Another device that had similar properties was the original Palm Pilot.
Starting point is 00:44:02 The founder of that team that created the Palm Pilot used to walk around encouraging his engineers to keep that device simple. It only did four things, but it did them really well. He actually had a kind of visual proverb that he would carry with him. He had a block of wood in the shape that they wanted the Palm Pilot to be.
Starting point is 00:44:19 And every time an engineer would propose an additional feature, he'd pull out the block of wood and say, where's it going to fit on this device? We're not going to have room for peripheral ports. We're not going to have room for an extended keyboard. We're designing a really simple, elegant device. And you talk about, and I've talked with other guests on this podcast about, the importance and the magic of stories. That when you want to make a point, it's better to tell a story than to give facts and figures; that stories are magic. They are magic. And in doing the research, what we found is that a way of thinking about stories is that they're flight simulators for
Starting point is 00:44:59 our brains. You can tell your kids, you know, truth-telling is really important, but if you tell them the story of the boy who cried wolf, they are living through the story, and they themselves are learning to distrust this boy who's repeatedly crying wolf. And it's not surprising at the end of the story that the ending is bad, because you predicted it all along. And so, which is better, telling people an abstract piece of advice, or letting them experience it in this kind of mental flight simulator? And yet, I can imagine someone listening to you and saying, well, he's just dumbing everything down. He's trying to take everything and make it into an oversimplified, boy-who-cried-wolf kind of explanation, and some things just don't
Starting point is 00:45:43 fit that. They're more complicated than that. Well, nobody accuses the golden rule of being a soundbite. So there is a sense in which, when we find the real core of our message, the essence of an idea, a man on the moon in a decade is a really simple idea, but it also encapsulated lots of hopes and aspirations of the whole nation. And that's the standard that we want to aspire to. Dumb soundbites are dumb, but really important core ideas can transform people and societies. Well, there are times in everyone's life when they have to make their point, sell their idea, and you've made it pretty clear why some ideas stick and why some ideas don't. Chip Heath has been my guest.
Starting point is 00:46:31 He and his brother Dan are the authors of the book Made to Stick: Why Some Ideas Thrive and Others Die. You'll find a link to the book in the show notes. Unless you're lying down right now, you're probably either... well, you have to be either sitting or standing. And the question is, are you sitting or standing up straight? You should check your posture, because bad posture can really screw things up. Here are some surprising side effects of slumping. People who walk with bad posture report feeling more depressed and having lower energy levels. Slouching can raise your blood pressure by inhibiting blood circulation. Chronic slumpers
Starting point is 00:47:13 often wind up with leaky bladders due to weakened muscles. Poor posture can give you heartburn by pushing everything up towards the esophagus. Slouching can also trigger headaches and asthma attacks because it can inhibit oxygen intake. Bad posture can even take a toll on your confidence and concentration. Students who sit up straight for tests tend to score better than those who slouch. And that is something you should know. Questions, comments, or just to say hi, you can always email me. My direct email is mike at somethingyoushouldknow.net, and there is also a contact form on the website, which is somethingyoushouldknow.net. I'm Mike Carruthers. Thanks for listening today to Something You Should Know.
Starting point is 00:48:04 Welcome to the small town of Chinook, where faith runs deep and secrets run deeper. In this new thriller, religion and crime collide when a gruesome murder rocks the isolated Montana community. Everyone is quick to point their fingers at a drug-addicted teenager, but local deputy Ruth Vogel isn't convinced. She suspects connections to a powerful religious group. Enter federal agent V.B. Loro, who has been investigating a local church for possible criminal activity. The pair form an unlikely partnership to catch the killer,
Starting point is 00:48:36 unearthing secrets that leave Ruth torn between her duty to the law, her religious convictions, and her very own family. But something more sinister than murder is afoot, and someone is watching Ruth. Chinook. Starring Kelly Marie Tran and Sanaa Lathan.
Starting point is 00:48:52 Listen to Chinook wherever you get your podcasts. Hi, I'm Jennifer, a co-founder of the Go Kid Go Network. At Go Kid Go, putting kids first is at the heart of every show that we produce. That's why we're so excited to introduce a brand new show to our network called The Search for the Silver Lining, a fantasy adventure series about a spirited young girl named Isla who time travels to the mythical land of Camelot. During her journey, Isla meets new friends,
Starting point is 00:49:21 including King Arthur and his Knights of the Round Table, and learns valuable life lessons with every quest, sword fight, and dragon ride. Positive and uplifting stories remind us all about the importance of kindness, friendship, honesty, and positivity. Join me and an all-star cast of actors, including Liam Neeson, Emily Blunt, Kristen Bell, Chris Hemsworth, among many others, in welcoming the Search for the Silver Lining podcast to the Go Kid Go Network by listening today. Look for the Search for the Silver Lining on Spotify, Apple,
Starting point is 00:49:48 or wherever you get your podcasts.
