Something You Should Know - What AI Is Really Good At & That Feeling You Get When You Don’t Fit In

Episode Date: April 30, 2026

What makes someone—or something—attractive? It may have less to do with beauty and more to do with how easily your brain can process what you’re seeing. There’s a hidden pattern behind what we... find appealing, and it shows up in more places than you might expect. https://pmc.ncbi.nlm.nih.gov/articles/PMC3130383/ Artificial intelligence is evolving rapidly, and it’s easy to feel like you’re either falling behind—or overestimating what it can actually do. So what is AI truly good at right now? Where does it fall short? And how can you use it effectively without getting lost in the hype? Christopher Mims, technology columnist for The Wall Street Journal and author of How to AI: Cut Through the Hype. Master the Basics. Transform Your Work (https://amzn.to/3Qtnd0n), breaks down what today’s AI tools can realistically do, how they differ, and how to get real value from them in your everyday work and decision-making. Have you ever been in a room where you felt like you didn’t quite belong? Maybe you held back from speaking, worried about saying the wrong thing, or felt subtly out of place. That feeling has a name: “churn.” Claude Steele, social psychologist at Stanford University and a leading researcher on identity and perception, explains how this tension arises and how it shapes behavior in powerful ways. In his book Churn: The Tension That Divides Us and How to Overcome It (https://amzn.to/4tZoQl9), he offers insight into why we feel this way—and what we can do to move through it with more confidence and connection. Should you shut your computer down when you’re done with it—or just let it go to sleep? It seems like a small choice, but the answer may not be what you think—especially given how much technology has changed.
https://support.microsoft.com/en-us/windows/shut-down-sleep-or-hibernate-your-pc-2941d165-7d0a-a5e8-c5ad-8c972e8e6eff PLEASE SUPPORT OUR SPONSORS POCKET HOSE: For a limited time, when you purchase a new Pocket Hose Ballistic, you'll get a FREE 360 degree rotating pocket pivot and a FREE thumb drive nozzle! Just text SYSK to 64000 RULA: Thousands of people are already using Rula to get affordable, high-quality therapy that’s actually covered by insurance. Visit https://Rula.com/sysk to get started. QUINCE: Refresh your wardrobe with Quince! Go to https://Quince.com/sysk for free shipping on your order and 365-day returns. Now available in Canada, too! SHOPIFY: See fewer carts go abandoned with Shopify and their Shop Pay button! Sign up for your $1 per month trial and start selling today at https://Shopify.com/sysk PLANET VISIONARIES: We love the Planet Visionaries podcast! In partnership with The Rolex Perpetual Planet Initiative. Listen or watch on Apple, Spotify, YouTube or wherever you are listening to this podcast. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Amazon presents Laura versus Fruitflies. Swarming your fruit and terrorizing your kitchen, these little freaks multiply at a rate that would make a rabbit say, yo. Chill. But Laura shopped on Amazon and saved on cleaning spray, countertop wipes, and fly traps. Hey, fruit flies, your baby boom ends here. Save the Everyday with Amazon. Today on something you should know, what makes someone attractive?
Starting point is 00:00:37 The answer will surprise you. Then the very latest on AI, what's new, what it's good at, and which AI tool is best for you. The one that is best for you often is the one that you are already devoting the most time to, and that's because of context. These AIs become a lot more useful when they just know more about you, what it is you want to do, what your preferences are. So should you leave your computer on all the time or not? And certain interactions with people who are different than us can feel tense. That feeling is called churn. Churn is my term for the tension that we can all feel in diverse settings.
Starting point is 00:01:20 Classrooms, workplaces, you know, what to say and what not to say and how to behave. All this today on something you should know. Need a vehicle that isn't afraid to make a splash? That's the Volkswagen Taos. Capable and confident, the Volkswagen Taos is fit for everyday life. Nimble in traffic, agile in tight spots, and still spacious enough for weekend getaways. And available 4MOTION all-wheel drive gives confidence in rain and snow. The capable Taos, you deserve more confidence.
Starting point is 00:01:54 Visit vw.ca to learn more. The SUV, German-engineered for all. Something you should know. Fascinating intel, the world's top experts, and practical advice you can use in your life. Today, something you should know with Mike Carruthers. Have you ever wondered what makes someone attractive? It may have less to do with beauty and more to do with your brain.
Starting point is 00:02:24 And I'll explain what that means as we begin this episode of something you should know. Hi, I'm Mike Carruthers and thanks for being here. So research shows that we tend to prefer faces and find them attractive if they're easy to process, what scientists call processing fluency. That usually means faces that are more average, more symmetrical, and less visually complicated. We see that as attractive. Why? Because your brain likes things it can understand quickly. When something is easy to process, it actually feels better,
Starting point is 00:02:59 and we interpret that feeling as attractive. And this doesn't just apply to people. It's why simple logos like the Nike Swoosh or Coca-Cola script are also very effective. They're easy for your brain to recognize and remember, so you like them more. So when you find someone attractive at first glance, it may not be because they're objectively more beautiful. It may be more about how hard your brain has to work when you look at them. And that is something you should know. AI is everywhere.
Starting point is 00:03:39 You hear about it constantly. We've addressed various aspects of AI on this podcast about how it's going to change work and creativity and everyday life. But how many people actually use AI in a meaningful way? Beyond the headlines and the hype, what is AI really good at? And where does it fall short? And if you want to use it, how do you use it without wasting time
Starting point is 00:04:03 or getting misled. Because for most people, the challenge isn't access to AI, it's understanding how to use it effectively. My guest has been thinking about exactly that. Christopher Mims is a columnist for the Wall Street Journal who covers technology, and he's developed what he calls 24 laws of AI, practical guidelines to help people make better use of it.
Starting point is 00:04:30 He's author of a book called How to AI: Cut Through the Hype. Master the Basics. Transform Your Work. Hey Christopher, welcome to something you should know. Hi, Mike. Thank you for having me. So if you were to take a snapshot of AI usage today, where are we? How many people use it? How many people are afraid of it?
Starting point is 00:04:51 What do they use it for? What's the status quo right now? Roughly, more than half of people are using it regularly, at work and in their regular day-to-day life. But an equal proportion are concerned that it's going to be worse for humanity on balance than it's going to be a boon. And the people who are using it, what are they using it for? People are still mostly using AI as an answer engine.
Starting point is 00:05:24 So I think this is one reason ChatGPT is the Kleenex or the Xerox of the field, right? You say AI, that's the first thing people think of. Obviously, Google has made it available by default in their search results. So people are mostly treating it as an oracle, a source of truth. But you also see a lot of folks kind of poking at it as a reasoning engine. So they'll say, you know, what do you think of X? Is this image a fake image?
Starting point is 00:06:04 For example, is this image an AI-generated image? You know, or increasingly they will say, here's my tax return. You know, where are some areas I could save money? Or, I want an alternative to QuickBooks. Help me generate a spreadsheet, right, to run my life or my finances. And when people use it to say, look at my finances and tell me what you think,
Starting point is 00:06:24 you know, my concern is always, when I use it, it's always very encouraging, always tells me how smart I am. And it also seems very confident in its answers. Yet we also hear that it can also make up things. It can lie. It can hallucinate. And it will do that confidently.
Starting point is 00:06:46 So I don't know how accurate it really is. Absolutely. I mean, my favorite day-to-day reminder of this, when I was writing my book, I used software called Otter to automatically transcribe all of the interviews that I did. There's lots of AI transcription software. It all works approximately the same at this point.
Starting point is 00:07:07 And it's very funny and a little bit strange to read the transcript of an interview you've just conducted, and to see, in the AI-generated summary of the AI-generated transcript, a very confident and obviously wrong statement. And it's really interesting how sometimes you can even kind of guess how it arrives at that incorrect conclusion, you know, it's close to what was actually in the transcript. But anybody who works, you know, with AI regularly, whether they're using it to generate code or populate spreadsheets or give answers, knows that they're just ineradicable, these so-called hallucinations that it has. And I understand from talking to people, and I've read that the whole hallucination
Starting point is 00:07:56 and lying thing with AI is getting better, right? That the technology is improving and it lies less than it used to. That's true. But we have to keep in mind that it used to be hallucinating constantly. So if you go from hallucinating 20% of the time to 5% of the time, that is a significant improvement. But has it improved so much that it opens up, you know, applications where you can trust the AI completely? Definitely not.
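The jump from 20% to 5% can be made concrete with a small sketch. This is a hedged illustration, not anything from the episode: the error rates are the guest's rough figures, and treating each answer as an independent coin flip is an assumption added here. Even at 5%, a task that chains many answers together still fails often:

```python
# A rough, illustrative calculation: the 20% and 5% hallucination rates are
# the guest's ballpark figures, and the assumption that errors across steps
# are independent is added here for the sake of the sketch.
def chance_of_at_least_one_error(per_step_error: float, steps: int) -> float:
    """Probability that a chain of `steps` answers contains at least one
    error, if each answer independently errs with `per_step_error`."""
    return 1 - (1 - per_step_error) ** steps

for rate in (0.20, 0.05):
    risk = chance_of_at_least_one_error(rate, 20)
    print(f"{rate:.0%} per answer, 20 chained answers -> {risk:.0%} risk")
# 20% per answer, 20 chained answers -> 99% risk
# 5% per answer, 20 chained answers -> 64% risk
```

Which is one way to see why a fourfold improvement still doesn't open up applications where you can trust the AI completely.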
Starting point is 00:08:24 Is there a difference between the different chatbots, things like ChatGPT and Claude? And there is this sense that they're kind of all the same, but you also see ads that say this one's better for this application and this one's better for writing and this one's better for whatever. And do we have any really good guide for that? I can tell people on any given day which AI they should be using for which application.
Starting point is 00:08:59 The problem is two months from now, I might have to give them different advice. So they are all in an arms race with each other to match each other in terms of features and capabilities. So as of April 2026, if you want to write code, you're going to use OpenAI, or you're going to use Claude. If you just want to get questions answered,
Starting point is 00:09:30 Gemini is great because it taps into Google's entire search database, although, of course, ChatGPT also taps into that database, but in a way that Google doesn't like but can't stop, because ChatGPT and others are paying companies to scrape Google's search index. Don't use Copilot for most things. I mean, Microsoft's kind of having a challenging time right now. Copilot just doesn't,
Starting point is 00:10:01 it's not competitive with those other options. If you want to use a chatbot that's pretty good with Excel, you would go with Claude. Although Gemini and ChatGPT are catching up. So it really depends, but the bottom line is that if you play with them or you read around on the internet a little bit,
Starting point is 00:10:29 And often, the one that is best for you is the one that you are already devoting the most time to, and that's because of context. So whatever their abilities, these AIs become a lot more useful when they just know more about you, what it is you want to do, what your preferences are,
Starting point is 00:10:50 what your writing style is, what you're working on these days. Yeah, well, that seems like a big issue because, you know, I use ChatGPT for a lot of things and there might be a better one,
Starting point is 00:11:05 but ChatGPT knows so much about me that it seems like starting over would not be beneficial. Yeah, with chatbots, my advice is the same as when people ask me, should I switch to, you know, a new desktop computer OS or phone operating system or something?
Starting point is 00:11:21 I just say no. You know, the switching costs are always so much greater than whatever you're going to gain. You know, it's really a love-the-one-you're-with type of situation. People who are really invested in ChatGPT have been using it for years. They just stick with ChatGPT. If you're relatively new to this, but you use Gmail and Google Calendar and Google Docs, then Google's personal intelligence and a, you know, $7 a month Gemini subscription is going to do wonders for you. If you are already really bought in on Perplexity, for example,
Starting point is 00:11:56 I know somebody who surprised me recently by saying, you know, I love their chatbot. That's the one I use all the time. Should he switch? No, because that's where he's invested all of his time and energy. And it knows so much about him.
Starting point is 00:12:10 So we are really entering this age of personalization with AI, where that matters more than anything in terms of what they can do for us. So generally speaking, what is, at this particular moment in time, what is AI good for and what is it not good for? Generally speaking, AI can be great at basic tasks like helping you edit a document, generate a presentation, right? Bouncing ideas around. That's what these generative AIs, these chatbots that we use, are great at. There's, of course, a vast universe of, you know, what I call classic AI. All the AI that came before the ChatGPT moment, when it broke onto the scene in November of, what was that, 2022 now.
Starting point is 00:13:04 But those tend to be the kind of things that get used by scientists or, you know, Meta or Google. You know, these are the classic machine learning and deep learning systems that are populating our social media algorithms and giving us directions in our mapping app of choice. So we're using that kind of AI all the time, right? So AI is really great at helping us find things that we might want to consume or getting us to our destination. But that kind of AI is invisible.
Starting point is 00:13:38 And so when people ask, you know, what is AI good at? I tend to interpret that as, what is modern generative AI good at? And one reason that question is a little bit tricky is that its abilities are expanding every day. Partly that's because the models are getting quote-unquote smarter, but in no small part it's because these chatbots are becoming a default interface for
Starting point is 00:14:06 existing software. So if you ask Claude, for example, to do something for you that involves or requires a calculation, a couple of years ago, it would have confidently answered using its large language model, its language-based brain. And it would have been wrong. Today, it will write some code, which will actually do the math, and it will give you an answer that is much more likely to be correct. So these chatbots are becoming more capable, in part,
Starting point is 00:14:42 because they are getting better at using existing software on the internet, writing their own little programs to answer our questions, that sort of thing. They're becoming tool users, right? Just like us. As good as AI is, it's not good at everything. And I want to ask you what you think AI is not good at in just a moment. RBC Training Ground has discovered potential in over 20,000 Canadian athletes and counting. Your story could be next.
Starting point is 00:15:14 If you've got the drive, they'll help you find your path to the Olympics. Let's see what you've got. Sign up for free at rbctrainingground.ca. There's something else here now. Something new. From, exclusively on Paramount Plus, it's the series Stephen King calls scary as hell. Everything here is impossible, but it's also real.
Starting point is 00:15:37 Sci-Fi Vision calls it the best show streaming right now. We're running out of time and we still don't know the rules. Don't miss what The Movie Blog calls something you need to watch. Saving those children is how we all go home. From. Binge all episodes exclusively on Paramount Plus. So Christopher, as amazing as AI is for what it does well, what does it not do so well? What is it bad at?
Starting point is 00:16:04 Well, there was a brief vogue where people would show off on X or someplace like that using something like ChatGPT's mobile app, and they would show it a picture. You know, they would say, you know, here is a, you know, a valve in this pipe in my house that broke. How do I fix it? And sometimes they were really surprised by its ability to not just reason with language, but to see,
Starting point is 00:16:37 to look at an image of something and identify its contents or suggest ways to solve a problem. You know, I mean, Google has had an image recognition feature in its Google app that will help you identify plants or anything for years. But the truth is, when you really kind of poke into the guts of it and you start to try to trick it, today's large language models, even the ones that are called vision language models, which means they have the ability to parse images as well, they're so bad at seeing that they kind of border on being blind. Like, there are basic tests. You can hold up two pens or something,
Starting point is 00:17:17 and it's not that hard to trick it into saying, oh, I see three or I see one or I see no pens. That's something these models are pretty terrible at. Another thing that they're bad at is anything that is outside of their training data. In other words, a human being, we have in our head, you know, a model of the world. And we don't really know how we construct this model, but it's separate from our language faculty. We do know that from modern neuroscience and brain studies.
Starting point is 00:17:46 So we're able to learn in one context based on very limited data and apply that in many other contexts. But today's AIs, if you show them something that is just completely outside of anything they've ever seen, they just break, because they're just statistical prediction machines. Or if you show them something that they rarely see, they break. But you said, you said that you give it something that it doesn't know and it breaks. But my sense is, and my experience has been, if you ask AI something, it never says, I don't know. It always comes up with an answer; even a nonsensical question will get an answer. That's true. They are, to their credit, I think Google especially, getting a little better at having their AIs answer in the negative, if you ask. But generally,
Starting point is 00:18:43 the way that AIs work fundamentally, they're just going to kind of go with whatever you feed them. And they certainly have no ability to evaluate the truth or falsity of a statement the way a human would. Where do you think it's headed? In other words, with so much technology, you know, you can look back and say, God, I can't believe it was so archaic back then. What are we going to be saying? And I know it's not fair to ask you to predict the future. But what in 10 years, 20 years are we going to look back at today and go, oh, my God, can you imagine how archaic ChatGPT and all these large language model things were?
Starting point is 00:19:27 Well, I think in 10 years, we're going to have many more examples of AI being given too much latitude and too little supervision and doing things that are disastrous. And so we're going to look back and say, wow, I can't believe we ever trusted such primitive systems to, you know, automatically target and engage certain types of enemies on a battlefield, and look where that got us, for example. I also think that potentially in the next 10 years, there could be a breakthrough that is on the same level as what happened when ChatGPT broke onto the scene. And that's going to require a new kind of fundamental underlying system for that artificial intelligence. The one that underlies ChatGPT and all large language models and many other things besides is called a transformer architecture.
Starting point is 00:20:31 The real secret of modern generative AI is that at Google, somebody invented a thing called the transformer architecture, Google didn't know what to do with it, and OpenAI figured it out. The rest is history. We could have another breakthrough, another new fundamental kind of architecture, that will get us to the next level of, you know, reasoning in AIs. Perhaps they'll start to be able to reason abstractly. Perhaps they'll be able to learn from just a single example. That would be a huge breakthrough, right? Now, if you want to train an AI, of course, you've got to have thousands of examples of something.
Starting point is 00:21:15 And that's very inefficient. But humans are amazing. You can show a human something once. You can show an octopus something once, and they'll repeat it. Imagine if AI could do that. You know, I wanted to ask you, because when I use ChatGPT, I don't use the voice thing. I type. And the reason I type is because I think you have to be a little more thoughtful when you type
Starting point is 00:21:38 But I also worried, too, about whether it would really understand what I was saying. And what about that? What about ChatGPT and the others in their ability to understand voice and to talk back? The voice recognition is outstanding. And the voice synthesis is really good, almost uncanny. So something that I've written about is these incredible leaps and bounds we've made in both understanding and production of speech by computers, which for some folks are leading us to a world where they barely type anymore. I mean, you go into certain startups, and at every desk there's a gooseneck
Starting point is 00:22:19 microphone, people are wearing headphones, whispering into those microphones prompts for their coding agents rather than typing out code. It's pretty remarkable. I guess my concern about having a conversation with AI where I'm just talking off the cuff is if I say something that maybe I didn't mean to say, or it hears something, and we end up going down a rabbit hole that I never intended to. But if I type it out, I'm a little more thoughtful and careful. Well, the thing about having a conversation with an AI, which makes up for the fact that you're just speaking off the cuff, is that it can be iterative. So when I'm typing, if I'm typing a specification for an AI that's going to create a program for me, I'm sitting there doing all the
Starting point is 00:23:12 mental labor myself. I've got the blank page in front of me, the most daunting thing in the world. When I'm conversing with an AI, I can be like, hey, you know, I got this rough idea. Help me think it through. And of course, you're going to go back and forth. Now, the difference is the AI is going to influence your thinking. You have to be okay with that. Okay, so you're a tech columnist for the Wall Street Journal, and you know a lot more about this than I do, than most people do, I think. What's one thing you really think people need to get about AI today? Here's the most concrete, most practical advice I can give anyone about AI today. If you're not paying for AI, you are not experiencing AI as it exists today.
Starting point is 00:24:06 Free versions keep getting better, but when you start to pay for, it doesn't matter, Claude, ChatGPT, Gemini, you are tapping into their best models. You are always logged in. So you are building the database that the AI has about you, its memory of what you're up to. And you're able to start to connect it to other things on the internet. For example, if you activate personal intelligence in Google and you're paying for a pro subscription,
Starting point is 00:24:41 when you're having a conversation with it and you're starting to ideate, it'll say, oh, you know, is this related to this thing that you're working on that I know about because I can see every single one of your Google Docs? And frequently, my answer is yes. Thanks for reminding me. That is a whole other level of utility for AI.
Starting point is 00:25:04 You have to pay for AI to get the full value out of it. You don't have to pay a lot. $20 has been the standard. You can pay less. If you go to Google, there's even like a $7 or $8 subscription now, which will get you Gemini Pro. Well, you know, we've talked about AI several times, and yet we always come at it a little differently with different people who have a different perspective, and I really appreciate yours. I've been talking with Christopher Mims. He's a columnist for the Wall Street Journal. And the name of his book is
Starting point is 00:25:38 How to AI: Cut Through the Hype. Master the Basics. Transform Your Work. There's a link to his book at Amazon in the show notes. And Christopher, thank you so much for explaining it the way you did. I really enjoyed it. Yeah, Mike, this was a lot of fun. I appreciate your really thoughtful questions.
Starting point is 00:25:57 You know, sometimes with these interviews, it can feel a little rote. But I feel like the questions you asked challenged me in ways that I found really rewarding. This episode is brought to you by Nespresso. Hear that, that's your next obsession. Every coffee, a new world. Every sip, a new taste. This is the new Nespresso.
Starting point is 00:26:19 One touch, endless possibilities. Iced, flavored, long, short. Because some days call for that espresso kick. And sometimes a smooth, silky latte just wins. It's exceptional but effortless. Like, actually effortless. Simply press, brew, and explore. Nespresso, what else?
Starting point is 00:26:36 Keep exploring at nespresso.com. Hey, it's Hillary Frank from The Longest Shortest Time, an award-winning podcast about parenthood and reproductive health. We talk about things like sex ed, birth control, pregnancy, bodily autonomy, and of course, kids of all ages. But you don't have to be a parent to listen. If you like surprising, funny, poignant stories about human relationships, and, you know, periods, The Longest Shortest Time is for you.
Starting point is 00:27:05 Find us in any podcast app or at longestshortesttime.com. Have you ever had an interaction with someone or a group of people, and it just felt tense? Nothing obvious was said. No one was trying to start an argument, and yet something in the air made the conversation feel uncomfortable, guarded, even a little stressful. We usually assume that tension is personal or political or that someone did something wrong. But what if that feeling is actually a psychological response,
Starting point is 00:27:43 something that happens automatically when people perceive identity differences between each other? My guest calls this feeling churn, and he says it shows up in all kinds of everyday situations, at work, in the classroom, even in casual interactions, often without anyone realizing what's happening. Claude Steele is a social psychologist at Stanford University and one of the leading researchers on how identity and perception shape human behavior.
Starting point is 00:28:13 He's author of a book called Churn: The Tension That Divides Us and How to Overcome It. Hi, Claude, welcome to something you should know. Hi, Mike. That's a great pleasure to be here. So I must admit I'd never heard the term churn before. So explain a little more what churn is and where it shows up. Sure. Churn is my term for the tension that we can all feel in diverse settings, classrooms, workplaces, boardrooms, etc., athletic teams, when they're diverse.
Starting point is 00:28:46 And it's a tension over, you know, what to say and what not to say and how to behave, and generally how our particular identity will affect our experience in that setting, or maybe even how fairly we will be treated in that setting. So that's what I mean by churn. It's that tension. And I offer a new understanding of this tension, one that assumes it has less to do with prejudice and bias, which I think is the more typical way of thinking about this tension,
Starting point is 00:29:14 and more to do with just a simple effect that our identities can have on our ability to trust each other. So as you were just talking now and describing what churn is, I remember when I was in college, I took a class in, it was Middle Eastern history. And when I walked into the classroom the first time and subsequent times, I was the only non-Middle Eastern person in that room. And I got this weird feeling. It was nothing obvious. It just felt like I don't really belong here.
Starting point is 00:29:57 And that feeling, that's churn, right? That's churn. When you have that kind of feeling, that's what that term is referring to. Well, and I'm glad you said that there's not necessarily any prejudice involved. Because I didn't feel any prejudice. I just felt different. Like I didn't belong as part of their group, and they were the majority of the group in the room.
Starting point is 00:30:23 to that. It was just a feeling of being different. Exactly. I mean, I think we're so used to thinking about the term diversity in the context of prejudice. We have this sort of assumption that if we could eliminate prejudice, what would the problem be? But interestingly, churn is something different than that. It's something that happens to both the prejudiced and unprejudiced alike. It's just the tension you just described: I'm not one of them, and they're not one of me, and how's that going to work
Starting point is 00:30:55 out here in this situation. And as a, you know, a society that is very diverse and probably getting more diverse, this can be a significant factor in the important settings of our lives. I don't, I don't mean, you know, sitting on a subway or something or on a bus, but, you know, in a classroom or in a workplace, a doctor's office. I think in those real-life important situations where, you know, we're pursuing our goals and the like, this churn can be a factor that can sometimes, you know, make us want to avoid situations and avoid conversations. And it puts a tension between us.
Starting point is 00:31:37 Well, it seems to me that in a lot of cases, this problem will fix itself if you are with the same people for a while. I have an example of, like, extreme churn that happened to me when I was young. When I was 12 years old, my family moved from the United States to the U.K. And we lived in a small town in the middle of England, Leamington Spa, and went to the local school there. And I walked into that school literally not knowing a soul. It was the scariest thing I've ever done.
Starting point is 00:32:12 And I still think about it. But it didn't take very long before I felt accepted as part of the group, and the churn disappeared for the most part. I mean, I still always felt a little different because, you know, their accent is different than my accent. But mostly, I just felt part of the group eventually. Yes.
Starting point is 00:32:37 If you have that opportunity in a setting, you know, a classroom, I teach a class over the course of a quarter, you know, what is that, about a quarter of a year, and the students there, you know, pretty soon, as you describe, get comfortable with each other because they've all said things now and they've all learned that in this particular class, we're not going to go into this hyper-judging-of-each-other mode. We're going to trust that people have good intentions
Starting point is 00:33:08 and people are going to try to see the best in the situation. So as that atmosphere takes over, churn lowers and gets manageable. And then the differences between us become interesting and, you know, sources of enrichment. Lots of good things happen as this apprehension recedes, and we begin to see the opportunity in diversity. You know what I find interesting about this is the churn that you're talking about. I wonder if in a lot of cases it's just in your head, and it isn't real.
Starting point is 00:33:44 And here's an example. If you're someone who's never been to the gym, you're out of shape, but you've decided to start going. And you go to a gym that's full of beautiful, in-shape people, and you're going to think, I'm very different. They're judging me. And my sense is, as somebody who goes to a gym pretty regularly, people aren't thinking that at all. They're not judging you because you're out of shape. They're probably more into what they look like than what you look like in the first place.
Starting point is 00:34:17 And if they think anything about you, they're probably thinking good for you. I mean, that's great. I'm glad you're here. Welcome. I don't think people judge the way we think they judge us. As I said, it's in your head. Yeah. That's a good example of churn, I think.
Starting point is 00:34:37 You know, it could be there a little. There could be judgments that are being made by other people, but they're really not important. And probably more fundamentally, the people that may make a passing judgment like that really are, as you say, impressed with your effort to deal with the situation. They're happy you're there. They want to be supportive. I think you've got a good example of what I mean. But in those cases, in those higher-stakes cases, like at work,
Starting point is 00:35:08 if you're feeling like you don't belong there, that weird feeling that you get inside that you're being judged and maybe ostracized, what are you supposed to do about it? Because people have their prejudices and they're going to make judgments about you and treat you differently. That's them. Nothing you can do about them. So what are you supposed to do?
Starting point is 00:35:31 What I propose is a relatively new suggestion for what to do in that situation. And that suggestion is to focus on building trust in the situation through your own behavior. I'm trying to really get elemental here. I'm not advocating that you learn about every other culture and all of its details. I'm not advocating that you learn how to get rid of all your prejudices and biases and so on. What I'm advocating is that you focus on being a trustworthy person in that situation. Be responsive to people, listen to them, listen to them again, try to
Starting point is 00:36:13 help them function well in the setting, to support their functioning in the setting, have a bit of a service orientation, if I might put it that way. That combination of things, and a focus on that kind of thing, builds the trust that lowers the churn. It is a lot simpler than maybe we've made it out to be. Can you give me an example of that? Yes. I have a story about Miles Davis and Gil Evans, who are two of the great jazz artists of the 20th century. Miles Davis is a cool, hardened African-American trumpet player, probably the best of that century, if you want my opinion.
Starting point is 00:36:59 Gil Evans is from rural Canada, a band leader, white, tall, skinny guy. He's kind of square. He's eating radishes out of a brown paper bag in these sophisticated jazz clubs in New York City. But they really became absolute best friends. And that friendship is rooted in the concrete connection that they're both very open to their shared humanity, even though they're very different kinds of people and come from very different backgrounds with different orientations and maybe even have different
Starting point is 00:37:36 interests, maybe even conflicting interests. Miles Davis is very concerned, as a sort of priority, about the employment of African-American jazz musicians at the time. Gil Evans is not African-American. But they jump over those differences because they're not trying to prove to each other that they're not prejudiced, and they're not distracted by those things. They really want to help each other in their goals
Starting point is 00:38:05 and in their careers. And that attitude of listening closely to each other and trying to be responsive to each other's needs establishes a profoundly close relationship between the two. But I also think a lot of us Americans have relationships that go across identity divides that one might think of as prohibitive, but that we learn to just connect very elementally to each other, to listen to each other. That formula, focusing on that kind of behavior, returning the emails, being timely, those things just wipe aside these identity
Starting point is 00:38:50 differences and the worries that make up churn. Those elemental behaviors connect us to each other, like: I like that guy. That guy hears who I am. He hears what I'm concerned about. And look, he's trying to help me deal with it. That is more powerful, I think, than we have
Starting point is 00:39:28 recognized. We have seen this challenge of the diverse society that we have, this challenge of bridging differences, as more complicated. Somehow we have to get past, you know, prejudices and biases and misunderstandings and, you know, mistrust, and that complicates things and gets us down very difficult paths. I'm trying to reorient or refocus us on simpler things. So what would be some examples of things I could do to do that? Well, I do think listening, really trying to see what the other person is concerned about. That itself engenders trust in the other person. That person begins to trust you when you show that kind of interest in them. So if I'm a teacher with a student, if I'm a doctor with a patient, if I'm a lawyer, in all these circumstances, I'm proposing this focus on simple things. First, really, really listening, asking questions. I can give you an example
Starting point is 00:40:18 of an experiment that we did some years ago that illustrates what I'm talking about here. Sure, yeah, I'd love to hear it. We were interested in what would enable people across a racial divide to have an honest conversation about difficult topics. So we had white male Stanford students come into the lab one at a time, and they were told they were going to have a conversation with two other students. On the table in front of them are photographs of the two students they're going to talk to. And for half of these participants, the two photographs were of two white guys, and for the other half, they were of two black guys. And then they find out that they're going to talk either about something easy to talk to anybody about, love and relationships in college,
Starting point is 00:41:04 or they're going to talk about something kind of challenging, racial profiling. So that's the setup. They're white guys. They're going to talk to either two black guys or two other white guys about either love and relationships or racial profiling. Then the experimenter says, I'm going to go down the hall and get your conversation partners. And while I'm gone, would you arrange these three chairs for the conversation? Would you just pull them together for the conversation? And that's kind of what we're interested in: depending on the condition of the experiment that they're in, how do they arrange those chairs? And you wouldn't be surprised to learn that when they're going to talk to two white guys about either topic, or two black guys about love and relationships, they put the three chairs very close together.
Starting point is 00:41:51 But when they're going to talk to two black guys about racial profiling, there's a tension in the room. There's churn going on. I don't want to say something here that would get me seen as racist or something. So they distance themselves in that condition. So how do you overcome that? That is really the central question of this work. And we tried a number of things. We tried some of the wisdoms that come out of diversity training.
Starting point is 00:42:23 That didn't really work very well. Sometimes it even backfires. But eventually we found something very simple that worked. And that's kind of what launched our thinking in this direction. What worked? And by worked, I mean, what enabled those white participants to move their chairs close together with two black conversation partners talking about racial profiling? What frame of mind reduced their churn enough to do that?
Starting point is 00:42:52 And what it was, was we simply said, look, you know, these are difficult conversations. Nobody really knows how to have them be perfectly smooth. It's part of our society, our history. So look, just relax and view this as an opportunity to learn about somebody else's experience. And when you're in doubt, don't try to prove that you're not biased or prejudiced. Don't do that at all. That could be a red flag. Just relax and ask questions.
Starting point is 00:43:24 When in doubt, ask questions. And with that instruction, with that mindset, so to speak, they moved in close; they put their chairs close together to have a conversation with two black guys about racial profiling. That says a number of things. They really did want to have this conversation. They were interested. They just didn't know how. And the churn that they felt in this situation was what, you know, made them initially
Starting point is 00:43:51 just want to avoid the conversation, to not really have it. If they didn't have to, they would have left. But with a simple shift of mind, one that said, look, just relax, listen, ask questions, use it as a chance to just learn about somebody else in a situation like that, that reduced the churn, lowered it, enabled them to move in close for these conversations. Well, it's an interesting topic in the sense that, you know, you've really simplified it. You've made this a lot easier. People get all twisted up in knots about this.
Starting point is 00:44:30 And maybe we don't have to. You know, you're putting my aspiration in good words there. I mean, I think that is the aspiration. You know, it's a lot easier than we think. Well, this is one of those things that I think everyone has experienced and thought about, like, you know, what should I say? Did I say the right thing? I hope I didn't offend anybody.
Starting point is 00:44:55 And nobody ever really talks about it. So it's good to get a perspective on it from someone who's really studied this. I've been talking with Claude Steele. He is a social psychologist at Stanford University and author of the book, Churn, The Tension That Divides Us and How to Overcome It. And there's a link to that book in the show notes. Claude, I appreciate you being here.
Starting point is 00:45:15 Thank you very much. Okay, thank you. Since people started using personal computers, there's been this debate: should you shut the computer off completely for the night or when you're away for long periods of time, or leave it on and maybe just put it in sleep mode? But with modern computers today, there's really not much debate. Sleep mode is designed to be safe and efficient.
Starting point is 00:45:44 Your computer uses very little power and can wake up instantly. So for short breaks, sleep mode is usually the smarter move. But here's what most people don't realize. If you never shut down your computer or never restart it, performance can slowly degrade over time. Updates don't fully install. Memory doesn't fully reset. And small glitches can build up.
Starting point is 00:46:10 That's why tech experts now recommend a simple rule. Sleep mode when you're stepping away. Restart or shut down your computer at least every few days. So it's not one or the other. It's knowing when to do which. And that is something you should know. You know, our most consistent source of new listeners is when people like you tell somebody or share an episode and talk up this podcast and get those other people to listen and then they become listeners. It is, well, it's word of mouth.
Starting point is 00:46:43 It's very effective, and we'd appreciate it if you'd help spread the word. I'm Mike Carruthers. Thanks for listening today to Something You Should Know. I know you like interesting and thought-provoking conversations and ideas because you listen to Something You Should Know. So let me recommend another podcast I know you will enjoy. It's the Jordan Harbinger Show. Jordan has a real talent for getting his guests to share stories
Starting point is 00:47:10 and offer thought-provoking insights. Over the years, I've sent a lot of people to listen, and I get feedback from people who are so glad I introduced them to the Jordan Harbinger Show. Recently, he discussed Scientology and the children who are raised in that organization. It's a fascinating conversation. And he talked with Dr. Rhonda Patrick about how to protect your mind and body from the modern world. And it's tougher than you think. I've gotten to know Jordan pretty well.
Starting point is 00:47:39 We talk frequently, and I tell you, he is a very smart, insightful guy who does a hell of a podcast. Check out the Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you listen to podcasts. Hey, it's Hillary Frank from The Longest Shortest Time, an award-winning podcast about parenthood and reproductive health. There is so much going on right now in the world of reproductive health, and we're covering it all. Birth control, pregnancy, gender, bodily autonomy, menopause, consent, sperm, so many stories about sperm, and of course the joys and absurdities of raising kids of all ages. If you're new to the show, check out an episode called The Staircase. It's a personal story of mine about trying to get my kid's school to teach sex ed.
Starting point is 00:48:28 Spoiler, I get it to happen, but not at all in the way that I wanted. We also talk to plenty of non-parents, so you don't have to be a parent to listen. If you like surprising, funny, poignant stories about human relationships and, you know, periods, The Longest Shortest Time is for you. Find us in any podcast app or at longestshortesttime.com.
