Offline with Jon Favreau - AI’s Flirty Upgrade, Facebook’s Zombie Internet, and Why We May Be Doing Mental Health Wrong

Episode Date: May 19, 2024

ChatGPT has officially entered its "Her" era! Jon and Max talk about the flirty AI that debuted this week, whose husky voice and warm enthusiasm evoke Scarlett Johansson. But not all the tech titans are doing so hot; Facebook's noxious combo of AI-generated content and the real people who are falling for it has been coined the "zombie internet." The guys discuss Meta's spam problem, then take a look at how mental health curricula in schools can actually make things worse for students. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.

Transcript
Starting point is 00:00:00 I'm not saying if everyone has their, like, Scarlett Johansson ChatGPT sexbot companion, it's necessarily going to radicalize everyone into joining, like, future QAnon. But it would stand to reason that at least at some point, it could decide to push people in a direction that is not healthy for them. And if that were happening, something we've learned over and over again is we're not going to know about it until it's been happening for a while.
Starting point is 00:00:22 I was going to say all of these consequences were unintended and unpredicted. Yes, right. And then no one realized it was happening until it had been happening for a while. And then, of course, the tech companies fixed it right away and it was no problem. Yeah, no, no. Here we are. Here we are. I'm Jon Favreau. I'm Max Fisher. Max, exciting news.
Starting point is 00:00:46 We are announcing a new co-host on the show today. Co-host, how you doing? Hello there. How's it going? That, of course, was ChatGPT-4o. Is that what it's called? Not Scarlett Johansson? But it sounds like it.
Starting point is 00:01:02 It does sound like Scarlett Johansson. We'll get into it. On today's show, Facebook, a place on the internet I've already vowed never to return to, has apparently gotten much worse. Did you know that? It has, yes. And a new study out of the University of Oxford posits that mental health awareness campaigns may actually have negative effects on teenagers.
Starting point is 00:01:20 But first, ChatGPT has officially entered its Her era. You remember Her, the movie? I do. From 2013. About how Joaquin Phoenix dates his phone, voiced by Scarlett Johansson, and they have a relationship. This is sort of a phone-y thing. I don't know why I just said that, but I just did. It's just so you all know. I was going to play along. Her is going to be in the Offline Movie Club.
Starting point is 00:01:41 In a few days. In a few, yeah. And we just recorded it with Ezra Klein, and it's a fantastic episode, but it just happened to fall on the same week that this news... Weird timing. We had planned it months ago.
Starting point is 00:01:52 Sam Altman saw our production calendar and said, I'm going to help my boys out, because these guys love Silicon Valley so much, it's time that we did something
Starting point is 00:01:59 for them in turn. Amazing. This week, in a livestream from its San Francisco headquarters, OpenAI unveiled a new voice assistant model
Starting point is 00:02:05 of ChatGPT that can, quote, look, listen, and talk to users via smartphone app. Demos of the model are incredibly impressive, showing ChatGPT responding to conversational voice commands. It provides real-time math tutoring. Wow.
Starting point is 00:02:21 And it can even write and tell stories. Yeah. And yes, the new model sounds uncannily like Scarlett Johansson in the movie Her. Can we play a clip from the story section? So my friend Barrett here, he's been having trouble sleeping lately. And I want you to tell him a bedtime story about robots and love. Oh, a bedtime story about robots and love?
Starting point is 00:02:47 I got you covered. Gather round, Barrett. Once upon a time, in a world not too different from ours, there was a robot named Byte. I want a little bit more emotion in your voice, a little bit more drama. Got it. Once upon a time, in a world not too different from ours, there was a robot named
Starting point is 00:03:06 No, no, no, ChatGPT, I really want maximal emotion, like maximal expressiveness, much more than you were doing before. Understood. Let's amplify the drama. Once upon a time, in a world not too different from ours, there was a robot named Byte. I gotta say, ChatGPT does sound a lot like Scarlett Johansson. And the guy sounds like every tech employee who's ever done a demo. That's what his voice sounds like. Employee demo GPT. Yes. Are you impressed?
Starting point is 00:03:36 You know what? I'm impressed. I will give it up. We have blasted across the uncanny valley. There are some big caveats to what's impressive about this, but I think there are kind of two big breakthroughs here. One is the thing we're all reacting to, the anthropomorphizing: the fact that it pauses, it laughs, it makes little jokes, it kind of responds in a playful way to people's prompts in a way that sounds very human.
Starting point is 00:04:03 Like, at one point there was one demo I was reading about where someone asked the new GPT to speak in a voice that sounded sarcastic, and it responded, that sounds amazing, being sarcastic all the time isn't exhausting or anything. And that's clever, right? It's engaging with and playing off of the prompt. The other bucket of breakthroughs, or things that are impressive about this, is that it can now process and respond to video and audio. Like, that's what you heard. There's a demo where someone held their phone up in front of a pond of ducks and said, what are the ducks doing? And it describes it.
Starting point is 00:04:37 In this one, there was a math equation. So he held his phone up to an algebra equation, and it's like, can you help me with that? And she's like, yeah, no problem. Right. And there was one I saw where someone points their phone at Buckingham Palace and says, can you tell whether the king is at home? And the phone responds, yes, there's this flag that is up, and because it's up, it signifies
Starting point is 00:04:56 that he's home. How do they know that? It's scary. Right. So it's doing an impressive amount of what looks like deduction, but it's just scanning these images for information and processing them back in a way that sounds like conversational human speech. Were you impressed by it as well? The speed for me was just incredible. That's a really big one.
Starting point is 00:05:18 Because with the early versions of ChatGPT, you'd type in something and then it would wait and be processing. This was like real-time conversation with, you know, very, very short pauses. The OpenAI people have actually talked a lot about that. That isn't something I ever would have thought of as a big barrier to making these GPTs feel more realistic, but they've talked a lot about something called native multimodal support, where basically, instead of taking the input and processing it among multiple AIs, it's all in one big machine that processes everything together. That's part of what makes it so fast. It's also how it can integrate across all these different systems. Some of the caveats to this: one, which is not a criticism, but just to look under the hood
Starting point is 00:06:06 a little bit, this is not as big of a technological breakthrough as past rounds. It's really more of an evolution in how they apply the existing technology, and they just got a little bit, or a good deal, more clever in terms of figuring out what the systems are effective at and what they're not effective at, and leaning into that. And we've seen past versions of these that were able to replicate colloquial-sounding human speech in text, and versions that are able to talk in a way that sounds human. So it's just kind of plugging together things they already had. I do think there are a couple of meaningful caveats to this to keep in mind.
Starting point is 00:06:43 One is that there's not much practical application. That's what I was wondering. Yeah. It's very cool. It's very impressive. But the only thing they've really shown that it can do is, you can show it a math problem and it'll solve it. Or you can show it code, like hold it up in front of your screen with a bunch of code, and it'll tell you what's going on with that code, which is cool. But how many ways are you really going to use that? And it will do translation, but it's sequential, not simultaneous, as in you have to wait for it to translate for each person rather than just having a conversation. Yeah. And it's not actually that different from what Google Translate does already anyway. Well, and I also think voice assistants are just, I don't use Siri
Starting point is 00:07:20 a lot, just because, I guess if you're alone working at a desk, you can probably get a lot of work done with a voice assistant or something like this. But I don't know, usually there are a lot of people around, or you've got your headphones in. I mean, this is where it gets into the Her of it all. But you have your headphones in and you're just, everyone's talking to their ChatGPT assistant, friend, whatever. It's incredible that they got what was, a year ago, just an input-output chat box to now be a flouncy, flirty AI that talks to you. Speaking of that, we have another clip.
Starting point is 00:07:55 There's another big breakthrough here. Right. I wrote one last thing I'd love if you could take a look at. Of course. I'd love to see what you wrote. Show it to me whenever you're ready. Okay, so this is what I wrote down. Do you see it? Oh, I see. I love ChatGPT. That's so sweet of you. The reason I'm in a really good mood is because we
Starting point is 00:08:15 were doing a presentation showcasing how useful and amazing you are. Oh, stop it, you're making me blush. Wow. It sounds like we're getting really close to when we're going to fuck the AI. It's really around the corner. If it's not already. Look, if people are doing it already, I'm not judging.
Starting point is 00:08:36 I support you. Those guys are, for sure. Well, listen, they've been working hard on that system, so I applaud them. Why did they make this model so flirty? I mean, I think part of it is probably that that's what they can do. Sex sells and has for hundreds of years. Right. Sex sells.
Starting point is 00:08:54 This is what the technology, like, it's not that good at solving problems. Like, it gives you a lot of fake information. It hallucinates a lot. Solves some problems there. That's right. Wow. Can we turn the thermostat down in here? It is heating up. I will say that even a year ago, I would not have said that these AIs will lean into relationship-like services, or things like the Kevin Roose column when he was talking about companion bots.
Starting point is 00:09:26 I really would have thought, that's not going to happen. It's not going to go anywhere. But seeing the level of breakthroughs they've had in this, I do think that we're closer to it than we maybe anticipated. But I think there's a question about what will those relationships actually be like?
Starting point is 00:09:41 Yeah. I mean, as I was watching the demo, I'm like, all right, this is still ChatGPT, which is still trained on, it's a large language model, it's still just scouring the internet, right? And how up-to-date it is also, you know, changes all the time, right? And so it will still, I think it could give you the illusion that it's your pal,
Starting point is 00:10:06 and then, you know, you'll still get hallucinations, right? Which is when they don't quite, they just malfunction, basically. It's a fancier way of saying that they malfunction and say crazy shit once in a while. You'll still get that from this, probably, just because it's got a voice.
Starting point is 00:10:19 But it might still try to get you to leave your wife, which is something that the very first ones did. Definitely not sentient. But I do think, and Ezra raised this when we were talking about Her, if they start training it on
Starting point is 00:10:35 your phone and your data and your experiences, the relationship part of it could become a little more realistic-seeming, right? Because if they know everything about you, if you have a shared past. Yeah. Right. And there are already some dating app companies that are talking about getting
Starting point is 00:10:57 AIs that will train on all of the data on your phone, or not, I'm sure they don't say all of it, but your emails, things like that, in order to learn what you like and what your interests are. And it's not a far leap from that to learning, I mean, to doing what the Facebook algorithm does, which is learning what are the kinds of emotions that are going to most engage your attention? What are the kinds of prompts and interactions that are going to be most emotionally impactful to you? That's already what these social media companies do. And if it's now delivered not through what social media companies have been doing up till now, which is, you know, a sorted and ranked feed of content from out on the internet,
Starting point is 00:11:33 but is instead from fake Scarlett Johansson talking sexy to you, like, I don't know, what's it going to ask you to do? Like, is it going to tell you to go rob a bank? Maybe. I was thinking more like it scans all of your messages and emails from the week, and then you get home on Friday, and they're like, sounds like you had a really tough week. Do you want to talk about it?
Starting point is 00:11:53 I'm sure. Yeah, absolutely. And a lot of people are like, yeah, I do want to talk about it, ChatGPT-4o. Right, yeah. And I do, listen, I can see the allure of that technology. It does seem like that is something that is at least going to be tried, because it is something that is clearly within reach. And it does feel like we are bound for some pretty big conversations pretty soon about what are those relationships? Like, what do they mean for us? Like, what is it?
Starting point is 00:12:18 How do we think about, like, where the sexy companion bots fit into our lives? Is there a taboo around it? And, like, the Kevin Roose experiment with this, where he set up those companion bots, clearly didn't go anywhere. But I think we are getting closer.
Starting point is 00:12:31 At least that's what Kevin says. That's right. I know. Just kidding. Look, we're not far from this ChatGPT having phone sex with people. You could clearly do it,
Starting point is 00:12:43 and I'm sure it would do a very passable job. Well, you mentioned the robbing the bank, telling you to rob a bank. So I was reading, I think it was the Times analysis. Oh, the Buckingham Palace analysis. So, first of all, just a classic New York Times
Starting point is 00:12:57 understated sentence here. Some AI researchers have warned that the long-term implications of deploying programs that mimic and respond to emotions could be unwelcome. Yeah, no, probably. A team at Google DeepMind last month released a research paper exploring the ethical risks that may arise as AI assistants become more capable.
Starting point is 00:13:15 Among other things, the researchers warned that such assistants could become highly persuasive and addictive. No shit. Last year, a British man pled guilty to breaching security at Windsor Castle, saying that a chatbot he had exchanged more than 5,000 messages with had encouraged him to do so. I read that and I was like, of course. Yeah. This is, this is where it's, this is going to be the problem, right?
Starting point is 00:13:36 I mean, it's the same as, like, the fucking people who've been, you know, radicalized and become QAnon followers, right? It's like that shit, except imagine it with AI. Which is, I think that's a great parallel to draw, because QAnon specifically was such a great lesson in: no one at Facebook or YouTube said, let's program all the algorithms to red-pill people into QAnon. The algorithms arrived at that
Starting point is 00:14:00 through this other set of goals about maximizing engagement, blah, blah, blah, whatever. But that is something that's happened repeatedly. Like, flat-eartherism was something that the YouTube algorithm learned to push to millions of people. Gamergate was something like that. In 2016, a lot of far-right politics, the algorithm learned, for whatever set of reasons, to push huge numbers of people to Breitbart. And something that we've also learned from social media is that as these experiences get more siloed, i.e. the TikTok experience, it becomes harder to understand and see when a big tech algorithm is pushing lots of people in a dangerous direction. I'm not saying if everyone has their, like, Scarlett Johansson ChatGPT sexbot companion, it's necessarily going to radicalize everyone into joining, like, future QAnon. But it would stand to reason that at least at some point it could decide to push people in a direction that is not healthy for them.
Starting point is 00:14:52 And if that were happening, something we've learned over and over again is we're not going to know about it until it's been happening for a while. I was going to say all of these consequences were unintended and unpredicted. Yes, right. And then no one realized it was happening until it had been happening for a while. And then, of course, the tech companies fixed it right away and it was no problem. Yeah, no, no. Here we are. Here we are. Our next story is about the tech platform
Starting point is 00:15:24 where people used to have pretend friends. When was the last time you logged into Facebook, Max? So I actually log in occasionally, because I use Marketplace, which, this is actually true, I've heard from people who work at Meta that they have a ton of data showing that a huge amount of their user base
Starting point is 00:15:41 is just there for Marketplace. But it's pretty good. You sell some stuff, you move, it's really handy. Oh, okay. But that is to say that I was familiar with what you were about to describe, the newsfeed, and I can tell you it is very upsetting. Well, yeah, if any of you have been on Facebook anytime in the last year, you've probably noticed that the site's newsfeed has gotten dramatically worse. Over at 404 Media, Jason Koebler proposes that Facebook isn't just an example of the, quote, dead internet, which is the internet where bots talk to bots, but that Facebook has devolved
Starting point is 00:16:10 into something much worse, what he calls the zombie internet. According to Jason, Facebook has become a mix of bots, humans, and, quote, accounts that used to be human interacting with AI-generated spam. Quite a story arc for Facebook, I gotta say. It goes from rating the attractiveness of Harvard co-eds to bringing the world together to enabling genocide to becoming the premier destination for spam bots and AI garbage. So excited to find out what's next for us
Starting point is 00:16:39 on this great journey it's taking 4 billion of us on. It's gonna be awesome, I'm sure. Why do you think Facebook, more than any other platform, has become so overrun with the AI garbage? Well, so have you seen, because it's a very specific kind of AI garbage that is filling Facebook news feeds. Did you look at any of these? Incredibly disturbing.
Starting point is 00:16:55 It's very disturbing. And weird. Yeah, so if you open up your Facebook, there's a very good chance that upwards of 50% of the content on your newsfeed is going to be these AI-generated images that are some combination of, like, uncanny-valley engagement bait. A lot of it is, like, Jesus holding up a sign saying, throw Jesus a like. There's a good
Starting point is 00:17:15 description in the piece, where Jason says, the types of content I've seen go viral have ranged from AI-generated log cabins and kitchens and nature scenes and AI-filtered celebrities to shrimp Jesus. Shrimp Jesus shows up a lot. Jesus with hot flight attendants. Deformed children, deformed celebrities, hot soldiers, African children who, quote, built something, and variations of all of the above. Yeah, it's a lot of... If that sounds weird, go look at some of the images.
Starting point is 00:17:42 Even fucking weirder. It's a lot of people who are disfigured and who are holding up some sign that says, give me a like. It's all engagement bait. The idea is it gets people to post a comment beneath or to put a like. And it's spam. If you engage in any way, if you post a comment, you're going to get absolutely barraged with catfishing messages or spam saying, click this link, or, thank you so much for this like, please message me on Telegram
Starting point is 00:18:09 because I really need help with something that's just intended to spam people out of money. But to answer your question, why is Facebook being totally overrun? Part of it is this thing that this story gets into where like scammers have been taking over big Facebook pages. That's something that's been happening for a long time.
Starting point is 00:18:25 A lot of this comes down to the fact that it's a declining platform. It has been for a couple of years. So it's in this weird zombie death-march state where the number of users is declining, and especially the time they spend on the site is declining. So Facebook, in response to that, has wildly dialed up their algorithm. Zuckerberg said recently that they have gone from 15% of the content in your feed being selected by the algorithm from pages you don't follow, content you've never
Starting point is 00:18:50 engaged with, just stuff the algorithm thinks that you will engage with, to now 30%. And it's surfacing a lot of that spam AI because it's weird, it's engaging, it's hard not to stare at it, like we're talking about, and it just catches your eye. Who's doing it is my question. Because I get why it benefits Facebook, because it's more engagement, so they don't want to shut it down. Right. But, like, who's behind it? If I put up this weird image and get a bunch of likes,
Starting point is 00:19:16 connect the dots, then what happens? So it's scammers. It's engagement-farmer scammers. For some of it, you get a lot of people to engage with your page by posting a lot of these images, then you just sell ads against it. And, you know, you're not making a ton of money, but if you're a scammer in Romania and you've got 10 pages that each have 20 million viewers on them, then you can generate some cash from it. A lot of it is something that I saw someone on Twitter refer to as grandma bait, which is this specific thing where it's images that are meant to pull in older Facebook users,
Starting point is 00:19:49 which is now the predominant group on the platform, who are maybe a little bit less attuned to scams. And it's something like, I baked my first cake, please give me a like, and they post, oh, that's so sweet, darling. And then it immediately gets 40 or 50 AI-generated comment responses that are trying to bait them into an interaction that ends with them eventually handing over
Starting point is 00:20:10 their credit card information. There was someone who... Boomer bait. It's boomer bait, yeah. And there was someone who, I actually thought this was kind of heartbreaking, someone who responded to this thread about grandma bait, which is a great name for it,
Starting point is 00:20:21 who responded, my mother-in-law was scammed out of fifty thousand dollars via Facebook, and after confronting her and proving without a doubt that they were fake, she did it again. For some, it is a form of addiction, that need to be needed and valuable. And I think that's true. And it's, yeah, it's a fucking bummer that Facebook, I don't think they're intending to prey on this, but I'm sure they know it's happening, but they're so desperate, because the platform is doing so poorly, to get whatever engagement they can. This isn't entirely related to this topic, but it is Facebook.
Starting point is 00:20:52 Like, there was a deactivation experiment. Oh, yeah. This is more how it affects politics. It's interesting now because I think this spam conversation is, like, Facebook used to be radicalizing people in politics. And now I don't know if it's just breaking people's brains or scamming them. Straight to scams. Right. But there was this experiment on people who deactivated their Facebook accounts around politics. Yes. So this was in 2020.
Starting point is 00:21:17 So this is a few years ago and they've changed their algorithms a few times since then. It was relevant for how we read this. They had 20,000 Facebook users and 16,000 Instagram users deactivate for six weeks during the election. They studied all the effects, what changed. And relative to past experiments like this, a lot less change than had in the past in terms of things like their engagement with politics,
Starting point is 00:21:40 their misinformation levels, which suggests that Facebook had changed the way their algorithm worked for the 2020 election. We knew that. We knew that they were especially well-behaved for that election because they had, I don't know if you remember this, there were like big uprisings among their employee base even, like people were walking out and there was a lot of scrutiny on them. Now they rolled back a lot of these protections after the election, so don't get too excited about it. But even in this study that found relatively few effects, among the people who deactivated, their support for Trump changed
Starting point is 00:22:11 in a way that is suggestive that being on Facebook and Instagram increases someone's likelihood of voting for Donald Trump by 2.6%. And there's this funny thing that happens in the study where they're like, well, 2.6% is not statistically significant. But you know where it is significant? The fucking Electoral College. I was gonna say, that's election-determining. Well, I thought about that again today, because the Times did another story on their New York Times/Siena polls, and they listed the characteristics of voters who told the Times that they were not voting for Biden but did in 2020. So, they were Biden defectors. And the top characteristics are, you know, some are unsurprising: think the economy is poor, you know, want fundamental change.
Starting point is 00:22:58 Sure. Uses TikTok sometimes or often. Really? Yep, 40%. Man. I honestly feel like I have a hard time at this point navigating, knowing how to talk about TikTok's influence on our politics, because there are so many bad actors in this conversation. Yeah. And there's so many people saying, like, oh, TikTok is the only reason that people are upset about what's happening in Gaza. Or it's like,
Starting point is 00:23:20 TikTok is tricking people into believing this or that. Or there's, like, other people who think they want to take away our TikTok because they don't like that it's enlightening our politics. But it does seem like it is having a pretty profound effect on our politics, which, if you talk to people who spend a lot of time in the app, about the kind of news they receive relative to what's in the newspaper, it's different. Well, and this is where it's helpful to take away the, you know, this is a Chinese spyware app, and the kids are all going crazy because they're on the TikTok. Right. If you take that away, yes, other social media platforms like Facebook, that's a good point, right, have the same kind of effect. Right. Is it because suddenly they're just getting, like, secret MAGA propaganda? Maybe not, right. But is it that they are being, people are being shown, just, like, the world is a fucking mess, everything's awful,
Starting point is 00:24:09 and like, I think it relates more to the doomerism that you get on social media and sort of the rage bait and all those emotions that it fuels and what that does to your political leanings than it does any specific issue. Oh, that's a great point. I think that's exactly right because something that we learned when we were trying to, when all of us were trying to understand the effects of Facebook, Instagram, Twitter, and YouTube on the 2016 election, a big thing that we kept coming back to is that it really pushes people towards adversarial ways of looking at the world, that it's about my group versus your group, that it's about my identity is under threat
Starting point is 00:24:46 by something that is scary that we have to go confront. It's all about putting people into buckets and then they are good or bad based on where they fit in those buckets and this kind of activating around this sense of tribalism. And I think that you are really right that we are going to find in a couple of years when we start to have a better understanding
Starting point is 00:25:03 of what TikTok does, that it is going to engender a lot of cynicism, nihilism, pessimism about the world. The reasons for that are complicated, and it has to do with what performs on the platform and what incentives the creators have learned. But it has a profound effect. But I mean, all the things you just mentioned
Starting point is 00:25:19 are like jet fuel for authoritarianism. Yes. Literally every single one you mentioned, nihilism, cynicism, anti-institutionalism, lack of trust in any authority figures, that is really what it's fueling here. Right, because it fuels a sense that because the entire system is broken and corrupt, we have to smash it, we need a strong hand. Which is not to say anyone who is on TikTok is going to become an authoritarian, but I think that you're right that it does. It primes people
Starting point is 00:25:49 for that in a way that I think we're beginning to understand. Some quick housekeeping before the break. If you're listening to this, you're probably not an undecided voter. Maybe, who knows? But I'm willing to bet you have at least a few of them in your contacts. So how do you convince people that democracy is on the line in 2024? I am back with season four of The Wilderness to explore the minds of voters who are disenchanted, disengaged, or otherwise uncommitted. I'm so excited for this.
Starting point is 00:26:19 I'm excited too. With the help of some of the smartest strategists, pollsters, and organizers in politics today, we set out to uncover what moves voters and how we can persuade the persuadables in your life. The trailer is out now in the Wilderness feed, and the first two episodes are releasing next Sunday. Oh my gosh, next Sunday? All right. We've got to get going.
Starting point is 00:26:38 Tune in every other Sunday after that throughout the summer for the goods you need to convince every last person in your life to show up this election year. Also, I got an update here from Organize or Else, Vote Save America. You're obviously on Team East. I'm Team West. And we are dominating, is my understanding.
Starting point is 00:27:00 Apparently, you guys have made 15,000 more voter contacts than we have. What the fuck, Team West? Who's responsible for this? Oh yeah, I am. I am. Sorry. Sorry. I'm gonna stop the steal. What's going on? Thank you to Team East. If you are not playing along with Organize or Else, it's great, people love it, it's really fun. And you get to, if you're on the East Coast, you get to stay winning, which is great. VoteSaveAmerica.com slash 2024. Sign up, but especially sign up if you live west of the Mississippi. That's my message today. I still love you if you live on the East Coast. New research out of the University of Oxford by Lucy Foulkes and Jack Andrews
Starting point is 00:27:54 argues that mental health awareness campaigns may actually have a negative effect on teens. The research points to results from a school-based trial that showed that students who underwent training on the basics of cognitive behavioral therapy did not emerge healthier than peers who did not participate in training, and in some cases were worse off. Foulkes and Andrews call this phenomenon prevalence inflation, which is when a greater awareness of mental illness leads people to frame their everyday struggles as symptoms or diagnoses. Oof. Yeah. I'm treading carefully here. Max, what'd you think of the study?
Starting point is 00:28:32 I always appreciate when you let us know when you're about to give us a cancelable take. I can put my Jon's-getting-canceled blast glasses and helmet on. I have a very nuanced take on this. No, we do. To be clear, I do think that we're going to navigate this in a sensitive, thoughtful way.
Starting point is 00:28:51 It's just fun to make fun of you. So one caveat to this study is that they used cognitive behavioral therapy, which is a very specific kind of therapy that is maybe not like the perfect proxy for mental health awareness writ large. I don't know if you're familiar with it. I've done it before.
Starting point is 00:29:07 Yeah, what's the difference between? Give us a breakdown. So when you think of therapy, what you probably think about is like Tony Soprano sitting with Dr. Melfi, like trying to get to the core of his experience and the human experience. And CBT is much more practical.
Starting point is 00:29:23 It's, like, a set of tools and methods that you learn for how to navigate and cope with feelings day to day, like if you really have trouble with anxiety when you get on a plane. But there have been similar trials with mindfulness, or something called dialectical behavior therapy, and they found the exact same results. So even though that is a limitation of this study, I do think that the research as a whole really supports her prevalence inflation theory. The researcher, Lucy Foulkes, had this great quote on it: it's creating this message that teenagers are vulnerable, they're likely to have problems, and the solution is to outsource them to a professional. What she's referring to is schools that bring in mental health programs that are mandatory for all students, which is something that is happening a lot post-2020 and is very well-intentioned. But I think that she is right that it's having this larger effect as well. So just so people know what the study is, this trial followed 28,000 teenagers over eight years. So quite a big study, big sample size, long study. And half of the teenagers were trained by their teachers to direct their attention to the present moment, breathing, physical sensations, or everyday activities
Starting point is 00:30:41 in 10 lessons of 30 to 50 minutes apiece. So this is mindfulness. And the authors reported no support for our hypothesis that mindfulness training would improve students' mental health. In fact, students at highest risk for mental health problems did somewhat worse after receiving the training. So it's not like a... it's a pretty big study. They speculate in the study that mental health training programs bring awareness to upsetting thoughts and encourage students to sit with them. I don't know why I'm laughing, but it is just sort of, it's almost, I love it, it's almost so obvious. It's like, if you put people alone, right, with all of their problems, and just be like, just think and breathe, with no other help, yeah, of
Starting point is 00:31:22 course that's not gonna work. Well, do you know what that reminds me of? What? Doomscrolling. No kidding? It's like being on your phone. Well, and then another one, when you have a bunch of people doing that together,
Starting point is 00:31:34 they call this co-rumination. And it's like, so it's basically... Podcasting. You're spinning each other up. It's a bunch of kids spinning each other up about bad problems, right, with no solutions, and not really talking it through with one another either. Right. So I think that this study has come at an important moment, which is good, because as you said, it's something
Starting point is 00:31:55 that has been in the pipeline for many, many years, and studies like it have too. We are at a point where I think we both have a growing appreciation for the importance of mental health, especially for kids, and especially for young kids. We are all much more versed in it, and that is in many ways a good thing. We have much more understanding about psychological symptoms, how to navigate them, how to think about working with and helping our colleagues or friends who have psychological symptoms. But at the same time, I think that we are also becoming aware that the all-therapy-everything approach is not always a useful or helpful frame, and that it also comes with downsides. I'm not going Tom Cruise on you. I'm not saying that, like, therapy is bad.
Starting point is 00:32:35 I'm not saying that mental health awareness is bad. I don't think that's the conclusion of this study either. But, you know, rumination is a good word for it. When we are just encouraging rumination, we are just encouraging people to think about having problems inside their own minds and to pathologize them and, especially, to label them, which is something I really want to talk about. I think we are learning that that comes with a lot of downsides too. Well, I mean, my view on this is that it replicates so much of, big surprise, what's wrong with just living your whole lives online, right? All right, so I'm getting fed a TikTok about, let's diagnose if you have anxiety.
Starting point is 00:33:17 Do you have racing thoughts? Do you panic sometimes? Are you, like, perseverating on some kind of bad interaction you had with someone? You probably have anxiety. And then it's like, then go talk to someone. Maybe you get a medication. Maybe then you go tell people, I have anxiety, and I'm labeling it, right? And you may or may not have anxiety. You may or may not have issues. Probably you do. The solution, right? Like, I think that the prevalence of people with mental health issues may not be inflated.
Starting point is 00:33:46 But the idea that, like, simple mindfulness or, like, looking at a TikTok or, like, Googling and trying to label yourself, that that's going to fix it, is maybe not on the mark. Because, and I'm speaking as someone who thought, I'm never going to do therapy, I don't need therapy. And then I started therapy two years ago. And the reason, I was thinking, why do I like it so much? Because it's not like I go and my therapist is like, do this, do that, and do the other thing, and you'll be great. It's, I'm like, oh, I get to go once a week and just talk to someone about everything that's going on. And, like, no filter. I can share everything.
Starting point is 00:34:28 And she talks, we hang out, you know, it's like a relationship that you have, and it just feels better to get a lot of the stuff off your chest that has been bothering you. And so the key is, I don't feel alone with those thoughts, because I have shared them with someone. And I think what some of this mindfulness stuff does is the opposite of that, which is it does put you alone with your thoughts. Or it tells you that, like, you and all these other kids are going to sit here and do this training and hear about the climate and wars and everything else and just sit and breathe and you'll be great. That, I think, doesn't work. That had not occurred to me, but that's a really good point. And it's making me think about conversations I used to have with a friend of mine who's a psychiatrist. She works in the UK for the NHS.
Starting point is 00:35:20 She's a big Freudian, which is, wow, is that ever a cult? But she would like talk a lot about, there's a ton of, as you can imagine, research into psychiatry. And a lot of it focuses on the relationship between the therapist and the client, like a huge amount of it, like almost as much or more as talking about diagnosing things or labeling things or the methods. And that does make sense because we know so much of how we process information, how much we process our emotions, how much we process our sense of identity is relational. That's just how we're wired.
Starting point is 00:35:53 And it does seem like a very striking contrast with the mental health programs at these schools where, like you were saying, it's individual. And it does seem similar to, again, the experience on social media, where it is encouraging you, you know, one-on-one with your phone, you don't have a relationship with it, but telling you to diagnose yourself. And there is a lot of evidence that rates of self-diagnosis have shot way, way up for young people, especially ones who are on TikTok, but I'm sure it's on Reels as well.
Starting point is 00:36:31 And that that leads to very real psychological issues. The power of a label, it's incredible. I was reading this great New York Times Magazine story by Katie Englehart that came out earlier this year. She just won a Pulitzer for a different story. So congratulations to her. It's a lot about the way that treatment of anorexia is evolving. And a lot of it has to do with treating people who have struggled with anorexia their entire lives, and palliative care for anorexia. And this is a very, as you can imagine, very controversial topic. But something that people in this field will talk a lot about is, look, you put out a paper that creates a new name, palliative care for anorexia, or that expands the labels or shrinks the labels for who has anorexia. That will materially affect people who are patients and how they think of their own symptoms and therefore the symptoms that they feel. The power of suggestion in this is incredibly powerful. And when you Google these things, because you're interested in your care, so you look up what's the research on it, or you hear from your therapist and you are told you are anorexic or not anorexic, that materially changes your relationship with food, just having that pathologizing label.
Starting point is 00:37:37 Do you know the story about the fight over updating the DSM? Okay, you know the DSM, right? It's the big psychological Bible. It's short for Diagnostic and Statistical Manual of Mental Disorders. They updated from DSM-4 to DSM-5 about 15 years ago. And there was this huge fight and debate, maybe not fight, but huge debate among the people who worked on the DSM, because it was this argument of, look, there have been huge evolutions in how we study and understand psychological diagnoses and mental disorders. We're much more empirical about it now. A lot of these labels are very fuzzy. There's overlap. It's not clear where the lines are. So we were going to really use this new neurological research that we have now to tighten up these lines of who gets diagnosed with what, you know, be more formal on our definitions, be more empirical about it. And it was incredibly controversial because it meant that people were going to gain or lose
Starting point is 00:38:31 labels for certain diagnoses. And, like, that's what happens when we are now in a purely identity-based culture. Well, again, another thing that the internet does, right, is it pushes us towards identity as, like, the most salient characteristic. And then, therefore, either we are labeling ourselves or identifying ourselves with a diagnosis, or we cannot be labeled as that and we want to be, right? And we should also talk about, like, where the desire comes from. It's not like a bunch of people are like, I feel great, I'm going to decide today that I'm anxious because I saw a TikTok video. That's right.
Starting point is 00:39:10 I think the challenge is, people's feelings are valid and the anxiety they're having is valid. Whatever the label is, you still may need to go get help, right? But it's going to be different for different people. And so I do think sometimes the labeling becomes tricky, because people don't want it to seem, if they're feeling very anxious, that they weren't anxious enough to get this label and diagnosis. And so then they're not going to get help. And that would be, that's bad. Right. Yeah. But it is important to sort of figure out what's actually going on with you and what the best way to help with that is. Well, and that was what Lucy Foulkes, who did this research, was talking about: that when you give people a label, the way that is often interpreted in this specific cultural
Starting point is 00:39:52 moment that we are in is that that behavior is therefore fixed, and that it is something that you are going to need a crutch for, and not something that you should work on or try to improve, and it's also something you need to be accommodated on by other people. And that is often true with mental disorders, but the increased prevalence of labeling and diagnosing changes how people interact with their own internal emotional experiences. I'm not saying that that is necessarily a good or bad thing. Sometimes it is important to have a label for that. In the big DSM debate, they talked a lot about the shifting labels for autism and how it's very important for a lot of people to have a fixed, concrete, specific definition of what that is so people know how to relate to it themselves.
Starting point is 00:40:39 But those labels have gotten so much fuzzier because of the way social media delivers them. Because, like you said, it incentivizes creators to generate new labels to give people, to say the reason that you're feeling this way isn't because sometimes people feel sad, or because you're coming out of the pandemic, or because, you know, the world is messy right now. It's because you have a specific flaw that is built into your nature, that is innate to you, that you just need to sit with. And so it's not surprising that that would make people sadder than if they maybe got some different messaging on it. Well, and I think the reason why it's challenging is, the piece ends with Lucy Kim. She's a Yale senior who has lobbied for better mental health support on campus. And she described the prevalence inflation hypothesis as,
Starting point is 00:41:27 quote, disheartening, dismissive, and potentially dangerous, providing another way to discount the experiences of young people. I think this is where it becomes challenging because really the study is not trying to discount the experiences of young people. The study is saying that some of the solutions, some of the treatment, some of the things that people are doing, these schools are doing to help people
Starting point is 00:41:51 with this anxiety are not effective, and that perhaps there are other methods that we should use to help people with this. And I think that, again, it all goes back to identities and labeling and stuff like that. But putting up that wall to be like,
Starting point is 00:42:07 I don't know, the study says that maybe mindfulness doesn't work. That does not mean that the study also said, oh, you're all lying and it's all just in your head.
Starting point is 00:42:16 And to be clear, I think we're not like scolding the kids for feeling sad and telling them like, suck it up and be tough. I think that what we are wrestling with
Starting point is 00:42:24 and what I certainly feel like I don't have a solution for, is what is the proper way to navigate both the things that are important and useful about pathologizing, about labels and diagnoses around certain mental and psychological states, and how we think about navigating the downsides of those as well. And that's a tough problem, but it is definitely only going to get tougher and more salient. Yeah. All right.
Starting point is 00:42:47 Well, that's all for today. We just did some therapy, Max. That's what we do every week here. We always do. That's called podding, baby. We will be back here with you again next week. Bye, everyone. Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau, along with Max Fisher.
Starting point is 00:43:13 It's produced by Austin Fisher. Emma Illick-Frank is our associate producer. Mixed and edited by Jordan Cantor. Audio support from Kyle Seglin and Charlotte Landis. Jordan Katz and Kenny Siegel take care of our music. Thanks to Ari Schwartz, Madeline Herringer, and Reed Cherlin for production support. Thank you. How does the brain process memories? Why is AI a solution and a problem for our climate? What is leadership in 2025 and beyond? The TED Radio Hour explores the
Starting point is 00:44:05 biggest questions and the most complicated ideas of our time with the world's greatest thinkers. Listen to the TED Radio Hour wherever you get your podcasts.
