Offline with Jon Favreau - Social Media Warning Labels, LA Kids Go Offline, and the Rise of Slop AI

Episode Date: June 23, 2024

Has this pod saved America…from phone addiction?! We got Jon Lovett to take a rather extreme version of the Offline challenge in Fiji, AND America’s top doctor and friend of the pod Vivek Murthy is now calling for a Surgeon General’s warning label on social media platforms. Max and Jon bask in their success, then mourn the dismantling of the Stanford Internet Observatory, the nation’s leading mis- and disinformation research organization. Then, Max sits down with longtime tech journalist Brian Merchant to talk about whether AI development is slowing down, why workers should organize against the technology, and what good AI use cases and centaurs have in common. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.

Transcript
Starting point is 00:00:00 We have all these great productive technologies, but the managerial class, the executive class, they've all realized how to capture the gains from those technologies. So it's not that if I get to use ChatGPT at work, it's like, oh, great, my job's done after 10 minutes because I just whipped up the assignment that I had for the day. No, I'm going to be expected to do 50 of those assignments. And then if there's 50 more assignments and they're each making that much more money, where's that economic gain going to go?
Starting point is 00:00:33 It's going to go to the folks at the top. And maybe I get a raise, but I'm certainly not benefiting from the full scope of the technological advancement or the increased productivity gains. And that's been the story of the last hundred years. I'm Jon Favreau. I'm Max Fisher.
Starting point is 00:00:56 And you just heard from today's guest, longtime technology columnist and writer, Brian Merchant. Brian has written for outlets like the LA Times and Vice's Motherboard. He's also the author of The One Device, The Secret History of the iPhone, and Blood in the Machine, which is a history of the Luddites and the modern movement against big tech. But Max, this week you chatted with him about AI, including one of the most ridiculous uses of AI we've seen yet. Want to say more? Yeah, people tune in for the interview. You're going to want to hear about this use of AI, but chatting with Brian was great. He's done a ton of digging into how AI is actually being used, who the actual clients are, the business models.
Starting point is 00:01:29 And it really changed my view of this technology and how it works. So people should tune in. Cool. All right. We'll get to your conversation with Brian in a moment. But first, Max, I think our podcast is working. It's working. It's working.
Starting point is 00:01:41 First, we got John Lovett to take a rather extreme version of the offline challenge in Fiji. Our secret project we've been working on for a year now. We're going to talk to him all about it when he is allowed to talk about his experience. That's right, because we respect contracts here. Of course. So that was one. Now, people in power are finally starting to take on the social media platforms, at least as it relates to kids. America's top doctor and friend of the pod,
Starting point is 00:02:05 Vivek Murthy, is calling for a Surgeon General's warning label on social media platforms. Of course, Surgeon General labels are most commonly seen on tobacco products, where they warn about the long-term dangers of smoking. Murthy argues in the New York Times that similar labels warning about social media's
Starting point is 00:02:24 mental health harms would go a long way in encouraging parents and their children to change their relationship with social media. One catch here is that the labels would require congressional action. We know how likely that is. But it's still one of the most significant calls for social media regulation on a national level
Starting point is 00:02:43 that we have heard yet. How do you feel about this? I'm bending my brain to try to imagine, try to picture what the social media warning label equivalent is going to be of the cigarette warning label with the blacked-out lungs. Like, is it an outline of someone doomscrolling? Is it a big sad face emoji? Like, it's probably a picture of Jon Favreau, which I actually feel would be the best warning label. Wow. That was tough. That was tough. I think it's like someone staring off into space.
Starting point is 00:03:08 Like someone just like looking at their phone. Yeah. Maybe with like zap of dingbats coming out of their head. Maybe they're sad. They're sad. Yeah. They're angry. Anyway.
Starting point is 00:03:15 No, I think this is great. I think it's great. The research on the effectiveness of cigarette warning labels is like slightly mixed. Some show that it's like really positive. Some it's tougher to disentangle the effect. but nobody thinks that it has any downsides. And even if it doesn't like radically push down social media usage directly, I think it could be really, really effective at normalizing the idea that number one, social media's harms are an established fact. Look, the government says social media is bad for you. That's pretty conclusive. And number two, it normalizing the idea that the government can and should regulate
Starting point is 00:03:49 social media, which is a seal that has been a little bit tough to break. And looking at that image every day, I think it's going to really change or would change people's mentality about how they think about like government involvement in social media, make that feel more acceptable and normal. If it passes Congress, I heard Dr. Murthy say that they'd go through a scientific process to determine like what the language is, right? So the language has got to be based on the research, right?
Starting point is 00:04:15 So it's a social, you know, the government says social media may cause harm, right? Whatever that may be. Maybe there'll be an image. Maybe it'll be me. So, you know, that all remains to be seen. You know, just to get the other side of the argument, since obviously you and I, big fan of this.
Starting point is 00:04:32 Caroline Mims-Nice writes in The Atlantic that Instagram is not a cigarette. That's the title of The Atlantic piece. I disagree. I mean, but no, I think it's worth unpacking. Of course. No, no, of course. It's true in that it doesn't cause cancer. Sure.
Starting point is 00:04:47 But it's also a lot easier to diagnose cancer than it is to diagnose anxiety and depression. Right. And yet, still, we have study after study linking kids' social media use to anxiety and depression. Over and over again. Is it conclusive? It is not. But as Jonathan Haidt persuasively argues on this very podcast last week, that has a lot more to do with how difficult it is to conduct social science experiments on kids that accurately measure anxiety and depression. So it's like a tough measurement.
Starting point is 00:05:16 And of course, the scientific community should be cautious in jumping to conclusions. We don't want them to do that. But the fact that we have all this evidence, we're just like not going to do anything about all the evidence until every study all over the world, all at once, all concludes the same things. Like it's going to be too late. Which is the exact same debate we had about cigarettes. It took like 30 or 40 years of like, well, there's this one study we don't like. How do you establish a clear link? And like, now we just take it for granted. But part of the reason that we take it for granted is that the regulation happened and that made us all kind of like come together and agree it's like a matter of consensus the government stepped in also if i can kind of like take a step back and take like a bigger
Starting point is 00:05:57 a conversation i feel like i have a lot with people and you are probably having a lot with people too in the context of recording the wilderness, is people who are like, I don't want Donald Trump to be president, but I am kind of like searching for ways to feel excited about mobilizing for Joe Biden. We're searching for ways to get people in my life excited about mobilizing for Joe Biden. And a lot of times I'll say, you know, well, I'll bring up the Supreme Court and I'll say like, think about the Supreme Court seats that will or will not flip based on who's president. I think this is a big one. Tech regulation, I really think, is something that we can look at the last four years.
Starting point is 00:06:32 It's made a huge difference. The work that Lena Kahn's FTC is doing every fucking week. There's another set of regulations. They're investigating the AI companies for antitrust. There are these huge antitrust cases against social media companies that we've kind of forgotten about, but that got set out on basically day one of the Biden administration that are really chugging along and really producing a lot that could yield to something really big. If you care about the future of technology in our society and in our country,
Starting point is 00:07:00 that is something I think you can get excited about going out and pulling the lever for Joe Biden because he has done, his administration has done, is continuing to do a lot will do a lot and we know what the Trump administration would do with social media and technology which is a bunch of morons from the comment section running the government if you want if you want morons from the comment section running the government right you got your candidate right yeah so I would just say if you care about tech and whether it should be regulated, how controlled or regulated or how out of control it is, what role it plays in our society, this is something where this election matters a lot. On the other side, I've also heard people say like, oh, this is, I don't know if this is going to solve the problem, right? Like,
Starting point is 00:07:39 even if this helps around the edges, even if this is like marginal improvement, if a couple people, if some small percentage of people look at social media and see a warning label and decide not to use it that day, like it's better than nothing. The companies behave differently. Ray makes and other people make on like gun regulations. They're like, oh, background checks won't solve the problem. Okay. Well, yeah, not all the problem. But if it keeps some guns out of the hands of some criminals, that's pretty good. So it's like, I think it's a good place to start. It doesn't prevent you from using social media if you really want to. It just gives you a warning.
Starting point is 00:08:18 Absolutely. And I think we can see the difference in the Biden administration approach versus past approaches in the difference of how AI companies are treated, because those have risen in the Biden administration. They're very concerned about regulation. They're working with the government proactively. That has really shaped them. You contrast that with like the rise of the social media companies and like we kind of didn't know any better. So they were kind of allowed to do whatever and like now look where we are. Yeah. So beyond the Surgeon General's op-ed, there was potentially even bigger news on kids and phones in New York and California this week. On Thursday, New York Governor Kathy Hochul signed into law two bills aimed at protecting children from the harms of social media.
Starting point is 00:08:54 One law limits data collection of minors on social media, while the other bans addictive feeds, which New York State defines as feeds powered by recommendation algorithms like TikTok. And right here in our backyard, the Los Angeles Unified School District, the second largest school district in the country, banned student cell phone and social media use during the school day. California Governor Gavin Newsom has already called for a similar statewide ban. Big week. I'm interested in what you think of these moves, especially the New York addictive fee law.
Starting point is 00:09:28 Like, how does that work? What do you think about it? I think it's huge. I mean, the big question is going to be, can it pass a court challenge, which is coming, of course, but it seems like they have really crafted it with a mind of like learning from past states
Starting point is 00:09:41 that have tried to pass versions of this that have all got knocked down in court. Why is that? How can we work around that? But this is a real, it's not just a symbolic law. This really allows parents to, or I should say it sets a standard that kids cannot have algorithmic social media feeds on their phone without proactive permission from their parents.
Starting point is 00:10:02 That's huge. I know. And it's given the DA the power to fine the companies $5,000 for each violation, which if you're talking about millions of kids opening their phones every day, that's enormous. It's got real teeth to it. I think this is a really big deal, not just for the kids in New York who are going to be protected by this
Starting point is 00:10:19 and also the kids here in LA who are going to be protected by not having to bring their phones into school because they feel like everyone else has it. I think it's really big for, again, like the social media warning label, just like breaking the seal, establishing a norm that this is something the government can and should do to protect kids from technology that I think we all agree is harmful for them. I think that the school ban, we've talked about this before. I really think this is a no-brainer. I'm more convinced after talking to Haidt last week. Schools want it.
Starting point is 00:10:50 Parents want it. Kids want it. I do not see an argument for why children, young children, need to have their phones in school. They're already not allowed to use them while class is happening, and they're just using them anyway. So, like, what is the difference, right? And again, this sounds like we're old people, but like we went to school for many years with no phones and we were able to communicate with our friends, talk to teachers if our parents needed us.
Starting point is 00:11:17 They could call the school and get us at any moment, right? Like there is no need for kids to be on their phones at school. They can leave school, go home, be on their phones then, be on their phones in the weekend. We're not taking them away, right? But I think the phone thing at the school, having them in the classroom and having them during school is just like, no.
Starting point is 00:11:37 And this is, so the thing about feeds, like kids who are scrolling TikTok are going to be sad to lose their TikTok feeds. Like not all of them, but I think the data shows that like generally about half of people are on TikTok like it and the other half say, I want the government to take this away from me. That's telling. It's a high number.
Starting point is 00:11:53 But kids don't want to have their phones at school. This is something we have learned from one school phone ban after another is the kids prefer it. So literally the only people who wants this are like Mark Zuckerberg. And you know what? He's going to be fine. So on the addictive feeds, the challenge we keep coming back to
Starting point is 00:12:11 is age verification. And I still don't know how that works. The mechanism. Yeah, the mechanism for making sure that, I mean, because you're seeing this now happen with some of these states that have passed like anti-porn laws, especially for young people. know, some of these states that have passed like, you know, anti-porn laws, especially for young people. And then some of these, you know, some of the porn websites will
Starting point is 00:12:30 be like, oh yeah, we're not operating in these states anymore because we can't do age verification now. Like, who knows? I don't know enough about it. Who's the bullshit? But like, I wonder what the social, this is, this goes to the court challenge, right? Like, I wonder if the social media companies will come back and be like, well, if we have to do age verification, like, it's going to hurt adults too and we won't be able to do algorithmic feeds for anyone. Right. So, it's a valid and important question. There are third-party companies that, like, offer this service that can be outsourced. Certainly, the companies can afford it.
Starting point is 00:13:01 I think how I kind of feel is, like, I think it's actually okay if it turns out to be, like, pretty porous and if it turns out there's a lot of ways to afford it. I think how I kind of feel is like, I think it's actually okay if it turns out to be like pretty porous. And if it turns out there's a lot of ways to cheat it, I think it just needs to be effective enough to force the social media companies to change their calculus and to want to proactively actually have it work. It's a little like the Surgeon General warning label too. It just starts sending out a signal that like, we're taking this seriously and it's a problem. i think if like where it ends up is that it's like you know we're not going to require every like 12 year old to get face scanned and like give up all their bio data so we can confirm their id like even if it only accurately identifies one in ten that is still enough where the social media
Starting point is 00:13:40 company is going to be like okay well even if it's only one in ten we can't just like ignore it outright because of these fines so we're going to actually like, okay, well, even if it's only one in 10, we can't just like ignore it outright because of these fines. So we're going to actually do the work to implement it, to make it work, which is what you saw in like the EU regulations, just has to have enough teeth and be enforceable enough that it actually changes their calculus. I can give you two quotes from the industry groups on this. Okay. This was in the Verge piece about this. Are they outraged and offended?
Starting point is 00:14:03 NetChoice said in a statement that the law would, quote, increase children's exposure to harmful content by requiring websites to order feeds chronologically, prioritizing recent posts about sensitive topics. Come on. Yeah. Who do they think that's convincing? And then. The algorithm shows you the worst thing first.
Starting point is 00:14:22 We all know this. Adam Kovacevic, who's the CEO of Center Left Tech Industry Group Chamber of Progress, said, quote, It's a well-intentioned effort, but it's aimed at the wrong target. Algorithmic curation makes teenagers' feeds healthier, and banning algorithms is going to make social media worse for teens. Can I tell you how—this is actually kind of a like gratitude moment for me five years ago six years ago to tell people like the problem is the algorithms that first of all they would have no idea what you meant even as they were surrounded by them and second of all they would be like well really isn't it just giving me like things from closer friends first and i think now we all
Starting point is 00:15:00 understand that like yes the algorithm is the harmful thing and the fact that these quotes are so stupid they're like gonna make no difference like i think really speaks to all understand that like, yes, the algorithm is the harmful thing. And the fact that these quotes are so stupid and they're like going to make no difference, like I think really speaks to the fact that like we have achieved consensus on this, which was not inevitable because it's a lot to wrap your mind around to see that like this is actually the thing that is so harmful and impactful. Okay. So we have a surprise special session of our recurring segment. John is getting canceled again. Oh no. But this time John John is getting canceled again. Oh no! But
Starting point is 00:15:25 this time, John is not getting canceled by Twitter. He's getting canceled by me on behalf of his children. Oh my gosh. So... Have you been talking to Charlie and Teddy? I have been talking to Charlie and Teddy. They're very concerned because there was a recent study written up in NPR that looked at data from
Starting point is 00:15:41 10,000 12 and 13 year olds and their screen time habits. And they found that the, I don't know if it was exactly the number one, but like one of the overwhelmingly top predictors for excessive or unhealthy screen time use for kids was the amount that their parents use their phone around them.
Starting point is 00:16:00 John, how does that make you feel? It makes me feel okay. Really? Okay, good. I genuinely like do not use my phone as much when I am around the kids. Teddy, maybe not. Okay. He's an infant. He doesn't know.
Starting point is 00:16:13 Sure. It's fair. Charlie, especially now. And it's, I mean, it is now that I know this, it's also for his mental health and future, but it's just for my own. I find if I'm on my phone around him that I'm more distracted and annoyed and I like can't focus on him. And like when he's asking me to do something, then I'm like annoyed because I'm looking at something on the phone. I'm like, what? That's not his problem. That's my problem.
Starting point is 00:16:37 So I really am trying to put my phone down more. Okay. And I did see this study and it actually made me think I'm like, okay, maybe I've been backsliding. I gotta catch myself more. It's always a constant process of trying to be as good as you can. For everyone listening who has young children, the challenge for us now is
Starting point is 00:16:56 Charlie has dropped the nap. There is no more nap. And so now that he's dropped the nap, there is quiet time in his room. And for the quiet time in his room, he does have like a kid's iPad where he can't like go onto the internet, but he can watch his Peppa Pig episodes. Sure. So it is like, it is screen time and it is TV. It's not like scrolling places.
Starting point is 00:17:17 But it does, it was like the first moment where I'm like, oh, shit. Then I'm like, what else is he going to do in his room for an hour? Just like sit and stare at the wall. Like he can't read yet. I would be looking at my screen, certainly if I were him. Right. At least he's not like scrolling for shit. But yeah, no, I think it's better. I don't want to be on my phone around him. Do you feel you've gotten better about this since we did the like offline challenge here? Because I remember you talking about like the first week when we went
Starting point is 00:17:42 off our phones, Charlie noticed, right? Yeah, for sure. Well, it's so funny. Today, before we started recording this, I woke up at five in the morning, as I do. I went downstairs and I was like ready to prep for this episode. And instead of being able to prep for this episode before everyone woke up, Charlie started yelling at 5.30 in the morning that he was ready to be up because he was scared of something in his room. So I go up and get him. I come downstairs. I'm like, what am I going to do? I have no other time to prep for this.
Starting point is 00:18:11 So I grab my laptop, sat on the couch. He's watching TV. He's watching Peppa Pig. I'm on this. And like, he is, Daddy, watch. Daddy, look at this.
Starting point is 00:18:20 Look at this part. Like, he didn't want to watch by himself. He wanted to watch with me. As a shared activity. Yeah. And I ended up, like, not prepping as much as you can tell i do feel like that i know you're prepped wonderfully you're doing great um i do feel like that is something i hear a lot as guidance is it's not nobody is saying like set all of your screens on fire i mean i'm saying that but i'm also not doing it so you don't have to listen to me they're saying like you know,
Starting point is 00:18:45 make it a social shared activity with your kids. For sure. All right. I'm uncanceling you. You're doing great. That's great. Well, probation.
Starting point is 00:18:53 In less great news, thanks to Trump's goons, the nation's leading disinformation research organization is being dismantled. The Stanford Internet Observatory, which until now has been run by offline friends
Starting point is 00:19:05 Alex Stamos and Rene Duresta, was set up to track and analyze election disinformation. But now, after years of lawsuits from right-wing groups like the one run by fucking Stephen Miller, as well as a Jim Jordan-led investigation in the Republican-controlled House, they've seen their funding sources dry up, forcing Stanford to dismantle the project.
Starting point is 00:19:25 So Media Matters is also facing similar funding problems due to lawsuits and political pressure from the right. Do you think there's any way for organizations like these to get around this problem? I mean, it is definitely a big and rapidly growing challenge. I feel like especially for like the Stanford and Internet Observatory subset of like people who aren't just watching what right-wing influencers and republicans say but know how to actually like
Starting point is 00:19:49 look into the kind of larger universe of like what's happening behind the scenes at a lot of these pages or like how are they manipulating data how are they manipulating their user doing this kind of deep analysis and i think that reflects the fact that these groups like the peter teal groups like understand that this is a very meaningful check on them. So I think part of it, it should be just like this should be a wake up call for us who don't follow this stuff, that this is really important work that these folks are doing, like for the betterment of democracy. I do think that the part of the problem the Standard Internet Observatory ran into is that they are a non-partisan group which is great and it's like and i love that and that's great and it's a really good ideal to strive for but i just the nature of our world today any reason why maybe media matters isn't like a
Starting point is 00:20:35 slightly better position is it just like it's a shitty reality that polarization means that in order to survive the attacks from the right you kind of need the left rallied around you, at least to some extent. And therefore, it's really helpful to have like, and I don't like that. Like, I don't like feeding into the polarization on it. But like Media Matters can go to its funders, can go to its listeners or its viewers and say, like, we are an important part of pushing back on the right. Stanford Internet Observatory did not want to say that. And they had good reasons for not wanting to say that. Part of what's also a little bit unusual about the Stanford Observatory is they're housed in Stanford, which has a very close relationship
Starting point is 00:21:12 to Silicon Valley and tech companies. And they have been very... David Sachs on your ass. Right. And they're funded by a lot of technology companies and they've been very careful about navigating that relationship and some of their output is, you know, frankly geared towards making the social media or the technology companies feel bought into them.
Starting point is 00:21:32 That was probably going to be untenable anyway. So I think maybe the answer is for folks or part of the answer for folks like this to be housed at a university that has a different set of funders and relationships
Starting point is 00:21:42 and maybe around the East Coast. Well, so, I mean, you just described the work they do and it's working research into propaganda, misinformation, how it travels, it's tracking it. What the right sees, or at least what they say they see, their big problem is just these people are helping the government and the deep state censor
Starting point is 00:22:02 the shit that we say happens, a lot of times is not true right uh on social media biden mind and now they're silencing us even though these people have like the biggest fucking megaphone ever um and this is all gonna the supreme court will decide soon um the case was originally missouri versus biden it's now murphy versus missouri and it's whether the government um like pressured these social media companies to uh censor information in the run-up to the 2020 for which there's no on hunter biden's laptop all this twitter files bullshit and it's the idea that there's no evidence whatsoever that the government was like you must do this it was like hey we think this is misinformation about vaccines letting you know yeah And the social media companies made their own
Starting point is 00:22:49 decision. But this is how the right sees it, is that it's some like, they're helping the censors and it's like some big fucking cabal. It's just ridiculous. I mean, I think part of this is not by no means the most important part of this, the biggest part of the problem, but it's just like something that we can maybe make a difference on. The which does a lot of very good work has also published in stories that feed into this right-wing narrative that like the government is trying to do mind control on you and censor the left by telling the social media companies what to say for which there's no evidence look it's in the past it happened i i think it would be great if that publication which does do so much good work did did something to kind of say, like, actually, it turns out that they're not trying to do Biden mind control by telling the social media companies what to do.
Starting point is 00:23:31 And like, it seems like an easy win. teals and how they're trying to pressure these groups because they want to get the right-wing propaganda out and like help people who treat the intercept as they should as their primary news source to like see that this is a threat to them too yeah i agree all right before we go some quick housekeeping our first ever book democracy or else how to save america in 10 easy steps is out this week wow congratulations man thank you thank you lunch uh at this point, we all know what we're up against. American democracy is in crisis. We just talked about it. We did. In one aspect.
Starting point is 00:24:10 It can feel daunting. That's why in Democracy or Else, we broke everything down into the 10 steps you need to follow to get informed, get involved, and get off the sidelines. Cricket's donating its profits from Democracy or Else to Vote Save America and 2024 campaigns. So you're already making a difference when you purchase big text, cool illustrations, short chapters, and jokes that Lovett will take credit for. I had to change that because it was like, Lovett's jokes are in there. It's like, I'm sorry.
Starting point is 00:24:36 There's four of us that wrote that. I've read it. I feel like you can tell the Lovett jokes. And it's kind of fun. Yeah, you definitely can tell the Lovett jokes. There are also other jokes that Josh wrote, that I wrote, that Tom wrote. Sure, yeah. Anyway, what more can you ask for? Pre-order your copy now
Starting point is 00:24:49 at Cricut.com slash book. Alright, after the break, Max talks to tech writer Brian Merchant. AI as we know it is maybe Thank you. All of which means that we're now at a point where we can look around and get a sense of what a world with AI is going to look like. Here to talk with me about the ways that this new world is clicking into focus and what it means for all of us is journalist and author Brian Merchant. Brian recently published Blood in the Machine, the Origins of the Rebellion Against Big Tech. And he chronicles our new AI era for publications such as The Atlantic and his excellent Substack. Brian, welcome to Offline. Thanks so much for having me, Max. So I want to start with this story that you
Starting point is 00:25:50 wrote about recently and that I think both of us were very freaked out by, but also feels very indicative of the new era of what some people are calling slop AI. It's a story involving AI and a bank called First Horizon. So walk us through what happened there because it is wild. Oh, yeah. This is maybe my all-time favorite dystopian AI enterprise use case so far. So this bank, First Horizon, it's like a banking chain in the southeast. Pretty big, has a bunch of branches, and therefore has a bunch of people with customer service needs
Starting point is 00:26:28 and customer service problems. So they call into this call center where they reach an automated system. We all know this, right? We call in an automated system. We get put on hold. We hit number five. It takes us not where we want to go. We hang up.
Starting point is 00:26:43 We try again. We get angry, right? And when you finally reach a person, you know, you unload on them maybe a little bit. Maybe you don't mean to. Maybe, hey, this is my, you know, this is my issue. I got to be somewhere. Can you solve this for me? Whatever.
Starting point is 00:26:58 You're very frustrated by the time you get on the phone. You're angry. You're, you know, you're trying to hold it in. Everybody's had this experience. Now, this means that these call center workers have a really high turnover rate, low quality of life on the job, and they quit in pretty high numbers. And some of them start to get frustrated and, quote, lose it, as bank management says. So they have turned to a system designed by Cisco, actually, that company Cisco, where they're going to deploy an AI. And the AI is supposed to monitor you and your reaction, and if you start to get upset or agitated, or it looks like you're going to, quote, lose it, the AI is going to then play a montage of your last family vacation set to soothing music so you can, quote, reset. Now, maybe it doesn't sound so bad on its face, but there are two things that are important here.
Starting point is 00:28:01 Number one is that automated system that I mentioned up front. So this is decades of automating other jobs, like the poor call center worker that you finally reached, who could have done a much better job in answering that question in the first place. The bank doesn't want to pay all those workers, so it replaces them with this automated system. So there are just fewer people to answer calls. Number two, the bank could do something like offer more time off, hire more workers back. But no, instead it's using this AI to sort of paper over the issue as a band-aid, right, so that they can kind of be distracted for a second.
Starting point is 00:28:37 And it really just kind of feels like what a lot of these AI deals are these days on the enterprise level, which is one manager talking to another, not really thinking about the actual workers or people impacted by the AI saying, you know what, this sounds like a great solution. If we play them soothing music, I bet we can increase productivity among our workforce by 3.4% or something like that, right? And meanwhile, none of the systemic issues get solved. Customers don't get any happier.
Starting point is 00:29:03 And workers only continue to get more and more aggravated and perhaps even more surveilled, because you'd need a lot of surveillance technologies to watch what these workers are doing in order to… And to monitor their emotions. Exactly, right? It's this world where we have these AIs that are monitoring us and how we feel to make sure that we are always in an emotional mental state that will maximize profitability for the companies that we work for. And it's actually, I think that nobody clocked this because it's a movie that has been kind of forgotten. George Lucas directed this movie called THX. Have you heard of this? Yeah, this is before Star Wars. Yes. Okay. Okay. So you do know it. And this AI used by First Horizon, which I think they ended up walking back,
Starting point is 00:29:48 but they did like float that they were going to do it, is straight out of that future dystopia where we're just like so much further isolated from each other that we are just being like guided by these AIs that just want us to be just happy enough and just productive enough to churn out some more labor. But I think to your point, part of what makes this story, I think, really important is the way that it speaks to what AI is actually being used for and who it's being used by.
Starting point is 00:30:15 Because it's very different than this kind of sales pitch that we got from like an OpenAI product demo that is like, wow, look at your super genius personal assistant that's like Jeeves from the Iron Man movies, or not Jeeves, Jarvis. Thank you. Yes. Or, you know, it's going to be this, it's going to cure cancer, or it's going to be an incredible artist. But it has a very different use. So like, to what extent do you think, or what is the use case that we are learning about from incidents like this, that like AI is actually for, and who it's for? Right, yeah. No, that's a great question.
Starting point is 00:30:48 That's the million-dollar question right now. Because we're seeing right now, a year into this AI boom, that a lot of these companies, and it has been a lot. There's been a lot of Fortune 500 companies, a lot of consulting firms have bought hundreds of thousands of what they call seats in the enterprise version of generative AI, productivity software, things that are supposed to help organize your day, automate your sending of emails, write your schedules for you, do all these things. They're finding, I think, that it's not working as sort of, you know, holistically as they would have hoped. So we're seeing a lot of reports, and it's all kind of kept a little bit hush-hush because, you know, these consulting firms themselves stand to make a lot of money by turning around and selling them to other clients or recommending that other clients adopt them. So we have to sort of be wary that there's a vested interest on a lot
Starting point is 00:31:53 of layers here in adopting this stuff. But the message that's kind of coming back right now is that a lot of companies are kind of hitting the brakes because a lot of use cases like, you know, turning over your, you know, many of your main tasks to AI just isn't viable. The systems aren't good enough. They're not reliable enough. Workers have to spend a lot of the time sort of checking their output to make sure that they can, you know, pass it off as work that won't get them in hot water. There's security issues. There's issues with intellectual property. Can you even use the output of this in a certain context?
Starting point is 00:32:28 Or is your company potentially going to get sued if it does? So there are all of these questions with how these tools can be used in the workplace in an efficient way. And we're seeing this really sort of paper-thin kind of band at the top of use cases where it's like, well, actually, we can use it for that. And you had these two contrasting kind of forces where, you know, managers want this to work, right? They want to be able to automate as many jobs as they can. They want to decrease labor costs. They want to turn around and tell their bosses, the executives that, you know, oh, we were able to, you know, decrease labor costs by 10% this quarter,
Starting point is 00:33:09 and it was thanks to OpenAI. Don't I look like an innovator? Oh, I see. So they really want to push this stuff, especially into sort of corporate environments, but they're finding there's only select use cases where it can work or where people don't sort of rebel. Midjourney, or any of these tools that can generate images, is, unfortunately for artists and working illustrators and graphic designers,
Starting point is 00:33:39 one of the use cases that we're seeing on the ground actually sort of being adopted. And illustrators and graphic designers are kind of taking it on the chin right now. And so that is something that, especially if it's not going to be public facing, you know, if you're not going to have to put it on social media where you'll face the wrath of the public if you use AI art, if it's just going to be for an internal presentation or a, you know, a PowerPoint presentation that you're going to give, or sort of internal marketing materials, anything that's not going to face the public eye, a lot of people are turning that over to Midjourney-created work. So you have these sort of subsections where people are really vulnerable. But by and large, it is so far not turning out to be this one-size-fits-all.
Starting point is 00:34:27 Everybody's job is threatened. Although I should also note that it does benefit most managers to promulgate the belief that it does, right? Because it also serves as leverage. You're on your heels. If you always think that AI could do your job, then maybe you're not going to ask for that raise. I think this is a really important point. The idea that because we're at this kind of moment where a lot of us have seen AI actually being used, where it's the like AI Google search, or we've like interacted with it. And it's like, it's actually not that great. It's like kind of impressive in some ways, but like not really enough that it's going to like
Starting point is 00:35:03 do things that humans could do. But at the same time, we are seeing a lot of corporate clients for it. And I think if that seems weird and incongruous, I think a lot of it speaks to your point that part of what it is in some ways designed to do is to appeal to corporations, senior managers, middle managers as a way, like, look, we understand this is not going to replace, it's not going to be good enough to replace human work, but it's good enough that you can sell your bosses on it being, like, innovative and forward-thinking, even if it's only halfway successful, and you can use it as leverage against employees. I mean, it's also, I think it's really pertinent that labor is
Starting point is 00:35:42 the most expensive it's ever been right now, which is great if you are a worker, and means that if you are a middle manager, you're facing a lot of incentives to cut those labor costs. Something that you have written about that I think is a really important point in talking about AI's impact on labor is that you're certainly not, I would say, optimistic about it, but you have said that this is not going to lead to a jobs apocalypse where entire sectors are erased. So talk about how you actually see, and I think the First Horizon story speaks to this, how you actually see the impact of AI on labor, like in practice. Yeah. So, you know, ever since I published this book about the Luddites and, you know,
Starting point is 00:36:22 they were rebelling against kind of the first regime of automation that was put in place by factory owners and their early entrepreneurs. They were going to try to automate the production of cloth goods. So, you know, as sort of an example of how automation hits workers, it's really useful, because you see the ways that, you know, the machinery couldn't do as good a job as the skilled artisans, right? It wasn't that it was going to, you know, benefit all the workers. It was being used for a specific purpose by some factory owners who saw an opportunity to use the machines to sort of channel more profits into their own pockets and then to de-skill workers. Instead of a skilled worker using the machines, you could churn out a worse product overseen by someone who was a child. A lot of times it was literally a child.
Starting point is 00:37:24 Child labor was rampant in the Industrial Revolution. So we can look at that as kind of an example of what might happen today too with a lot of creative labor, where I don't think anybody's really concerned that the generative AI systems are going to outperform humans in writing text or creating images or writing code. The fear is that companies will use it to sort of de-skill workers who already have these jobs, or again as leverage against them, and sort of drive down the prices that it costs to purchase illustrations, these kinds of things,
Starting point is 00:38:09 and then to slowly erode the quality of life for working people. Sorry, what does de-skill mean in that context? Yeah, de-skill is a term economists use to describe what happens when you have a technology that can do some of what a, quote, skilled worker does. You know, if I'm a copywriter, for example, and I've been a copywriter for 20 years, there are a great number of skills that go along with that job. That's sort of understanding the client's request, writing with some flair, you know, writing with good grammar, all these things. A machine, you know, might not be able to do all those things, but it can do a chunk of them.
Starting point is 00:38:49 And then you can hire somebody who has a basic understanding of grammar to check it to make sure it checks out. So you've de-skilled this copywriter. And now corporations are saying, well, we don't have to pay this guy, you know, 70 grand a year. We can just go for a ChatGPT subscription and then hire somebody on contract to just check the output. And then you've de-skilled that worker. So when this automation first started, you write about a prediction that the economist John Maynard Keynes made about how this was going to be so great for workers.
Starting point is 00:39:31 Can you talk about what his prediction was and what he got wrong ultimately? Yeah, so Keynes was writing this paper sort of looking at technological trends and sort of speculating as to the world that his grandchildren would inherit in terms of the amount of work that they would have to do. And he saw how much productivity technology was creating, how much it was producing, and just extrapolating from that, he said, well, it's producing so much, we'll all be so productive, we won't have to work at all. We'll work five to 10 hours a week if we want to. Five to 10 hours. Five to 10 hours. By 100 years from when he was writing. And so
Starting point is 00:40:12 that's right about now. Obviously, that hasn't happened. And the reason is, is because we have all these great productive technologies, but the managerial class, the executive class, they've all realized how to capture the gains from those technologies. So it's not that if I get to use ChatGPT at work, it's like, oh, great, my job's done after 10 minutes because I just whipped up the assignment that I had for the day. No, I'm going to be expected to do 50 of those assignments. And then if there's 50 more assignments and they're each making that much more money, where's that economic gain going to go? It's going to go to the folks at the top. And maybe I get a raise, but I'm certainly not benefiting from the full scope of the technological advancement or the increased productivity gains. And that's been the story of the last 100 years.
Starting point is 00:41:06 That's why, even as technological productivity has continued to grow and we can do all kinds of wonderful things with technologies, working-class wages have remained relatively stagnant for decades, while the folks at the top started enjoying a bigger and bigger slice of the pie. Well, so just to take the other side of this, isn't there a case, if you were to look at, for example, the introduction of automation in textile factories in the early 1800s, a case that on net the world is better off? Because I take your point that the quality of textiles went down, but like,
Starting point is 00:41:48 isn't it great that we have like mass-produced clothing now, that everything doesn't have to get bottlenecked by skilled laborers, which is very expensive, and there's only so many people who can do it? Yeah, it's a great question. It's probably the number one question I get, you know, to challenge this, and it's fine. And what I always say is that there were a number of different paths available, and it's not either-or. And I think one of the biggest mischaracterizations of the Luddites, who, you know, with their backs against the wall, their families hungry,
Starting point is 00:42:18 they eventually took up their hammers as a tactic of last resort and did what they became famous for, this campaign of machine breaking where they would smash the machines that were, quote, stealing their bread. But they were not trying to halt progress. They weren't trying to keep things as they were. They were essentially demanding a better mechanism for unleashing technological development, right? They were demanding a seat at the table, essentially, by wielding their hammers. They were doing what historians have called collective bargaining by riot, because you couldn't form a union back then.
Starting point is 00:42:59 It was illegal. There was no democracy. So they needed a way to sort of exert their collective voice, the voice of the working people. And that's why they took up hammers. It wasn't their choice; they just didn't have any other options. But that said, we can imagine any number of futures that are not that, that are between smashing the machines and allowing a small handful of factory owners to organize them into sort of monolithic operations that they then profit from at the expense of everybody else. You know, what if there were collectives? They would spring up decades later, after conditions were better for them
Starting point is 00:43:37 and when workers had a little bit more bargaining power. But you can imagine any number of futures where workers have more control over the machinery, that sort of all boats rise together. Maybe you could have smaller collectives where the machinery was introduced and you're not de-skilling workers, but you are becoming more productive. And it all comes down to the fact that the mode of technological development that we have had in place for the last 200 years is a profoundly anti-democratic one, right? Where if you got the money, you've got the influence, if you've got the connections, you call the shots at how technology is going to be developed and how it is going to be deployed
Starting point is 00:44:16 and who it's going to serve, right? Well, to your point, the clients of OpenAI are, they're not the illustrators, they're the companies. Exactly. And that's why we're seeing some very interesting pushback right now. Like, take the writer's strike. If you look at the thing that kind of bubbled up to the top of that, it didn't begin as their chief objective, trying to, you know, fight AI. But it sort of quickly became one of the most public objectives in challenging the ways that the studios wanted to use AI or they wanted to reserve the right to use AI. And the writers said no. And I've argued that they used a real, truly a Luddite tactic to say no,
Starting point is 00:44:58 to refuse this use and say, absolutely not. You cannot use AI. The studios don't get to decide how to use AI. And if anyone's going to decide how to do it, it'll be us. And so in that contract, they did ultimately win, which is quite a remarkable thing that I think deserves even more attention than it got. They reserved the right to bring that ability to use AI or technology into their own labor process. They get to consent to how they use the technology. If they want to use it, yeah, they have the power.
Starting point is 00:45:33 They're empowered to use AI. A lot of them won't because they have ethical concerns with the way the systems were trained and all of that. But it is up to them, not the bosses. And that speaks to all the other things we've been talking about too. Well, how do you get sort of the economic benefits to sort of impact workers more than just bosses? Well, that's a really good way. If they get to decide how it's going to make them productive, how they're going to benefit from it,
Starting point is 00:45:59 giving workers that power is a good way to do that. So we are at a fork in the road, so to speak, but the choice is not between progress or no progress, which I feel is how it's often framed, certainly by the AI companies, by Silicon Valley, and I think more broadly in the conversation. I understand why. If you're a consumer and you just feel like AI is in front of me,
Starting point is 00:46:20 it's a choice of do I want this or do I not want this? But it sounds like what you're saying is that the actual fork that we are between is between implementing this technology in the way that past automation has been implemented, which is primarily to serve the benefits of companies, which structurally makes sense because they're the ones who are making the deals with the companies. That's just like how the business is structured. And that is not something that certainly is beneficial for workers, but it's also not something that produces necessarily a good product that is better off for consumers or for the rest of us. And the other choice is for a more equitable distribution of gains, which is
Starting point is 00:46:55 hard to get to because as you were saying, like just the structure of like who the clients are of the AI companies and like who they are dealing with and who they are selling to is a very narrow band of people. But it does sound like it's, there are at least like people figuring out ways to kind of navigate this. I'm going to ask you a little bit about kind of like how this plays out in applications that like you or I would see as a consumer, because I think we can all think of a lot of ways that this is played out in a way that looks bad. Like Illustrator is an example you mentioned. And I think we see AI illustrations on our Instagram feeds, whatever, and they look terrible. And it's also bad for workers.
Starting point is 00:47:36 The New York Times is trying to lay off half of its art department right now to replace it with AI. That's scary. If you're an illustrator or if you're just anyone who works in a field that could be impacted by this. But again, just to like try to take the other side of this or try to think about the fact that there are ways that progress can at least theoretically be good for everyone. Like I can think of ways that AI could be helpful in the production of the show. Like it would be really helpful to be able to be like, hey, Brian Merchant is coming on. I would love to ask a chat bot that is not shitty to like help me understand some key points in his book, help me understand how this relates to other like works of literature on the history
Starting point is 00:48:14 of labor organizing, things like that. And that is something that would at least in theory not replace existing labor, but would free up, you know, it would free up me, it would free up the hardworking producers in the show to focus on other things. And maybe that's something that like helps make the show a little bit better, helps make our lives better. I mean, number one, am I being overly optimistic? And you can tell me if I am, but number two, how do we get there as opposed to what's happening with say illustrators? Yeah. So I think that's an interesting point. And I think it actually links back to the first topic that we were discussing, which is how AI is used as it is with First Horizon Bank
Starting point is 00:48:52 to sort of be a band-aid over decades of other measures of cost-cutting and bad automation and things like that. Because one question I always ask with AI is, like, what is it really for? So not to pick on this example that you just used, but, you know, preparing more quickly for an interview and sort of getting the key points and sort of essentially, you know, automating the reading of the book or whatever. And sure, there may be books you don't want to read, and it would be useful to sort of get that digest. To be clear, I do want to read your book. Good, I brought one.
Starting point is 00:49:34 But that's to me so... What is the sort of the force behind the compulsion to read this book more quickly? Like we have more things we want to do. And I think this juncture where we're thinking about AI and what it means, it gives us an occasion to ask, you know, why do we want to automate this task? Why do we want this to be turned over to AI? Is this something that we were previously doing
Starting point is 00:50:04 that we didn't need to do at all in our economy? Is this something that we wish we could spend more time doing? Is this really something that we want to automate away? Because I see a lot of the use cases floated, like, oh, it's going to help with marketing emails. Well, does that just mean everybody's going to get a hundred times more marketing emails? That doesn't sound great. And again, it's going to mean the worker has to send a hundred times more of them too. Do we want AI-generated art? I mean, I think, you know, you could do interesting things with it if it wasn't simultaneously this existential risk to an entire trade that I think
Starting point is 00:50:44 most of us agree is pretty important. I think most of us want illustrators that can make a living plying their trade and selling art. Are there some interesting use cases for AI art? Maybe. But what do we want companies to be able to use it for? Do we want a company to budget for paying, you know, an artist? I would say yes. And vis-a-vis the actual use cases where it shows up, you know, maybe there's a net gain for society where you can have some cheap auto-generated art. But again, I think now is a really good occasion, as we're brushing up against a lot of these limits of what AI can do,
Starting point is 00:51:25 to really sort of consider each of these use cases and ask, like, do we want our world to go in this direction and to further automate all of these things? I would say that there are some things that we really don't want to automate. And that's where, you know, giving workers more power over answering those questions, so they're not answered for them, is important, and where we can sort of selectively negotiate the incentive structure of these things. You know, there are a lot of cases where, you know, if it helps a nurse or a doctor, then that's good. But we would want it to be used as an additional tool by a trained professional, not to replace them. So we have to figure out how we're going to make and navigate all of these decisions.
Starting point is 00:52:25 Because again, if we just go on autopilot, then the market incentive, the profit motive, is going to take over and it's going to answer all these questions for us. Well, let me throw another example at you, because I think this speaks well to your point about not wanting to do it just on autopilot, and how important it is to do it with the buy-in of the workers who are actually implementing and overseeing it. And it speaks to some of the unforeseen risks. I'm sure you saw this study about MRIs, the one that found that an AI spots 7% more cases of prostate cancer on MRIs and has fewer false positives than human operators, which seems on its face
Starting point is 00:53:06 unambiguously good, right? Yeah. And that would be great. What a wonderful tool that the doctor that's reading those MRIs could use to then double check, to consult. Again, that would be great as long as the adoption of that technology doesn't also sort of precipitate a demand for less of that doctor. Well, he's got the AI tool. He doesn't need to spend as much time with patients now. If it's a benefit, then that's great. If it's an additional sort of arrow in the quiver, I think Cory Doctorow, the tech writer and
Starting point is 00:53:45 commentator, has called this centaur AI, where you want the AI to be empowering the centaur, not guiding the horse. If it's augmenting the experience of the worker, then great. But if it's
Starting point is 00:54:02 driving the whole operation and forcing the worker to make different calls, then it's not something we want to deal with. Wait, who's the centaur? So if you have a centaur, right, then the worker, the person, is at the front, and the automation is at the back, and you're drawing the power from it. You're still making the call. I see. Yeah. So that speaks to something like an AI that can better identify prostate cancer in MRIs. Are we sure that's something that is being utilized by nurses and doctors in a way that says, hey, this is a better way for me to identify cancer, and not a way for hospitals
Starting point is 00:54:35 to say, let's slash a third of our nurse workforce in the MRI unit. Correct. And that's always going to be the trick is, you know, how do we make sure that that decision-making power, especially in, you know, crucial situations and in crucial occupations, not that, you know, there aren't many, many, many crucial occupations, but things like education and healthcare. You know, we're seeing a lot of encroachment by AI systems. And again, I'm not saying there aren't great things that can be done, but I think we should trust the people on the ground, right? The workers, the doctors, the nurses, the teachers, the people who have knowledge, who have institutional knowledge of where this might be useful to sort of look at the AI and say,
Starting point is 00:55:22 this could help me here. This could help me here. This could be a big problem over here. So we don't want to use it over here. And in fact, nurses have actually organized protests recently against Kaiser, a big hospital system that is instituting what they think is too much AI for things like diagnostics. And it's leading to false diagnoses in a lot of cases. Again, maybe statistically it's better, but it's something that now, again, the nurses are expected to pick up the pace because they have this tool. It's being used by hospitals as an excuse not to hire more nurses and cut costs. And it's really still in kind of an experimental phase. And the nurses are saying,
Starting point is 00:56:06 we cannot have this in the hospital environment yet. It's not ready yet, and it's putting patients in danger. So, I think we want to listen to the healthcare workers in a case like that and figure out how we're going to move forward. And, you know, a common theme you're going to see is the nurses have a pretty strong union, and they're able to sort of register these demands. Where we're going to see more of the sort of damage done to professions is in more precarious, more atomized professions. Again, illustrators, freelancers, copywriters, people who work at home who don't have networks or unions or any even informal modes of organizing to kind of push back collectively. I do feel like that story of like the nurses protesting against how Kaiser is using AI, I feel like I encounter versions of that so frequently where I will talk to people who are
Starting point is 00:57:00 like, look, I think AI could be really useful for the kind of work that I do. And I can imagine all these uses for it. That would be great. And like, I'm really open to bringing this into my work, but that they feel like the way that it's being imposed on them is not to make their work better or produce a better product, but rather to shave down labor costs by saying like, well, you're going to be 60% more productive now. So that means we're going to reduce your team by 30%. So actually in the end, the product is the same or worse because there's not as many people to oversee the tool, which also takes time. But just the company is saving money and labor again at a time where labor costs are really high. So all companies are feeling really, really motivated to try to push on that. So, okay. Crystal ball time. I know it's like,
Starting point is 00:57:43 it's impossible to say where this technology is going. It's impossible to predict like so much of it is driven by market forces and like what's going to happen with the Nvidia stock price, whatever. But let's just like imagine the current trends stay as they are for the next five years in terms of the technology, in terms of how companies are using it, where it's being used. What do you think AI's place in our world looks like in five years? Yeah, that is such a thorny question because AI is unlike any other technological development and consumer trend that I've covered as a tech reporter in all my 15 years or so on the beat. And that's because the levels of hype are so gargantuan. The number of people who sincerely
Starting point is 00:58:33 believe that this can do all this stuff or that it poses an existential threat to the end of the world, which I could not be worried any less about. Yeah, I agree. I don't think it's going to turn into Skynet. Yeah. But there are real applications. And as we've discussed here, there are real applications that corporations are very excited to try to put into play. And so there's a real thirst for it. There's also been five years of somewhat sort of stagnant development out of Silicon Valley. And so they finally found something that everybody wants to use and try and talk about. So after the metaverse and Web3 and crypto, all these kind of, you know, maybe duds is too aggressive,
Starting point is 00:59:14 but things that didn't capture the imagination, right? We didn't call crypto a dud. Yeah, I think we didn't call crypto a fraudulent dud. But we are at this moment where there's so, Silicon Valley has basically pushed all the chips onto the table, onto AI. And so we have to reckon with the ramifications of that, whether or not the technology is world changing or not. The chips are there. NVIDIA is worth more than $3 trillion.
Starting point is 00:59:39 It's worth more than the entire London Stock Exchange combined. Wow, I didn't realize that. The most valuable company in the world, more than Microsoft, more than the entire London Stock Exchange combined. Wow, I didn't realize that. The most valuable company in the world, more than Microsoft, more than Apple. And people are kind of going, okay, we're in uncharted territory here because we're in a big, big bubble. And I think even AI supporters recognize that. So all that has to be taken into account first and then looking at the technology on the ground. And there are a lot of really smart people who are getting pretty skeptical that the state of the art is going to advance sort of with the current sort of technological systems that they have, these large language models that have just sort of already kind of been trained on just about as much data as they can. And so we're seeing the rates of return on
Starting point is 01:00:31 their improvement sort of declining. So it might not be the same exponential levels of, you know, improvement in how well the systems can produce good output. That doesn't mean they won't get better. How better they'll get is the million-dollar question. Everybody's been waiting for the successor to chat GPT 4.5. They want GPT 5, and that's supposed to be the next big boom. But it hasn't come, and it keeps not coming. And people are starting to wonder if it's because it's a little underwhelming. Is it going to really sort of deliver?
Starting point is 01:01:09 Because now the stakes are high. Now you've got hundreds of thousands of corporate clients buying this stuff who've been sold the bill of goods. So you have to deliver on those contracts. Oh, yeah. And you saw this reporting from the information a few months ago where sales reps from Microsoft and Google are saying, maybe try to tamp down the expectations just a little bit.
Starting point is 01:01:30 Just, you know, don't, we got to walk this line because this is still supposed to be able to have the power to turn into Skynet and, you know, destroy the world. But it might not be able to sort of write those marketing materials as well as we thought. Okay, yeah. If the companies themselves are saying, let's manage expectations and try to like... Some of them are. Sure, some of them are. That feels telling. Well, Brian, it was so great to chat.
Starting point is 01:01:57 I really enjoyed this conversation. Thank you so much for coming out. Yeah, me too. Thanks so much. Cheers. Cheers. Mixed and edited by Jordan Cantor, audio support from Kyle Seglin and Charlotte Landis. Jordan Katz and Kenny Siegel take care of our music. Thanks to Ari Schwartz, Madeline Herringer, and Reid Cherlin for production support. And to our digital team, Elijah Cohn and Dilan Villanueva, who film and share our episodes as videos every week. Thank you. the weekly show with Jon Stewart. Every Thursday, spend close to an hour with Jon and his special guests as they delve into current events, from the 2024 election to the ups and downs of the economy
Starting point is 01:03:08 and all of the chaos and corruption in between. The weekly show with Jon Stewart will leave you saying Thursday is your new favorite day of the week. TGIT, thank God it's Thursday. Yeah, thank you. Listen to the weekly show with Jon Stewart wherever you get your podcasts.
