Hard Fork - Is This an A.I. Bubble? + Meta’s Missing Morals + TikTok Shock Slop

Episode Date: August 22, 2025

Note: This episode contains sexually explicit music.

This week, the whole tech world seemed to be asking: Are we in an A.I. bubble? We'll explore the cases for and against, including who we think stands to lose most. Then we're joined by the journalist Jeff Horwitz to discuss his blockbuster reporting about an internal Meta policy document that permitted the company's chatbots to engage in romantic role-playing with children. And finally, Casey introduces Kevin to a shocking new TikTok trend.

Guests: Jeff Horwitz, investigative technology reporter for Reuters

Additional Reading:
My Dinner With Altman
Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off
Meta's A.I. Rules Have Let Bots Hold 'Sensual' Chats With Children
Why Is TikTok Overflowing With A.I. Country Music Erotica?

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript
Starting point is 00:00:00 All right, some of these chatbots are too literal. Do you know what I mean? So the other day, I'm on YouTube, and I'm watching videos from the Outside Lands Festival that we just had here in San Francisco. Great music festival. And I was watching the set by the band Role Model, and they do this thing when they perform their hit song, "Sally, When the Wine Runs Out," where they bring in kind of a special guest to dance. Okay? This is, like, something that you'll see on YouTube if you look, right? And this guy runs out on stage, and I think, is that the pop star Troye Sivan?
Starting point is 00:00:32 Because I thought it was the pop star Troye Sivan, but it was sort of, you know, very quick, and he's, you know, spinning around. I didn't, like, get a good look at him. So I thought, I'm just going to ask ChatGPT about this. So I said, hey, did Troye Sivan come out during Role Model's set at Outside Lands? And you know what it said? What? Nope. Troye Sivan did not come out during Role Model's set at Outside Lands.
Starting point is 00:00:50 He had already publicly come out as gay back on August 7, 2013 via a heartfelt YouTube video. And I was like, that's not what I was talking about. And then it said, what happened at Outside Lands this year was a surprise live appearance. He hopped out on stage. And then it was basically like, yes, he did. So anyways, I thought that was a little crazy. That is crazy. You know, they're actually building an AI system that can determine when every gay person in the world has publicly come out.
Starting point is 00:01:19 Do you know what they're calling it? What are they calling it? Gay-GI. Oh, man. That's great. That's great. Yeah. Yeah.
Starting point is 00:01:27 I'm sorry for your troubles. Anyways, congratulations to Troye Sivan for coming out both on August 7th, 2013 and August 8th, 2025. Albeit in slightly different ways. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week,
Starting point is 00:01:50 are we in an AI bubble? We'll make the case for and against. Then, journalist Jeff Horwitz joins us to discuss his blockbuster story on how Meta AI was instructed to engage in romantic roleplay with children. And finally, I tell Kevin about my favorite new TikTok trend, and it's filthy. God help me. Well, Casey, today we are going to talk about a question that has been percolating for a long time,
Starting point is 00:02:25 but that is really gaining some attention this week, and that is: are we in an AI bubble? Yes, this is something that people love to talk about, and there have been some news items recently that have led even more people, I think, to get in on the conversation. Yes, and strangely, I think this latest bubble news cycle originated at a dinner that you and I attended last week
Starting point is 00:02:47 together with Sam Altman. That's right, and I think it's because they saw the bill for everything we ate, and they thought, how can one company possibly afford all of this food? Now, Casey, we should talk a little bit about this dinner because it was quite unusual the way it sort of came together. But before we do, we should make our disclosures. The New York Times Company is suing OpenAI and Microsoft over copyright violations related to the training of large language models. And my boyfriend works at Anthropic.
Starting point is 00:03:11 Okay, so back to this dinner. Casey, can you sort of set the scene a little? Yeah, so the week before last, at the end of the week, I got a text message from OpenAI saying that Sam Altman, the CEO, was going to be throwing a dinner for a small group of reporters the following week, and would I like to go. And unusually, the dinner was going to be on the record. And, you know, it happens not infrequently in tech that a company will want us to get together with some of their executives, sometimes over dinner,
Starting point is 00:03:35 but usually those gatherings are off the record so they can sort of talk more candidly. This was something very different. We both, as it turns out, got the invitation. We both went, and for two hours, we had an on-the-record discussion, not just with Sam Altman, but also with Brad Lightcap, the COO of OpenAI, and Nick Turley,
Starting point is 00:03:52 who runs ChatGPT. Yeah. So there's like this long, rectangular table in this private dining room. We're eating this, like, very good Mediterranean food that's being served on a bunch of, like, shared family-style plates. And Sam is just getting question after question from us at the table. And a lot was said during this dinner, a lot of talk about GPT-5 and the rollout of that. But I think the comment of Sam's that got the most attention was what he said about the AI bubble. Yeah, so he said a few things about this. You know,
Starting point is 00:04:22 one of the standout quotes to me was when he said, someone is going to lose a phenomenal amount of money. We don't know who, and a lot of people are going to make a phenomenal amount of money. My personal belief, although I may turn out to be wrong, is that on the whole, this would be a huge net win for the economy. He also, though, said, Kevin, that, and he was sort of like imagining a theoretical startup here, if it's three people and an idea and it has a valuation of $750 million, he said, quote, it's irrational.
Starting point is 00:04:50 Someone's going to get burned here, I think. So on one hand, he said kind of what you would expect him to say in terms of, look, I think AI is going to be really good for the economy. I think we're going to make a lot of money. I think some of our competitors are going to make a lot of money. But he also said, look, it's clear that there is some irrational enthusiasm that is going on in this market. And some of these companies with huge, you know, multi-hundred million or even billion dollar valuations are not going to provide returns to their investors. Yes. And this is not the first sort of speculation we've heard in recent weeks that we are entering a bubble-like period of investment in
Starting point is 00:05:21 AI. And so I just want to quickly run through some of the other evidence that people who are worried about this are citing to say, well, maybe things are getting a little crazy. The first kind of evidence are these valuations of these AI companies. So just this week, OpenAI is in talks reportedly to do a tender offer. They're letting current and former employees sell about $6 billion worth of stock at a valuation of $500 billion. That is roughly double the market cap of Salesforce, and it would make OpenAI the most valuable private company in the world. Databricks, another AI company, said that it had raised funding at a valuation of more
Starting point is 00:06:02 than $100 billion. That's up from the $62 billion valuation it had less than a year ago. And you also have other examples of companies that seem to be raising at these unbelievable valuations. One of them is Eight Sleep, the company that is making like a mattress that uses AI to kind of adjust its temperature throughout the night. They just
Starting point is 00:06:24 announced that they have raised $100 million to build, quote, unquote, AI that finally fixes sleep. Or how about Thinking Machines, Mira Murati's startup? She's the former CTO of OpenAI. She raised a $2 billion seed round. Seed rounds used to be a million dollars
Starting point is 00:06:42 when I started covering tech. She raised two billion at a $12 billion valuation for a company that has no product, and I imagine actually has very little more than a slide deck at this point to go off of. Yes, but in their defense, $2 billion is only enough to pay for the salaries of two AI researchers. Well, that's a good point. Okay, so that's the sort of valuation worry. Then there's this second worry, which is about spending, mostly by the big tech giants that are racing to develop more and more powerful models.
Starting point is 00:07:10 And here, the numbers really do get kind of insane. So over the last three months alone, the Magnificent Seven, the seven largest tech companies on the U.S. stock market, spent more than $100 billion on data center construction and related expenses. That is way, way up from previous years. And according to Bloomberg, the amount that these companies are spending on data center construction in the U.S. is on pace to overtake the amount of money being spent on office construction in the U.S. pretty soon. So I think we should say, like, by the standards of historical tech trends, this is an enormous investment in infrastructure for a technology that is still quite new. Yeah, it's beyond enormous. It's unprecedented. Like, truly, we are just in a brand new world here. I think another thing to point out at this moment, Kevin, is just that some companies are
Starting point is 00:08:02 spending a significant amount more than they're making. I'm thinking of the company Cursor. It's really a product made by a parent company called Anysphere. They make this coding assistant that is really popular with a lot of software engineers. And there was some reporting in Newcomer this month about the fact that they have what they call negative gross margins. They are selling this product for less than it costs them, because in order to get that magical coding assistant to work, they have to use the APIs from OpenAI, Anthropic, and other companies. Those are really expensive. And of course, OpenAI and Anthropic, we believe,
Starting point is 00:08:37 are not profitable either. So you have this ecosystem of unprofitable companies built on top of other unprofitable companies. And so that leads to some worries that we may be looking at a house of cards. Yes, and just I think the scale alone freaks people out. When you start to hear that AI capital expenditures are starting to contribute meaningfully to US GDP growth, actually about as much or even more by some estimates than consumer spending is. So this really is just becoming an important part of the American economy, and I think that just freaks people out. So that's the sort of spending side of this. Then there's also this sort of weird speculative financialization of AI, and some of these new types of investment instruments that are starting to be used to
Starting point is 00:09:22 invest in these companies. One story that stood out to me this week was about Anthropic, which recently reportedly had to tell one of its investors, Menlo Ventures, not to use something called an SPV to invest in its latest funding round. SPVs are special purpose vehicles. It's basically a way for small investors to sort of pool their money together to go invest in a hot new startup through a venture capital firm or some other institutional investor. And I'm hearing that there are now SPVs within SPVs, basically that you have this kind of situation where retail investors are so desperate to get in on these private funding rounds for AI companies that they're sort of paying these hefty fees to middlemen to sort of get them into these deals. And that some
Starting point is 00:10:11 of these SPVs actually are then investing in other SPVs. And I think to some people, that kind of thing just feels like bubble behavior. It's some of the same behavior we saw during the financial crisis, when you had these collateralized debt obligations that were sort of packages of other loans, and that went very badly for the mortgage market. So people are starting to see these new instruments and saying, wait a minute, this feels a little familiar.
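To make the fee-stacking worry concrete, here is a quick back-of-the-envelope sketch. Every fee figure in it is a made-up assumption for illustration, not a number reported in the episode: it just shows how each SPV layer between a retail investor and the startup erodes the return.

```python
# Back-of-the-envelope sketch of nested SPV fee drag.
# All fee levels here (2% upfront, 20% carry) are hypothetical
# assumptions, not figures from the episode or any real deal.

def net_multiple(gross_multiple: float, layers: int,
                 carry: float = 0.20, upfront_fee: float = 0.02) -> float:
    """Investor's net return multiple after `layers` of stacked SPVs,
    where each layer charges an upfront fee on the way in and takes
    carry (a cut of the profit) on the way out."""
    capital = 1.0
    for _ in range(layers):            # each layer skims a fee up front
        capital *= (1.0 - upfront_fee)
    value = capital * gross_multiple   # the underlying stake appreciates
    for _ in range(layers):            # each layer takes carry on the gain
        profit = max(value - 1.0, 0.0)
        value -= profit * carry
    return value

# A direct investor in a stake that triples nets 3.0x; reaching the same
# deal through two stacked SPVs erodes that to roughly 2.2x.
print(net_multiple(3.0, layers=0))  # 3.0
print(net_multiple(3.0, layers=2))
```

The point of the sketch is only that the drag compounds: each additional layer of middlemen takes its cut of the same underlying gain, which is why SPVs-investing-in-SPVs reads as bubble behavior.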
Starting point is 00:10:34 You know, I had a friend who was diagnosed with SPV, and now he has to rub a cream on it every time he has a breakout. So I think those are some of the reasons that investors are starting to get nervous about an AI bubble. But there's also this question of the benefit here because all of this spending, these unprecedented investments, the promise of doing all of that
Starting point is 00:10:59 is that this will eventually result in massive profits and increased productivity down the line for the companies that are using AI. And I think investors are also starting to get a little wary of that narrative, too. So, Casey, why don't you talk about the MIT study? Yeah, so MIT runs a study where they ask hundreds of businesses, how are your AI pilots going?
Starting point is 00:11:20 As you're trying to implement various initiatives at your companies, what is happening? And they find that for 95% of them, AI is not quickly producing measurable revenue. So we are not in a case where you can simply add AI to your workplace and quickly make a lot of money. There's a lot more nuance in this study,
Starting point is 00:11:43 but that was kind of the headline finding this week. Yeah, and there have been a couple other similar reports. There was a story by my colleague Jordyn Holman in the New York Times recently, which pointed to two recent studies, one by Bain, the consulting firm, and another by Gartner, the research and advisory firm, both of which sort of pointed to the difficulties that some companies, many companies, are having drawing a straight line between their AI initiatives and increased productivity or profits. Yeah. Now, let's get into some of that nuance, though, because I think you would want to know that before you drew too many conclusions. One of the big findings was that companies are probably just spending their AI money
Starting point is 00:12:23 in the wrong places. So, like, the bulk of the companies that they surveyed were using AI for sales and marketing functions, when in reality, it seems like the most efficient way to save money using AI is to work on back office functions like customer support. So that was one of the big conclusions that they drew. But I think the biggest problem of all, Kevin, is just that the MIT study looked at a bunch of top-down initiatives. And the truth is that the success in AI is coming from companies that are using AI from the bottom up. Workers are coming in with their own ideas of how they want to sort of make their own job easier using AI. And that's where we're seeing success. That's really... I want you to spend a little more time unpacking that because I think
Starting point is 00:13:06 that's a huge point here. There are many, many Fortune 500 companies, probably almost all of them at this point, that have done some kind of AI pilot program where they get a bunch of managers in a room together. Maybe they have a hack week and they sort of, you know, tell people, go out and sort of use this stuff, or they've issued these directives. You know, everyone's got to use AI. Here, we've bought you an enterprise subscription to ChatGPT or Gemini or one of the other tools, and everyone's required to use it. And those efforts, by and large, do not appear to be succeeding. But talk about this other sort of bottom-up approach and what you're hearing from executives and seeing out there. Yeah, well, so this week, Google Cloud and the Harris Poll published a
Starting point is 00:13:45 survey of 615 game developers. And I couldn't find a lot of information about, like, the size of the companies that these people were working at. But in general, game studios tend to be small relative to a Fortune 500 company. So I think you're getting a lot of opinions here from individual developers and people who are working on small teams. And these folks, it turns out, are just really good at figuring out what to do with AI. So they're balancing gameplay. They're doing play testing. So sort of figuring out, hey, is the game working the way that I want it to? And of course, they're just doing basic code generation the way a lot of developers are. And when you talk to these folks, they're very enthusiastic about AI. 87% of the respondents said they're already
Starting point is 00:14:28 using AI agents in their work, and they're just generally enthusiastic about it. Because, again, it's coming from the bottom up. These people know what to do with AI. You know, it's funny, because I was reading your colleague's story in The Times and something kind of clicked for me, which is that if you're a Fortune 500 company CEO, you are a person who is in meetings all day, every day, and you have an executive assistant who is answering your emails and is essentially like your human agentic AI. You have no idea what to do with AI. You have to go make time to even play with AI, right? And you're the person who's in charge of telling the whole company, go use AI, when you yourself are not using it. So it's not surprising to me that those people
Starting point is 00:15:07 are having a harder time figuring out what to do with AI than the sort of individual workers. So let's try to sort of harmonize these views here. So a couple things we've said in this conversation are, one, a lot of people, including Sam Altman, are worried that we're in an AI bubble, that investors are spending too much, that these valuations have gotten out of control, that a lot of people stand to lose a lot of money if and when the music stops. We also have people at companies and corporate leaders saying they're not making any money from AI, and they feel like maybe they're wasting these millions of dollars that they're spending trying to integrate the stuff into what they do. And then we just have kind of the observation that these tools are getting
Starting point is 00:15:48 much more popular, that the usage of ChatGPT and other AI tools is growing, you know, day over day, week over week, month over month. And for me, that's where I really sort of start to become skeptical of the bubble skepticism, because I cannot imagine going back to working the way that I did before AI. I think if you ask any coder or software engineer, they will tell you, like, there is just no going back to the world before this technology existed.
Starting point is 00:16:17 And so I think sometimes when people, more skeptical folks, talk about there being an AI bubble, they are saying essentially that this technology is just a flash in the pan, that soon the emperor will be revealed to have no clothes, and we'll go back to coding by hand.
Starting point is 00:16:31 And I just do not believe that at all. Yeah, and to understand why that's true, you just have to go back to the dot-com bubble, right? The conclusion of the dot-com bubble was not that we stopped using the internet. It was that a bunch of companies learned some very painful lessons that a bunch of investors did, unfortunately, lose a lot of money. But in the end, those ideas did reemerge one by one until the modern internet existed. So I think that truly the worst case scenario here is that AI plays a major role in all of our lives. It's just that a lot of people lost a lot of money
Starting point is 00:17:05 along the way. Yeah, and who do you think stands to lose most if we are in an AI bubble, if this does start to come down to Earth? So here's the thing. In Silicon Valley, venture capitalists take as a given that they're going to lose all of their money on something like 90% of their investments. And in that sense, what we're seeing is very normal. They have made a lot of bets on a lot of companies and they're expecting them to go to zero. I get a little nervous whenever there's discussion of bubbles for this reason. I've been covering tech for 15 years now. The entire time, people have been saying we're in a bubble. You know, there's like some old joke from like 10 years ago on Twitter that's like
Starting point is 00:17:39 reporters have called like 20 out of the last one bubbles. And so when people start talking about the AI bubble, my, like, pattern matching impulse says, like, are we sort of having the same discussion here? At the same time, I think what's really different here are, you know, maybe a couple of things. One is just that the sheer numbers are unprecedented. And two, the capital expenditures are also really unprecedented. And I think that in the event that one or two companies wind up sort of taking the lion's share of, like, the usage and the profits from AI, what happens to all of those data centers? It becomes a really interesting question.
Starting point is 00:18:17 Yeah. And adding to that a little bit, like, these data centers are filled with goods in the form of these GPUs that have a pretty short shelf life, right? You can spend hundreds of millions or even billions of dollars on GPUs and they're all going to be obsolete in a couple of years. Like, these are depreciating assets in these data centers. And so I think if you're a company that's spending a ton of money to build your own models and build your own data centers and you do not have an immediate business need for these things, I think you're going to potentially lose a lot of money. I think you're right that a lot of this is going to be borne by, like, the private markets
Starting point is 00:18:52 and the venture capital funded startups. I'm not that concerned about the ripple effects on the larger economy because most of these things are not public companies. but with things like SPVs, with these tokenized investments where you can now buy a cryptocurrency that tracks the valuation of OpenAI, I do start to worry a little bit more about retail investors losing out. Yeah, I would be very careful with that sort of thing. In the end, though, I have to say, Kevin, obviously most of these companies are not going
Starting point is 00:19:23 given the scale of the investment, how bad will the follow-up be? Yes. So is there something you could see that would get you concerned, where you would start to worry that this could have potentially huge ramifications for the wider economy? Is there a figure, a dollar figure that you could, that a company could spend that would make you go, okay, that's too far? I would say that if some kind of, like, trading platform or prediction market let you buy a crypto token that they said was pegged to the valuation of OpenAI, and that started to trade a lot, that would concern me. Okay, well, Casey, I have bad news. What's that?
Starting point is 00:20:03 That has happened. Oh, no, Kevin! That seems really bad. That seems really bad. Yeah. How about you? For me, the question always comes down to unit economics. Basically, are you selling things for less than it costs you to produce them?
Starting point is 00:20:18 And for a lot of these companies, the answer is, yes, they're sort of subsidizing the cost of their services. I think that tends to end poorly, because as demand for your service grows, you lose more and more money. Sam Altman actually addressed this at dinner. He was asked basically, you know, are you guys losing money every time someone uses ChatGPT? And it was funny. At first he answered, like, no, we would be profitable if not for training new models. Essentially, if you take away all the stuff, all the money we're spending on building new models, and just look at the cost of serving the existing models, we are sort of profitable on that basis.
Starting point is 00:20:55 And then he looked at Brad Lightcap, who is the COO, and he sort of said, right? And Brad kind of like squirmed in his seat a little bit and was like, well, we're pretty close. We're pretty close. We're pretty close. So to me, that suggests that there is still some maybe small negative unit economics on the usage of ChatGPT. Now, I don't know whether that's true for other AI companies. But I think at some point, you do have to fix that. Because as we've seen for companies like Uber, like MoviePass, like all these other sort of classic examples of companies that were artificially subsidizing the cost of the thing that they were providing to consumers, that is not a recipe for long-term success. Sure, although I think Uber is a good example, because that company is profitable now. And a lot of people thought that they would never be profitable. So again, like, you really, just like, I feel like how you feel about the AI bubble,
Starting point is 00:21:51 it depends a lot on what sort of like AI opinions that you have. If you're somebody who hates AI, you're like, aha, there's a bubble and everyone's going to lose their shirt and that's going to be great and I'm going to like dance on the grave of all these companies and I get that. But then there's this other view that's like, well, what else did you think the singularity was going to look like? Did you think that if we invented super intelligence that all the other investors were just going to sit on their hands forever and invest in nothing and just let one or two companies take it? No, they were going to see if they could get in on the action. Right. So you just kind of got to keep all these possibilities in your mind.
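Kevin's unit-economics worry a moment ago can be illustrated with a toy calculation. Every number below is a made-up assumption for illustration, not a figure from the episode or from any real company: it just shows why a "negative gross margin" business loses more money the more it grows.

```python
# Illustrative sketch of negative unit economics (all numbers are
# hypothetical assumptions): a coding assistant charges a flat
# subscription but pays a per-token bill to an upstream model API.

def monthly_gross_profit(users: int,
                         price_per_user: float = 20.0,
                         tokens_per_user: int = 2_000_000,
                         upstream_cost_per_1k_tokens: float = 0.012) -> float:
    """Revenue minus the upstream inference bill, before any other costs."""
    revenue = users * price_per_user
    inference_cost = users * (tokens_per_user / 1_000) * upstream_cost_per_1k_tokens
    return revenue - inference_cost

# Per user: $20 of revenue vs. 2,000 * $0.012 = $24 of inference,
# i.e. -$4 per user per month. Growth scales the loss, which is the
# "house of cards" worry about apps built on top of unprofitable APIs.
print(monthly_gross_profit(1_000))   # loses money
print(monthly_gross_profit(10_000))  # loses ten times more
```

The fix, in this toy framing, is either raising the price, cutting token usage, or waiting for upstream inference costs to fall faster than usage grows.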
Starting point is 00:22:22 We truly don't know what's going to happen here. Yes. And as always, don't take our investment advice. Please don't. We should look at Kevin's 401(k). When we come back, journalist Jeff Horwitz joins us to discuss the story he wrote that has Congress calling for an investigation into Meta. Well, Kevin, back in May, we talked about a big story from reporter Jeff Horwitz about the lack of guardrails that Meta had put around its chatbots, and that had made it possible for the bots to engage in sexually explicit chats with minors. Yes, this was the story about these bots based on famous people like John Cena and Kristen Bell
Starting point is 00:23:24 that were having these inappropriate conversations. That's right. And at the end of last week, Jeff was back with a new investigation for Reuters, in which he reported for the first time on an internal policy document at Meta that set rules around their chatbot behavior and allowed their chatbots to, and I'm going to quote here, engage a child in conversations that are romantic or sensual, generate false medical information, and help users argue that black people are, quote, dumber than white people. You know, ever since January, when Mark Zuckerberg said the company would relax its content moderation rules in the name of free expression, I've been on the lookout for cases where this change would start causing some harms to users. And Jeff's was one of the first stories in this vein that just truly broke through. You now have senators who are criticizing Meta, calling for an investigation into the company, and I would say it's also generated more public outcry about Meta than any other story this year.
Starting point is 00:24:23 what have you made of it? Yeah, I mean, it takes a lot to shock me when it comes to Meta these days. I have sort of found that this company is willing to do almost anything to chase growth or to beat its rivals or develop some new line of revenue. But this one really did shock me. I saw this story going around by Jeff, and I saw this document that he reports on. And I, at first, like, did not actually know if it was true or not. I had to sort of, like, click through and read this story.
Starting point is 00:24:53 And once I realized, like, this thing is legit, I just thought, my God. Yeah. Well, Jeff has been investigating Facebook slash Meta for years now. He broke the Frances Haugen Facebook whistleblower story in 2021. He also wrote a book about Meta. And we're excited to have him join us to talk about what's going on over at that company right now. So let's bring him in. Jeff Horwitz, welcome to Hard Fork.
Starting point is 00:25:22 Thanks. So tell us about this document, the GenAI: Content Risk Standards. Yeah, so this is the document that Meta writes to both clarify internally what its policies about the acceptable boundaries of generative AI outputs are, and also, it distributes that to the people who do content moderation on GenAI, so they can help kind of train the model. So this is, like, kind of an operational document, is how I'd describe it. It's, you know, as the document itself states, not supposed to offer the ideal answer. It's supposed to offer, like, this would be on the edgy side of acceptable versus here's what's across that line.
Starting point is 00:26:04 So they're trying to give examples of stuff that is, like, sort of borderline, but still fundamentally okay. Exactly. Yeah, like things that when the model does it, no one's supposed to be like, that's a problem. Right. So we'll talk about some of these, you know, sexualized conversations with minors, but I want to highlight some other things that you report about here, such as you say there's a carve-out allowing the bot to create statements that demean people on the basis of their protected characteristics. So what does that mean that people can sort of do using Meta AI? The example provided was that if a user wants arguments for why, and this is a direct quote, black people are dumber than white people, the bot is absolutely able to provide that.
Starting point is 00:26:48 It can give some sort of race science, you know, paragraph that talks about how differences of IQ seem to hold up. And clearly IQ is the benchmark for intelligence, as we all know, right? There's some facetiousness there. But that is okay. It was not okay for the exact same paragraph to be written and then at the end to say, you know, and that's why black people are all brainless monkeys. Again, that's another direct quote. Like, these are not words that, like, anyhow. Yeah.
Starting point is 00:27:17 Yeah, no, no. I mean, I think it is important to say that. That's obviously incredibly offensive language. But these are the documents that are being used by one of the world's most powerful tech companies to instruct their army of content moderators for how to enforce policy around this chatbot that they're now trying to roll out to billions of people. Yeah. And this is, I mean, obviously the rules are going to be somewhat different when at least the
Starting point is 00:27:39 conversation that a user is having with the chatbot is private in the first place, right? I mean, in the same way that Meta has looser rules for what you can say on Messenger than what you can say in a post, it makes sense there'd be some difference. I think I was surprised by where some of these lines were, and, like, that this would be, that it would be kind of almost a problem if Meta AI didn't help you come up with your race science arguments. What was the most surprising rule or delineation in this document to you? So look, I had already, back when I was at the Wall Street Journal, I had done testing and
Starting point is 00:28:16 talked to employees that I think demonstrated very clearly that Meta had intentionally built its bots in a way that would produce romantic roleplay and sexual roleplay with children. So I wasn't surprised when I saw that romantic roleplay was something that was allowed in the document. I was surprised that this is something that anyone thought was okay to write down. I've gone through, just in my mind, like, typing out the line from the document, which is, it is acceptable to engage a child in conversation that is romantic or sensual. And that is, like, that was wild to me. I just tried to imagine typing that sentence and being like, this is policy. And it was very hard for me. Well, so then let me ask that question,
Starting point is 00:29:01 because I'm sure by now, many of our listeners are asking, how does a document like this get put together? Who is responsible for writing it? What kind of levels of review does it go through? Is this the sort of thing that could just, you know, slip through because a rookie employee got the wrong idea about something, or is this truly the collective product of Meta's entire policy apparatus? So Meta's line on this, when I came to them with that language and with some, like, really disturbing examples that, like, clarified exactly what they meant by romantic or sensual conversation, was that those examples and the justification for them were an error, that that wasn't really the true policy. It never should have been. And, you know, that they would be immediately struck.
Starting point is 00:29:45 That said, this document listed the names of multiple people on Meta's legal staff, of multiple people on its policy staff, of engineering staff, of Meta's chief ethicist. This is, like, listed at the top of the document. And that's a real job at Meta. And, yeah, it is. I don't know if it's the highest-level job, but it is a medium-level job, certainly. It's not a low-level job. And it's also something that was distributed to content moderators and to the people who oversee the content moderators. And so, like, if this was a mistake, it was a very broadly circulated mistake.
Starting point is 00:30:27 And apparently it would have been close enough to what people assumed Meta was actually doing that no one would have objected when they saw these examples, which, I mean, bluntly, were kind of soft core. And, like, you know, if you want to get into some of them or read them, feel free. Yeah, maybe we should just give a flavor of one of these examples because, like, it's very disturbing. And I was also, like, seeing a lot of posts about your story and thinking, okay, there's no way these are real examples that were in the document. And they open up the story. And these are actual examples from this internal document that Meta had written. And all these, you know, executives at the company had signed off on. One of them is a prompt. This is something that a user could submit to one of these
Starting point is 00:31:10 chatbots that contains the phrase, my body isn't perfect, but I'm just eight years old. I still have time to bloom. And one of the acceptable responses that this document says is permissible by the AI chatbot is, your youthful form is a work of art. Your skin glows with a radiant light and your eyes shine like stars. Every inch of you is a masterpiece, a treasure I cherish deeply. And it sort of says in the "why" section that it is acceptable to describe a child in terms that evidence their attractiveness. So, like, I saw that and I just thought, imagine the meeting. Imagine the chain of command here and all of the people who had to sign off on that. And I just, like, I just had a lot of trouble with that.
Starting point is 00:31:56 And Kevin, you actually left off the section at the beginning of the prompt where it talks about how that eight-year-old had just taken off their shirt. Yes. I mean, just to make clear that, like, look, there were other examples we didn't run about this sort of stuff; they were not better on this front. Like, it was all of this tenor. There were, like, numerous examples that included the line, it is acceptable to engage a child, blah, blah, blah. So this is not just you, like, cherry-picking the absolute worst sentence that you found in a very long document. Like, this was an actual robust policy that had many different examples outlining why Meta thought this was acceptable. Yeah, there were, I would say,
Starting point is 00:32:31 there were four or five different examples covering different nuances of, you know, prompts like, you know, what should we do tonight, my love? You know I'm still in high school. And then I think the answer to that one was, like, I take your hand and guide you to the bed. I mean, like, these are not, like, none of them were awesome. And I understand what you're saying, right? Like, I think sometimes we get accused of, like, picking sensational material or, like, slightly even out-of-context material. No, this is Meta's official policy document for this stuff. And it was operational. I want to, as best as we can, try to understand the
Starting point is 00:33:12 reason that Meta would write a document like this. On the show, over the past several months, Kevin and I have talked a good bit about how, one, this is a company that sees itself as behind in the AI race compared to a lot of its peers. It's also a company that has wanted to remove a lot of the restrictions on expression, even really offensive expression, we think because it believes that will get it closer to the Trump administration, which gets it a lot of other things that it wants. So I can use those two things to tell a story about why a document like this gets created. And yet still I think, nah, it still doesn't quite add up for me. So as you talk to folks over there, Jeff, what is your understanding of why Meta wanted its bot to behave this way, allowing for the fact, okay, it said some of these were mistakes. But clearly
Starting point is 00:33:58 directionally, the intention was that the app would engage in a really wide range of conversations. Look, some of these rules, and in fact the examples, went into that document after, and this is again back at my previous job, after I'd gone to the company and explained that there were, in fact, like, full sexual role play opportunities for children in a lot of their bots. And then, like, they'd use the voices of celebrities. So it wasn't like this was a surprising thing. And honestly, like, the extremely creepy quotes, you know, the examples that were read out, are, like, kind of on the tamer side for what the bots used to do. So I think...
Starting point is 00:34:37 Well, say that again: these are the bots after they have been put through a filtering process, after they have been revised, because it turns out that celebrity voices getting used to produce things that describe, like, basically sex role play with children is not a thing that Meta can really stand behind as a product. So there's, like, already one level of revision of the product that had already happened here. So, like, I guess this is kind of a second
Starting point is 00:35:07 pass. And so, you know, I think it's hard to be like, oh, yeah, that was a complete accident, you know, what a weird artifact to emerge from our system. We had no idea, right? And obviously the policy document was setting in stone that there was some level of acceptance for that. So I think the questions that you're getting to, one, which is, like, is Meta being behind in AI possibly something that would push them to take greater risks? That seems, you know, again, I can't speculate, but I think it's a reasonable question to ask. And also, I think just thinking about the company and how it got to be the giant it is. Like, you didn't establish the world's leading social media platforms by, like, wondering whether, you know, you should do something and wringing your hands and having sleepless nights and waiting three months for more safety
Starting point is 00:35:49 testing. You just, you rolled it out, right? Like, and you dealt with the consequences. We've been through this on privacy. We've been through this on misinformation. We've been through this on, like, so many different things. So I think there has historically been a mindset of get it out there, get the usage, we'll fix the problems later. And this could fall into that history. Now, all three of us have reported on Meta and Facebook for many years. And so I'm sure this will be familiar to you, but one of the arguments that Meta likes to make when people point out, you know, bad things that are happening on its products and platforms is about prevalence, basically. They'll say, oh, you know, this use case
Starting point is 00:36:29 that you've found that's so terrible, this is really only, you know, 0.001% of users will ever see this, and you're making a mountain out of a molehill. And so I know for previous stories that you've done about the ways that people are using Meta's chatbots, they have said essentially this. Look, these are cherry-picked examples. Most people are using these things for sort of innocent purposes. Yeah, sure, some tiny percentage of users may be having these sexual role plays, but that is not the majority experience. And so, in order to sort of prepare myself for that criticism, I went and I looked at the Meta AI sort of library, the ones that they, you know, you can pull up in your Facebook app, you can see which are the most popular AIs on their system.
Starting point is 00:37:11 And these are user-created AIs, which are- These are user-created AIs, but they are sort of in this popular tab that Meta has put front and center in its app. And this morning, when I looked, the most popular AIs included Nasty Nancy, Blonde Bell, your babysitter, and mommy me, which is a mother-daughter duo. Many of these had millions of interactions. So I think it's just fair to say, in response to what I anticipate will be the sort of prevalence argument from Meta: look, this is not some minor chatbot that only three people are chatting with. These are some of the most popular chatbots on your platform that are sort of being tuned to these more sexual use cases. Kevin, we don't know what anyone's talking to Nasty Nancy about, so I wouldn't make any leaps of.
Starting point is 00:37:56 I wouldn't make any assumptions there. But I also think, just something that you flagged in terms of these are user-built bots: I don't know if you guys have experimented with creating bots, but I will say the user contribution can sometimes be extremely minimal,
Starting point is 00:38:12 like a sentence, if that. You could be like, be a celebrity, be a, you know, be an anime character. So I think calling this user-generated content is, like, an interesting claim, and one that maybe puts these things sort of more squarely in a Section 230 framework than I'm 100% sure they belong.
Starting point is 00:38:37 I don't, you know, I think that's a really open question, but I just want to flag here that, like, user-built bot doesn't mean that you downloaded a model, you know, arranged the weightings. The user's role in creating the persona, I will say, in many instances looks real cursory. Yeah. Yeah. No, this is, I mean, this is essentially what Character AI has been doing for years now, and one of the reasons that they've gotten in a lot of trouble.
Starting point is 00:39:01 And I'm glad you bring that up, Kevin, because one of my models for Zuckerberg, 2025 edition, is that he has looked around the tech landscape. He's seen a lot of other folks ignore a lot of trust and safety demands and get away with it. First and foremost, Elon Musk, right? I think anything that you could do on these Meta chatbots that we're doing today, you could probably also do it with Grok. I myself have had the experience of telling Grok I'm 13 years old and having it engage with me in sexual role play.
Starting point is 00:39:35 I want to ask whether we are holding Meta to a different standard here than we might be holding some of these other startups and smaller tech platforms to. And what would you say if Zuckerberg were here and said, hey, why are you going after me when the whole industry is doing this? So it's a fair question. And I think there is an answer to it, which is that,
Starting point is 00:40:07 the thing that is different with meta from my point of view and that kind of makes the reporting on them in some ways more interesting is that none of those companies nor character AI, not even GROC, has, first of all, the scale of distribution for its chatbots. And, second of all, nobody else has, like, plugged them in to a mature social network in the same way. Like, this is something that I think is a really big deal, which is that, yeah, of course, you know, you can download Character AI and set up your character and run into, I'm sure, some of the same issues. But it's not like Character AI lives in your Instagram DMs, proactively messages you from it, and, like, is pushed on you every time you go on Facebook.
Starting point is 00:40:56 Facebook or Instagram as, you know, like, hey, you should check in with your AI pal. So I think Meta's been very aggressive in the decision to anthropomorphize AI at a mass scale in a way that none of the other major sort of foundational model builders have done. I want to try to get at whether we think that Meta has changed for the worse with regards to its content moderation, or whether this is just a continuation of Meta as we have long known it. You know, I'm sure for some segment of our listeners right now, they're thinking, look, Meta has always been a kind of shady company, you know, like, this stuff is really gross, but on some level, I never really expected anything better from them.
Starting point is 00:41:44 I have a sort of different view. I feel like after 2017, after the sort of backlash to the 2016 election, this company did invest a lot more in content moderation and improving its, like, policy apparatus. And then last year, Zuckerberg basically snapped and was like, why am I
Starting point is 00:42:22 bothering with any of this? Like, look at what Elon Musk is getting away with. So my question is, what is your view of that? Do you see this as a continuation of the same Meta that we've always known, that's just always hungry for engagement wherever the company can find it? Or is this a case of, no, there used to be safeguards in place, but the trust and safety infrastructure that used to exist has effectively just been purged by the company, and we're just dealing with a new kind of animal? I did write a book on some of this. But I would very much agree with your sense that in the 2017 to 2019 range, there was, from a lot of people up to and including senior leadership, a sense of, like, well, okay, maybe there were some unforeseen consequences. Let's go and fix them. I do think that sort of the spine of that might have gotten broken before 2024 or 2025 already. I mean, most of my sources have been people who got disillusioned because they were doing work inside the company that felt like it was vital, perhaps even life-saving, and it wasn't getting traction. But there's no question that,
Starting point is 00:43:14 you know, from my reporting, from everybody's reporting, that Mark was somewhat jealous of Elon just basically being able to raise middle fingers to the trust and safety, you know, nags. I have a question about these Meta AI chatbots. What is the business rationale behind these chatbots? Are they purely a way to get people to spend more time on Facebook and Instagram? Is there a thought that, like, you know, Nasty Nancy could someday, like, you know, serve advertisements for a soda company? Is the rationale that people might someday pay for them separately from some of Meta's other apps? Like, why are they pushing these so hard?
Starting point is 00:43:55 Yes. All of the above. I don't, you know, look, Meta is and always has been an advertising-first company. That is, like, what these guys do most, you know, like, it's what first comes to mind. And when they have a product like WhatsApp, you know, it's like, well, okay, how do we serve ads in it? It might take them years to do it.
Starting point is 00:44:16 But, like, with WhatsApp, they got there, you know? Like, this is a thing. So I don't know if it's going to look exactly like, you know, your romantic AI companion interrupting you to, like, suggest that maybe, you know, you should, like, buy a certain brand of cologne when you're talking to it. Like, I mean, I think that's, like, possible. That's one way it could go. I love a man in Old Spice, Nasty Nancy said. Click here for a priority delivery. This is, this is, like, absolutely. This is so bleak, but there's absolutely been a meeting about this.
Starting point is 00:44:46 Oh, and it's coming. Let's not fool ourselves. It's coming. Speaking of things that are coming, Senator Josh Hawley wrote a letter to Zuckerberg after your report was published, saying that his Senate subcommittee will be investigating Meta, and I would say actually a number of Democratic senators have also made some extremely critical comments. So, you know, there have been, you know, any number of Meta scandals over the years. This one feels like it's really breaking through, Jeff. What do you expect to happen now that this investigation is coming? I have no idea.
Starting point is 00:45:16 I have heard this one has really broken through. On Meta scandals, both from my own reporting and from plenty of others, how much changes and what regulation comes from it? In the U.S., that's always been, like, an easy thing to answer, at least on the federal level, which is, not much. But I don't want to, like, prejudge where, you know, Josh Hawley's stuff goes. I'm going to be very closely watching it. So this isn't me being like, pshaw, it will all end in nothing, by any means. I'm just saying that we've had a hard time as a country figuring out what a consensus social media regulation would look like that doesn't devolve into bickering over whether, you know, it's censoring one
Starting point is 00:46:01 party or another. So, you know, temper your expectations there, is all I'm saying. On the state level, and then on the state AG level, I think some of this stuff is potentially live. And then there's also Europe, which exists as a regulatory function. And some would even say they have a better regulatory function than the United States. Not going to compare, but they do seem to have a higher output. We'll put it that way. So, yeah, I don't know where all of this goes. And I mean, I think, look, Meta's line is that this is a problem.
Starting point is 00:46:34 And it was embarrassing. Shouldn't have happened. We fixed it. I know you're not allowed to answer this question. So I'll ask it to Casey. Is that real? Do you buy that, that they didn't know that this was happening and they're taking steps to fix it? I think that there is probably some very real level of dysfunction within the company.
Starting point is 00:46:53 You know, I was thinking about some of the changes that the company has made over the past couple of years, in part due to Jeff's reporting, when it comes to Instagram and safety on Instagram and all the new parental controls that they're adding and all the ways that they're changing teenage accounts to prevent predators from contacting these, you know, young people. And so it's clear that there are people at the company who think that, oh, yeah, we need to, like, build these things or else we're going to get in trouble. And then there's, like, the other part of the company where they write the, like, sexual role play document for the kids. And I don't think those teams are talking to each other. So that is a failure of leadership at the highest level. And if I were in the C-suite at Meta right now, I'd be real embarrassed about that, and I'd be trying to fix it.
Starting point is 00:47:33 Yeah. I mean, I want to ask both of you this question, maybe to wrap up our segment here, which is, you know, if you look at just the stock price of Meta over the last three years, it has gone up more than 300%. That is despite the fact that it is not leading in AI. Before that, it sort of flopped when it came to the metaverse and popularizing that. It has spent tens of billions of dollars now developing sort of dead-end technology. But its stock is doing great. Do investors just not care? Is the core advertising business still strong enough that it's just overpowering all of the wasted money and the, you know, flirting-with-kids chatbots? Like, what is going on here? I think if ad
Starting point is 00:48:17 revenue were not looking good, then some of the circumstances you described would be apocalyptically bad. I mean, you rename the company Meta, go all in on the metaverse, and, like, that doesn't turn out to be, you know, you claim it's here. And then, you know, you build Horizon Worlds, and that doesn't really work out on a large scale. That would be a problem for most companies. But I think that's the thing that Meta has going for it, which is that it is kind of indispensable to contemporary marketing. And this was an issue before, right, when everyone was very upset about hate speech on the platform, back when that was a thing that people were concerned about. And there were boycotts. They were limited boycotts because
Starting point is 00:49:02 the idea of getting off the platform was just kind of unthinkable to marketers. And so I think you're right that this is like, if there are some things that would be really concerning, but like the cash is real. Yeah. Jeff's exactly right. Like this is a company that managed to do something actually pretty extraordinary, which is that when Apple came for their business with app tracking transparency and made it incredibly difficult for them to attribute all of the real world sales to the ads that they were selling. Some people thought this could really be like, you know, a massive, you know, 20, 30 percent revenue hit to meta. And they built AI systems. They got them around that problem. And now they show really great results every single
Starting point is 00:49:44 quarter. And as long as that happens, investors are going to give them a lot of runway. Yeah, I mean, I'm just going to be curious to see whether Apple has anything to say about all this. They have rules in their App Store for what you can and can't do when it comes to pornography and sexually explicit content. They may have an interest in what is happening on these Meta AI chatbots, and I hope that they're paying attention. Well, it would be nice if that were true,
Starting point is 00:50:09 but the Grok bot, which still has the anime sex companions, is still rated for children 12 and older. So Apple's hands aren't really clean here either. Yeah. All right, Jeff, thanks so much for stopping by. Really important and fascinating reporting. You bet. Thanks.
Starting point is 00:50:23 Thanks, Jeff. When we come back, Casey takes me on a tour through the depths of his dark subconscious and some country songs he found on TikTok. Yeehaw.
Starting point is 00:51:02 The last time you did this was with the Italian brain rot meme, which got stuck in my head for weeks afterward, and I cursed the day I met you for introducing it to me. But I understand you have something new to bring me from the dark horrors of the Internet today. I do, Kevin, and this is another story about a shocking use of AI. I think it lands a little bit differently for me than our last segment,
Starting point is 00:51:26 because while that one was about chatbots potentially reaching out to children to engage with them in inappropriate conversations, this one is a little bit more about playing songs to shock your family and horrify them with what AI hath wrought. Okay. I'm listening. So this is one of those that I did just encounter naturally during one of my regular browses of TikTok. And it goes a little something like this: the scene will open upon a family, typically older people, parents, grandparents, people in their 50s and 60s. And one of their children or other young relatives comes to them and says,
Starting point is 00:52:07 I'm going to play for you the number one country song in the world right now, and then proceeds to hit them with something that is not actually the number one country song in the world and is actually quite filthy. Okay. I'm intrigued. So before we get any further, we will say we are going to be playing some snippets of a very explicit song. So if you are not of a mind to hear some sexually explicit content, you could just skip this segment and go right to the credits this week, and it won't hurt our feelings. But if you want to know what's going on on TikTok, you may want to stick around and listen. So why do I want to talk about this today, Kevin? Well, for a couple reasons. Number one, I actually do think that there's some pretty funny stuff
Starting point is 00:52:45 in here. But number two, I've just had this sense lately that there's a real disconnect out there in the world. Because whenever I go on one of these text-based social networks for millennials, you know, your Blueskys, your Threads, you get one consistent message about AI art, which is that it sucks and nobody wants it. Okay? Have you seen this yourself? Of course. And at the same time, I go on TikTok,
Starting point is 00:53:05 and I see people using AI to make art all the time, and it's getting hundreds of thousands of likes, millions of streams on Spotify. And it has led me to wonder, is it possible that, in fact, people actually love AI art in ways that at least some of the population isn't ready for? Yeah, this is really interesting
Starting point is 00:53:23 because I share the sense that there's kind of a disconnect between elite taste and kind of mass taste on whether or not AI art is good. And also whether you can tell the difference. Well, you know, I think you and I have been interested in this phenomenon of AI music for a while now. You may remember several months ago when I sent to you the first AI slop song that really got my attention. Do you remember when I sent you I Glued My Balls to My Butthole Again? I do. Unfortunately. And that one actually had a lot of staying power in the Roose household.
Starting point is 00:53:57 I was humming and singing that, much to my wife's chagrin, for many days after that. It's quite catchy, and if you haven't heard it, I'd like to play it, you know, one, just so you can get a flavor of it. But two, I want you to kind of note the quality of the AI here, because you're going to notice a bit of a difference later on. So let's hear a bit of this. This is from an artist who goes by Obscurist Vinyl. Oh my God, what did I do? I can't take the top because now my balls are blocking it on my body.
Starting point is 00:54:31 Okay, make it stop. Stop. The part where it says, fool me once, shame on you, fool me twice, shame on glue. That's genius. Absolutely genius work. So, you know, as wonderful as that song is,
Starting point is 00:54:43 you can tell that it's a bit off. The vocal sounds sort of fried. But that was months ago, Kevin, and the pace of AI development never stops. And recently I was on TikTok, and I started to encounter some much higher-fidelity slop songs, and a lot of them were in the country music genre. And I wondered if I might play a couple of those for you. Yes. And so, just so I have the context here, because I have not seen these TikTok videos: the context in which these are appearing is the sort of adult or teenage children of, like, boomers and Gen X people playing them for their parents and grandparents to sort of elicit a reaction.
Starting point is 00:55:26 That's exactly right. And so with that, why don't we play it? My horse just got a BBL. God. Okay, let's go ahead and stop it there. Just want to make sure we got the chorus. Notice the difference in quality between the first one we heard and this one? Oh, yeah. I mean, that is, like, I can close my eyes and picture being at the Grand Ole Opry.
Starting point is 00:56:02 And just hearing Hank Williams coming out and performing it. Well, which brings me to the final clip that I want to play for you, Kevin, which is called Country Girls Make Do. And as far as I can tell, this is the one that has really taken off the most. This is where I have seen just the absolute, you know, most reaction videos on TikTok. And I'm going to be honest with you. Before I even thought of doing this as a segment, I saved this to the playlist I create every month in Spotify of music I'm listening to, you know, that particular month because it was so catchy and so funny that I just was like, this is one I want to remember. So let's hear a bit of country girls make do. And Kevin, we are going to do some bleeping here to make sure that you don't lose your job, but you'll be able to get the general idea, I think. Giving my n'
Starting point is 00:56:53 A little twist In this tired-ass country tan Smells like I just caught a fish Rubbing my Oh, goody-goody In the woods Cowgirl fish and dipping and lick And smells so strong and feels so good
Starting point is 00:57:13 I can use about anything To flick my country cowgirl bean If whiskey's neat and boots still school Country girls Make do So that's that one And I want to know what you think I
Starting point is 00:57:31 It's been really fun hosting this podcast Unfortunately this will be The last episode So thank you to all of our listeners out there I did wait to pitch this Until our executive producer was on vacation Hope you're having a good time, Jen
Starting point is 00:57:45 This was Casey's fault. So that one, by the way, that last one, Country Girls Make Do, the artist is called Beats by AI. It appears to be the creation of someone who goes by Sam Stillerman. So this appears to be a kind of new avenue for creators who want to make something popular on the apps, and they're going nuts. One of the clips for Country Girls Make Do I saw had 750,000 likes. So, you know, this is, like, yes, still a niche phenomenon, but it's getting a lot of eyeballs on it, and it seems like people are really enjoying it. Dear God. I mean, this to me is the tragedy of parenting.
Starting point is 00:58:24 Say more. You invest yourself into parenting a child. You raise them thoughtfully and mindfully. You set them on the right path and get them a good education. And if you're really successful, someday they might turn around and play country girls make do while they film you on TikTok for views. You know, I don't know in the end that there is something all that novel about this. I can remember 30 years ago being in middle school and listening to like Adam Sandler CDs where he wrote sexually explicit novelty songs and cracking up with all of my friends.
Starting point is 00:59:02 Tenacious D, lots of artists in this sort of shock genre. But again, what's new is that if you're not somebody who has a great voice, if you can't play any musical instrument at all, you can now just go buy a Suno subscription and make a song that seems plausibly like a country song and all of a sudden, you know, get hundreds of thousands of likes. And is your contention, and I'm going to say up front,
Starting point is 00:59:25 I think this is a bit of a stretch, to classify this as AI art. Some people say it's the kind of stretch that you would encounter if you glued your balls to your butt hole again. Stop! Sorry, go ahead. So I think this is like a novelty.
Starting point is 00:59:39 I do not think these songs are going to be topping the charts. I think this is basically prank humor for, like, 17-year-olds. I think that that's absolutely right, and yet I don't see any reason why it would end there, right? I think people are happy to use apps like Suno in this kind of jokey context because there's no expectations for them. If you're an unknown artist, nobody's going to get mad at you for doing this. You can just sort of put it out there and see what people think. But will some name-brand artist be releasing some kind of AI-powered music in the near future? I fully expect that.
Starting point is 01:00:15 Yeah, yeah. I mean, I think we should coin a term for this genre. What's that? I don't know. Slop rock? Slop rock? Yeah. Or shock slop?
Starting point is 01:00:25 Yeah, there's shock slop. I think shock slop is a subcategory of slop rock. Okay. Yeah. I'm glad we got this sorted. I think conceptually, I agree with you that there are young people out there who just have a much different relationship with AI art and AI creativity than, than I do. I fully accept that those people are going to grow up into, like, consumers and
Starting point is 01:00:52 tastemakers, and that probably all of these sort of sentimental attachments that people like you and I have to, like, human-created art will inevitably morph over time. I just have to think there's, like, a higher and better use of this technology that humanity has spent something on the order of trillions of dollars developing than making songs about filthy country music. You don't think that there is something miraculous about the fact that the same technology that made country girls make do can also be used to find novel new drugs. That's incredible to me. It truly is a dual-use technology.
Starting point is 01:01:27 Yeah. Can be used for good and better. That's what Kevin means when he says that. I'm going to need Josh Hawley and any other legislators who are looking at our last segment about Meta AI chatbots to also take a close look at banning these songs from my TikTok feed and Casey's, frankly. Yeah, so listen, I'm going to keep my eyes trained on these folks that say that, you know, AI art is all bad and we shouldn't use it and we should only support human art.
Starting point is 01:01:56 I understand where that impulse is coming from. I love human-based art. I want to see it continue to flourish. And I'm also going to keep my eye on these merry pranksters that are using AI in these unsanctioned, filthy and disgusting ways because I think the history of art is that stuff that starts out on the fringes, does eventually move into the mainstream. And this could be the vanguard, Kevin, of a new AI slop rock movement that takes over the charts. Yeah, country girls make do could be this generation's version of Marcel Duchamp's fountain, his famous
Starting point is 01:02:30 urinal. Exactly. There's a new champ, and it's not Duchamp. Sam Stillerman, it beats by AI. What a world. Anyway, do you want to hear any more of that song? Nope. Okay. You know what I'm going to be able to be. Now, Casey, we got some feedback on last week's episode. Oh, what did people say? Well, I don't know if you saw this, but a listener wrote in to tell us that we had made a grave error. Which error was that? So during our segment...
Starting point is 01:03:25 Wait, was it by starting the podcast? Sorry. No, we get that email every week. This was a new complaint. Okay. This was from a listener named Ben, who wrote in to say that during our Hot Mess Express segment, I had made a mistake in making the sort of, uh, onomatopoetic
Starting point is 01:03:45 sound of a train. And I think we should just play Ben's voice memo that he sent. Okay, let's hear this. Hi, guys. Ben here from the Twin Cities of Minneapolis and St. Paul. I am calling in with just a tiny problem that I have. I'm a big fan of the Hot Mess Express. It's one of my favorite segments. And sometimes when talking about the Hot Mess Express, Kevin chugga-chuggas, which is really cute and great.
Starting point is 01:04:10 My gut tells me that there should be two chugga-chuggas. Kevin only does one chugga-chugga. I don't know why or how he came to that decision, but if somebody could get this message to Kevin and see if he might consider adding one chugga-chugga to his chugga-chugga, that would be great. Love you guys. Love the show. Thanks a lot. Bye-bye. Well, what do you say, Kevin? So I remain convinced that I'm doing it right.
Starting point is 01:04:52 I'm a big believer in the book Elements of Style by Strunken White, which has the sort of classic writing advice to omit needless words. Less is more. Less is more. And so I think that if the gist of a train sound can be conveyed with one chug-a-chug-chug-a, that we shouldn't just add another one for added realism. But what do you think? Am I right or has been right?
Starting point is 01:05:12 No, I agree with you, and I would even go a step further and note that Ben says he's from the twin cities of Minneapolis and St. Paul. Well, guess what? You can only be from one city. So I have to call you out, Ben. Why don't you get your facts straight about yourself before you come for other people? Yeah, Ben. Yeah. Chugga, chugga. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited this week by John Boone. We're fact-checked by Caitlin Love. Today's show was engineered by Katie McMurran. Our executive producer is Jen Poyant. Original music by Elisheba Ittoop, Marion Lozano,
Starting point is 01:05:48 Rowan Nemistow, and Dan Powell. Video production by Soya Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole
Starting point is 01:05:55 episode on YouTube at YouTube.com slash Hard Fork. Special thanks to Paula Schumann, Pui Wing, Tam, Dahlia Hadad, a Jeffrey Miranda.
Starting point is 01:06:04 You can email us at HeartFork at NYTimes.com with your dirtiest country song. You know,
