Hard Fork - OpenAI Calls a ‘Code Red’ + Which Model Should I Use? + The Hard Fork Review of Slop

Episode Date: December 5, 2025

It’s A.I. model rollout season in Silicon Valley, and OpenAI appears to be feeling the pressure. Sam Altman, the chief executive of OpenAI, sent a memo to staff on Monday declaring a “code red” effort to improve ChatGPT and delay other initiatives. We explain why the latest frontier models from Google and Anthropic have OpenAI spooked and how the company is reshuffling priorities to respond. Then, we give our honest thoughts on which A.I. models we like best and share how we’re using A.I. in our day-to-day lives. And finally, we take a look at some of the most popular A.I.-generated content on the internet this week in our latest installment of the Hard Fork Review of Slop.

Additional Reading:
What OpenAI Did When ChatGPT Users Lost Touch With Reality
Google Unveils Gemini 3, With Improved Coding and Search Abilities
Tourists Tricked by Fake Royal Christmas Market
Deepfake of North Carolina lawmaker used in award-winning Whirlpool video

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Transcript
Starting point is 00:00:00 Casey, how's it going? Good morning, Kevin. I am doing well, as well as can be expected, given that I had a colonoscopy yesterday. Yes, I heard about this. How did it go? Well, I got a clean bill of health. I will say, though, there was one moment during the procedure that was sort of alarming to me. What was that? Well, I had met, you know, the various nurses and the doctors and everyone was so friendly, you know, and was introducing themselves.
Starting point is 00:00:25 But as they sort of put in the medicine to make me kind of go under, I noticed that there was one medical professional who was against the wall, and she was scrolling through her phone. And the last thought I had before I went under was, I really hope she's not looking up how to do a colonoscopy. You know what I mean? Because she kind of had that look on her face. Like, I need to jog my memory about what I'm doing here. And I thought, oh, God, I hope she already knows. No, she was on TikTok. She was live streaming your colonoscopy to her hundreds of thousands of followers. I had that thought.
Starting point is 00:01:00 I was like, could we get more or fewer viewers to the YouTube channel if we went live with my colonoscopy? They're always saying, be authentic. Exactly. Bring your whole self to work. Or yourself whole. Yeah, yes. To work. I'm Kevin Roose, a tech columnist at The New York Times.
Starting point is 00:01:21 I'm Casey Nude from Platformer. And this is Hard Fork. This week, Open AI declares a code red. Why the competitive landscape in NIST. AI has Sam Altman scared. Then, how we're using all the latest AI models. And finally, we're heading back to the theater for the hard fork review of Slop. Well, Casey, do you feel a little nervous energy, a certain frisson of tension in the air crackling through San Francisco these days?
Starting point is 00:01:54 Absolutely, Kevin. There's a chill on the back of my neck and an eerie silence as I walk down the streets of the Mission. Yes, well, that is because OpenAI is in a code red. Code red. Now, as you will remember, a couple years ago on this show, we talked to Sundar Pichai, the CEO of Google, when they were in their own sort of code red period, which he said was not actually called code red, but someone over there was using that term.
Starting point is 00:02:19 And that was sort of when they were on their heels, taken aback by the surprise success of ChatGPT, and they were racing to get their own version of a chatbot out. And they were sort of in a corporate state of panic about this. That was their code red. But now we have a new code red. And it is at OpenAI. Sam Altman reportedly declared a code red this week about some worrying trends they're seeing with ChatGPT usage. And I think in general, beyond just OpenAI, there's just been a lot happening at the frontier AI companies that we should talk about. A lot of new models coming out, a lot of discussions about the sort of state of AI right now. So I thought today we should just kind of get
Starting point is 00:03:02 into it all, starting with this code red. Yeah, let's talk about it. Because, you know, for listeners who may be curious, a code red is the second most dire state of emergency a company can declare with number one, of course, being a Baja blast. So Code Red is just below that. Yes. Yes. If we get to Baja blast, I'm ducking him cover. Yeah, me too. I'm leaving the city. I'm heading to the bunker. So because this is a segment about AI, we should make our AI disclosures. I work for The New York Times, which is suing OpenAI and Microsoft over alleged copyright violations. And my boyfriend works at Anthropic.
Starting point is 00:03:37 Okay. So let's start with OpenAI. Casey, what was in this code red memo? Yeah, so this was reported by the information. Sam apparently sent employees a memo on Monday. And interestingly, Kevin, your colleague Cash Hill had reported recently that OpenAI had declared a code orange. so they are moving up the ladder of distress here.
Starting point is 00:04:00 But the upshot from this memo is that OpenAI is going to start devoting more resources immediately toward improving ChatGPT, and they're going to be delaying work on some of the other projects they had going, including ads, AI agents, and Pulse, which is this daily digest feature that they launched a couple months ago. So on one hand, it seems sort of obvious to me that they would be wanting to put a lot of resources toward improving ChatGPT, like that would sort of seem to be the norm to me. But on the other hand, if this actually does result in them pulling engineers off of other projects, well, maybe that shows that they are taking this seriously. Yeah. Casey,
Starting point is 00:04:38 why are they doing this right now? Why are they feeling so much urgency around bringing people back to chat chit? I think there are two big reasons, Kevin, and their names are Gemini 3 and Opus 4.5. Over the past few weeks, we have seen Google and Anthropic both release state-of-the-art models that in various ways challenge some of the core pillars of what Open AI is trying to do. We know that just a few weeks ago, Sam had sent another memo to the OpenAI team on the eve of Gemini 3 coming out saying, hey, we may be heading into some rough waters here. The belief was that Gemini 3 was going to be so good that it was going to cut into OpenAI's
Starting point is 00:05:22 growth, both on the user side and the revenue side. and that creates all sorts of problems for Open AI, right? This is a massively leveraged company that is wholly dependent on subscription revenue that is trying to build out a consumer product while competing against in Google what is one of the biggest and richest companies in the world. So if you get to a point where Google's models are truly better and the cost of switching are quite low, then things start to get very difficult for Open AI very quickly. Yeah, I think that's really important.
Starting point is 00:05:54 And I want to just underscore that, because I think what's happening here is a combination of things. One is that I think for a while, OpenAI and, to a lesser extent, Anthropic were both sort of surviving on this moat of the model, right? They had the best models in the world, and that was kind of what separated them from the rest of the pack. If you wanted to work with a world-class model, if you were doing some kind of agentic software development, if you were trying to do a lot of vibe coding or something, you really
Starting point is 00:06:49 weeks ago is good now. I would say it's at least as good as chat GPT at many of the tasks I've been trying it on. And it's really hard to imagine competing with Google, a company that last quarter did $100 billion in revenue. This is a company that has more resources and money and engineering talent than anyone else. And do you really think they're like sort of worried about how many $20 a month subscriptions they're selling? No. Once their models are good, they're going to like start subsidizing the hell out of them, and they're going to drive the cost very low, and they're going to try to steal market share. And I think that's the sort of phase they are right now is that they are realizing, oh, we've caught up, we have something compelling, and we can
Starting point is 00:07:33 just kind of drive these other companies margins down by offering our thing very cheaply. Yeah. Well, so let's talk then about a few other details from this memo and the kinds of improvements to chat GPT that OpenAI now says it is going to be working on. The memo includes personalization features, so further customizing how chat GPT interacts with you, improving the behavior of the model. I'm not quite sure what that means, although one thing it did say was they want chat GPT to refuse you less, and then improving speed and reliability. I have to say, these are things that I just assume that Open AI is always working on anyway,
Starting point is 00:08:11 right? Like, these don't feel like particularly big swings. They don't feel like a giant change in direction. What they do seem like to me, though, Kevin, is the Facebook playbook, which is something we've been talking about on the show for a while now. This is a company that has brought on a lot of people who used to work at Meta. And what kinds of things do they do over at Meta? Well, they try to create a perfectly personalized custom feed for you.
Starting point is 00:08:33 They try to give you exactly what you want, and they don't want to refuse anything that you want for them, right? So this seems, in other words, like they are going for engagement first and foremost, and I think that has a bunch of interesting implications. Yeah. So I think it's too early to say that, like, Open AI is screwed here. The chat GPT is in a bad place. They're obviously still the sort of world leader. They have the most name recognition. I think they've gotten a kind of ubiquity among AI power users that is going to be very hard to unseat.
Starting point is 00:09:03 What do you think about this decision? What do you think about this direction for OpenAI? Do you think that they are right to be worried? I think, look, if OpenAI flames out, all of us will be able to look back and identify 15 huge mistakes that they made, right? It is just as possible that some of the same bets that they are making now may pay off. And right now, we're in this moment of uncertainty. But if you want to take the bear case, which a lot of people are making this week, here's what you could say. This company is massively leveraged, right? They've made a ton of spending commitments, into the trillions of dollars, that rely on revenue that is not close to
Starting point is 00:09:44 materializing. And if you look at their product organization, they are not focused at all. they are trying a little bit of anything and everything. One reason why we've talked about SORA, their video generator, so much on the show, is it seemed like such a weird departure from their core focus, right? So you have this company that has its fingers in many, many different pots, most of them are not generating revenue. It has these massive spending commitments, and now all of a sudden, some of the other labs seem like their models are leapfrogging them.
Starting point is 00:10:14 So, yeah, you can take all of those facts and paint a potentially dire picture about the future of OpenAI. Yeah, there was some interesting discourse this week. Someone was pointing out that OpenAI has not had a successful pre-training run in quite a while. This was something that Sam actually brought up in one of his sort of Slack messages to staff a couple weeks ago, is that they feel like Gemini 3 is like a pretty amazing sort of pre-train, which is, you know, the first step in the AI process, when you're building a large language model and you're feeding it a bunch of information.
Starting point is 00:10:48 And I think that the sort of conventional wisdom among, like, AI heads has been that, like, pre-training is kind of hitting a point of diminishing returns, right? That we've sort of sucked up all the data, fed it into the models, made these models as big and efficient as they can be, and all of the sort of low-hanging fruit now is in the post-training phase. So I think what we're seeing now is that OpenAI realizes that it has a problem with pre-training specifically. And that is harder to fix than post-training. It's expensive. You have to redo these training runs. You have to find whatever is messing up the pre-trains. But that is, I think,
Starting point is 00:11:23 where they are going to be focusing their research energy. Yeah, definitely something that Open AI is concerned about. So thanks to these memos that have been leaking out, we also know that Open AI is training more models, that it thinks will be better. We'll sort of catch up to the frontier or, you know, advance the frontier in some way. And one, One of them is called garlic, and another one is called shallot peat. So make of that way you will. They have a real Allium thing going on over there. They're getting very close to being able to make a mirroix.
Starting point is 00:11:56 I know what that is. Put a little carrot and celery. Yes. You've got a stew going. Now, what do they say about those models, though? Because I believe we saw some reporting in the information that said that at least they believe that this next series of models will bring them back to or maybe even a bit ahead of the state of the art. Yeah, I've been talking to some folks over there. They seem optimistic about these models,
Starting point is 00:12:18 but it's also not clear yet whether they will be as good as they hope they will. All kinds of things can get messed up in the late stages of training a model. And so I guess we'll just have to wait and see. Let me add one more point about all this, though, which I think is important, which is the mere fact that Open AI's current focus is just kind of clawing its way back to parity with its biggest rivals is a big part of the problem here, right? Think about the position that Open AI was in just about three years ago this week, which was just days after the launch of Chad GPT, the world was their oyster, right? They had this massive head start over everyone, and they had been able to maintain that lead,
Starting point is 00:12:59 even in the face of, like, historic turmoil, including the ousting of their CEO and then bringing him back, right? And I think for months, I was honestly astonished that they had been able to release feature after feature that was keeping them so far ahead of the competition, now does seem like the first moment after the release of Chatubit where maybe they're just starting to fall a little bit behind. And Kevin, I have to say for Open AI to realize its ambitions, it is not going to be enough for them to make a model that is as good as Gemini 3. They need to be able to leapfrog it again. Right. They are not going to win by tying for first place. That's right. That's right. All right. Let's talk about some of these other AI models and some of these other companies.
Starting point is 00:13:42 that have been coming out with new things recently. And I want to start with Gemini 3, a model that we've mentioned a couple times already today. We talked with Demis and Josh about it on our bonus show, on the day that it came out. We've now had a couple weeks to play around with the model and start using it, and I want to know your impressions. So I think the number one observation I have about Gemini 3
Starting point is 00:14:04 is that it is just faster than the competition. And this matters a lot, right? Often when I'm finished writing a column, I will ask both ChatGPT and Gemini to fact-check it. ChatGPT's fact-checking is usually more thorough and better than Gemini 3's, even today. But Gemini 3 is a lot faster. And in AI, speed matters a lot.
Starting point is 00:14:24 And the faster something is the more often you use it. So I think that's been really powerful. Now, do you fact-check the fact-checked? Do you have to, like, go in and sort of manually see what their models are telling you is correct? Yeah, so what I'll do is, you know, it'll basically just say, like, hey, like, you got this date wrong, or, you know, you got this name wrong.
Starting point is 00:14:40 And then I go look it up myself, and, you know, nine times out of ten, they have, like, caught my mistake. So I'm not just saying, like, tell me that everything in here is perfect. I'm saying, can you find something in here that you think is wrong? And by the way, you know, this was something that a year ago was not good. Right. A year ago, we were checking the model for hallucinations. Now they're checking us for hallucinations. It's really true. But this is something that they used to be quite bad at.
Starting point is 00:15:01 Totally. Yeah. I really like Gemini. I've been a sort of quiet Gemini Stan for, for a while. Now, I really liked 2.5, the model that preceded this. I have been using Gemini, sort of one of my two kind of daily driver models. We'll talk a little bit later about how we're using this stuff. But I think this is a really powerful model. I've been doing some research for the book that I'm working on. Gemini has been extremely helpful with that.
Starting point is 00:15:30 Things like organizing timelines, pulling up research papers, putting things in sequence, things within large documents that I'm sharing with it. I think this is just a really good model. And to me, it's not as interesting or fun to talk to as some of the other models. I don't feel like it has much of a personality. But it is a workhorse, and it is fast, you're right. And this is not even the fast version. They're going to be coming out with a flash version of this model at some point.
Starting point is 00:16:03 So I'm excited for that. and I think they really cooked with Gemini 3. Yeah, so on the occasion of the release, Google said that about 650 million people a month are now using Gemini. Open AI, annoyingly, reports weekly user numbers. They say they have more than 800 million weekly users of ChatGBT. Interestingly, neither of these guys reporting daily numbers,
Starting point is 00:16:25 and I think that's because most people still are not using AI daily, right? So that's why we're sort of in this weird middle zone. But here's the thing. If Gemini has gone from zero to 650 million in this short, of a time, there is every reason to believe that they can catch open AI, right? And that even though chat GPT is synonymous with AI for a lot of people, it is just turning out maybe not to matter as much as you might think. Right. And I'm always a little suspicious of these Gemini numbers because I'm not sure whether they're just counting sort of people who sort of proactively go to
Starting point is 00:16:56 the Gemini website or the Gemini app or whether they're also counting people who like click on the little Gemini thing inside Google Docs or Gmail or something. to me that, like, indicates a little bit less intent, and maybe I take those numbers a little less seriously. But that also, on the flip side of that, is like, Google has this massive distribution advantage, right? It does not have to convince people to go to a website that they are not used to going to or download a new app. It is already on, you know, billions of phones and devices. People already have Google as their default homepage. They're already using Gmail. They're already using all these other Google products. And I think in a world where models are
Starting point is 00:17:33 becoming more commoditized, or at least there are sort of more labs at the front of the pack, distribution is going to play a much bigger role. Absolutely. Okay, now let's turn to Anthropic and their new release, Claude Opus 4.5. Casey, have you spent time playing around with this model? I have, and I think this is a really, really good one. Now, famously, my boyfriend does work at Anthropic, so you should feel free to apply an 80% discount rate to everything that I'm about to say. But here's what I'll tell you. Before 4.5, I was not really using Claude. on a daily basis. I was trying it every once
Starting point is 00:18:05 on a while to see what it can do as I do with all other models. But for me, the daily drivers were absolutely chat GPT and Gemini. Those were the most useful models. When Opus 4.5 came out, I put it through a test
Starting point is 00:18:15 that I've been giving every model forever, which is I would give it some sort of unpublished study that I might want to write a story about. And I would say, write a column about this study in the style of Casey's platformer just to see what would happen.
Starting point is 00:18:29 To this day, if you do this with Chat Chachypte 5.1, not good at all. It just gives you a bunch of bullet points and bold stuff I would never do. If you give it to Gemini 3, it kind of sort of is structured like something that I might write, but it has a lot of obvious AI tells. I did this for the first time with Opus 4.5, and it honestly sent a chill through my spine because for the first time, I was looking at sentences that it looked like I could have written
Starting point is 00:18:54 them. In particular, it wrote a conclusion that I was like, I would write a conclusion that looks like that. so we talked a lot earlier this year about the concept of style transfer that was the studio jibbley moment where all of a sudden you could make any image look like this you know japanese anime was really kind of fun i've been waiting for the moment when that happens in text this was a moment where i was like oh my god it is starting to happen kevin so that was the first thing i saw opus 4.5 do that made me say okay they may have something here yeah i am not conflicted by being in a romantic relationship with
Starting point is 00:19:27 anyone who works in itthropic. So maybe apply less of a discount rate to what I'm about to say. But I love this model. I am having so much fun with Claude Opus 4.5. It is one of my two daily drivers along with Gemini 3. I've been using it for all kinds of book research for preparing for podcasts and interviews. I've been talking to it about all kinds of family things and medical things and parenting things. And I just think there's like something special about this model that I have not felt since a previous version of Claude, Claude 3.5, sonnet, parentheses, new, which was to that point my favorite model to talk to.
Starting point is 00:20:12 And this is sort of bringing back that same feeling of, like, oh my God, this is an incredible experience, talking to this thing. Now, can you say, what do we know about what went into the making of 4.5 that might explain some of these gains that you and I are both feeling? So interestingly, I think Anthropic actually underhyped this release. They didn't do a big, like, splashy thing about it. They made some claims about how good it is at coding
Starting point is 00:20:38 and agentic tasks like computer use. They also said that it was really good at deep research, and they called it their most robustly aligned model they've ever released. But I think they really wanted to let the model do the talking. And people are kind of amazing. by this model. Recent hard forecast, Dean Ball,
Starting point is 00:20:58 had a great post about Cloud Opus 4.5 in which he said, this model is a beautiful machine, among the most beautiful I had ever encountered. And I won't go that far,
Starting point is 00:21:11 but I will say that, like, this is, there's, there are these sort of intangible and hard to quantify properties of models
Starting point is 00:21:20 that you just kind of get a sense of when you use them a lot. Yeah, I think that in particular, the Claude models have always excelled at kind of having an empathy for the user that stopped short of a sycophancy, right?
Starting point is 00:21:31 It felt like you were talking to somebody a little bit more like a therapist where there was like some sort of remove and yet you also sort of felt like you were interacting with something that was like taking you very seriously and was like trying to treat you warmly and that just makes Opus I think good for a lot of things.
Starting point is 00:21:48 I will say recently I had this procedure I probably now talked about it too much but here's the thing. When you're about to have a colonoscopy, or maybe let's say you're going through the preparations for a colonoscopy, many gross things are happening to your body. Your boyfriend doesn't need to know about them. Your friends don't want you to call them asking questions. But you go to this model and you say,
Starting point is 00:22:06 this specific thing just happened to me. What do you think about that? And you just get back a response that is very warm and humane. And so for that reason, I thought I was really good. Yeah, I appreciate about Claude that it will tell me when I'm being ridiculous. Like the other night I was like up to way too late, like asking it some banal question about like Christmas shopping or something
Starting point is 00:22:27 and at one point it was just like Kevin it's after midnight, go to bed wow that gets it something that I think is really interesting about the Claude models and I think opens up what should be something fascinating to watch over the next year
Starting point is 00:22:42 when you look at the Google and the Open AI models those are in some large sense optimizing for engagement right we know they want you coming back to them every day make this your sort of primary driver. We also know that Google is already testing ads and AI. We believe that Open Eye is going to launch this at well.
Starting point is 00:22:59 I do think that kind of changes and probably perverts the incentives or what kind of AI systems you're going to have. I'm pretty confident. Claude is just not going to do that. I don't think ads are going to be in Cloud in the next year. I don't think it's going to become an e-commerce engine. It's just kind of going to stay the way that it is. And so I think that gives Claude this really interesting opportunity
Starting point is 00:23:19 in a world where everyone else is pushing for engagement, commerce, monetization. Anthropics model is just very different. They're building for the enterprise. Like, Claude.AI is almost an afterthought for them, right? Because what they really want to do is they want to sell an API to a company and charge them millions of dollars to, like, do agent decoding. Right.
Starting point is 00:23:37 So Claude winds up being this kind of like, I don't know, like bonus child that they have that is like really good at a bunch of things. And I just kind of don't think it's at the same risk of being ruined in the next year that the other ones are. Yeah, I mean, I think there is like an interesting tension. that you're identifying, which is, like, on one hand, anthropic of the big frontier labs is, like, the most sort of focused on these, like, enterprise work use cases,
Starting point is 00:24:02 like specifically coding. And that's where they make most of their money. That's, like, the fastest growing part of their business. They're not really competing in the consumer space anymore. Because I think they realize, you know, to their credit, that, like, chat GPT just has way more users and way more sort of purchase among, like, ordinary users. Yeah, they lost.
Starting point is 00:24:21 They lost. And I think that could incentivize them over time to make this thing more boring and less interesting to talk to, just sort of make it like a perfect, efficient coding coworker and to stop investing in some of this other sort of more soft like model behavior stuff. But I really hope they don't because it is a joy to talk to an AI model that actually feels like it has, I don't want to say like a consistent personality. But, like, I really liked the way Dean Ball put it in his essay. He said, Claude Opus 4.5 just feels like it's playing in the same musical key all the time, right? Like, you can open a new chat with it. You can talk to it about something completely different. And what comes back at you feels like it comes from the same place sort of almost philosophically as the thing that you were talking to about something completely different.
Starting point is 00:25:11 I mean, I think they are going to keep going in this direction, because what are they trying to build? They're trying to build an AI coworker, right? And they want that coworker to be humane and to play in the same key, you know, every time that you speak with it. So I think you'll probably see them go less into personalization than you see these other companies go into. So this is just like really, you actually just have two very different points of view about what an AI tool should be.
Starting point is 00:25:35 And we're going to get to watch that play out next year. Should we talk about the Soul Doc? Let's talk about the Sol Doc. Okay. So a lot of the chatter about Opus 4.5 in the past week has been about what's come to be known as the Soul Document. That's S-O-U-L, not S-O-L-E. Or S-O-E-U-L.
Starting point is 00:25:50 whoever you spell the city in South Korea. That's right. I think you got it. No, S-E-O-U-L. That's right. Yes. This is something that actually came out because these kind of, you know, internet commenters were sort of freaks.
Starting point is 00:26:05 Freaks, yes. These people who like love to jailbreak new models and sort of figure out all the hidden Easter eggs inside of them had discovered or claimed to have discovered this thing. This sort of, it wasn't exactly a system prompt, which is the thing that you tell the model before it starts responding to users. it was actually in the weights of the model. So, like, part of the sort of pre-training process, and it was this kind of fascinating document about Claude and sort of explaining what Claude is and what Anthropic is and this weird position
Starting point is 00:26:37 that they occupy in the AI landscape where they're very worried about the dangerous effects as technology, but they're also racing to build it and how, like, Claude is sort of this, which basically just, like, kind of a biography of Claude and Anthropic, but, like, inside the weights of the model. And at first, people didn't really know, like, is this real, or is this just sort of being hallucinated by the model? Models are notoriously unreliable when you ask them about themselves and their internal workings.
Starting point is 00:27:02 But on Monday, Amanda Askell from Anthropic confirmed that this was based on a real document and that this was part of Claude's training process. She said they are still working on it. They intend to release more details about it soon. But this has become endearingly known within Anthropic as the Soul Doc. And what a fascinating thing. It is a fascinating thing. I mean, look, this is a company
Starting point is 00:27:23 that fully believes the thing that they are making is going to become sentient, conscious, and will need to be treated with all the respect that you would afford another human being. So they are sort of way out on a limb compared to their competitors getting ready for that. And it really tells you a lot about the people that work at Anthropic
Starting point is 00:27:38 that they are building soul docs for their AI models. I mean, I think it tells you what is coming. I was recently at a, I went to an AI consciousness conference, which was fascinating. And I'm going to be writing about it in my book. But it's like there is now this sort of seeds of this conversation happening among the people at the big labs who I think do understand that these systems are becoming increasingly like we're going to get hammered by the anti-anthropomorphization people for everything that we're about to say. But they increasingly see these things as having some kind of inner awareness, some kind of ability.
Starting point is 00:28:18 to reflect on maybe things that happen to them during their training processes, maybe some consistent emotions that they tend to express. And like, there are lots of outstanding questions. I am not at all certain about what my sort of P consciousness is. I think it's very low right now, but, like, people in serious jobs at serious companies are starting to think about the possibility,
Starting point is 00:28:42 however remote, that these things are or may soon be conscious. And I just think that's fascinating. Yeah, I agree with that. What kind of threat is Anthropic strategically to Open AI right now? I mean, I think right now it's primarily in the enterprise. Like, at the start of this year, Anthropic had less than a billion dollars in annualized revenue. As it's coming to the close of the year, it said that it is expecting to have about $9 billion in annualized revenue. So it did that by selling into the enterprise.
Starting point is 00:29:12 If you are a developer or you're a big consulting firm and you want to create these agenetic workflows, most companies that are buying this software are buying it from anthropic, or I should say maybe a plurality of them are buying it from Anthropic. And so Anthropic has just become one of the fastest growing startups of all time because they've just created this massive opportunity. If they were not on the chessboard, that $10 billion would probably be going to somebody else, and that would probably be some combination of Open AI and Google, right? So that's a significant amount of revenue that Open AI is losing out on this year. I believe Open AI is projecting to have about $20 billion in revenue this year. So you can imagine,
Starting point is 00:29:48 how different the picture would look for them if they've been able to capture the enterprise market and increasingly, you know, Anthropic is winning it. There's a weird sense in which ChatGPT was actually the best thing that could have happened to both Google and Anthropic. You know, like, I think at the time
Starting point is 00:30:04 ChatGPT came out was this huge success. It was like everyone was talking about it. It was sort of took AI into this like new era. And I think for Google, the reason that was helpful is because it was the thing that like woke them up, right? They had been, you know, tearing themselves apart with all this, like, bureaucracy and infighting, and they couldn't really get their act together for various reasons.
Starting point is 00:30:25 And Chattee sort of forced them to focus and bear down and, like, become more efficient and better at shipping these things. And for Anthropic, it was sort of like, well, I guess we don't have to, like, make a consumer chatbot now because that lane is already full. And so I think they were able to kind of pivot into this interesting new direction that I think ended up being better for them than what they would have gotten if they had tried to compete with ChatchipT. Yep, good take. Casey, is there any other big news in the AI world from the past week or two that we should talk about?
Starting point is 00:30:56 I mean, maybe just real quickly. We've seen a couple of interesting departures. Jan Lacoon finally left Meta. I think everybody has been waiting for that ever since they installed Alexander Wang as the head of the Meta's superintelligence division. Yes, hard to be a Turing Award-winning godfather of AI who is reporting to a guy in his 20s. Jan is apparently going to be doing a new startup that is going to build world models. Jan Lacoon is one of the most famous LLM skeptics out there.
Starting point is 00:31:23 He says that you cannot get to AGI using the approach that all the other big labs are using right now. So we'll definitely be interesting to see what he comes up with. The other big move is John John Andrea, who was the longtime head of AI at Apple. He is stepping down from his position. And that also, I think, was long expected because of all of the problems that Apple has had getting its AI efforts off the ground. And in fact, Kevin, I think the fact that John Andrea is leaving might just be a sign that Apple is low-key giving up on AI overall. We know that they've signed a deal with Google to make Gemini, the kind of core of their AI efforts. Maybe this just becomes the kind of thing where they don't have to build it. They just buy it for cheap from someone else. They're reportedly only going to pay Google a billion dollars a year, something that they can very easily afford, and maybe they'll be fine. Yeah, I don't know how to read this exactly. I mean, you could read this as like they're giving up on AI, uh, But they also just brought in a guy from Microsoft to be their new head of AI. Let me tell you something.
Starting point is 00:32:22 When you bring in a guy from Microsoft, that is a way that you're giving up on AI. No, actually, he was at Google for many more years before that. He was only at Microsoft for like four months, which there's an interesting story there that will have to be told someday. But basically, I think you can read it as they are giving up or they are sort of rebooting their AI efforts. They're saying, like, what we've been doing is not working. We're going to bring in a new team.
Starting point is 00:32:39 We're going to start fresh. And we're going to try to give this thing a go. Bro, if you're starting from scratch in December 2025 on your AI program, You're cooked. You truly know it has ever been more cooked. Come on. When we come back, we'll continue this conversation and tell you how we've been using the latest AI models.
Starting point is 00:33:16 Okay, so there's lots happening here in the industry in Silicon Valley in San Francisco, but I want to end on a practical question that we get a lot from listeners to this show, which is like, look, what should I be using right now? What is the best AI model? What is the thing that will give me the most advantages and annoy me the least? Like, if I can only subscribe to one model or maybe two models, what should they be using? So I don't think there is a great one-size-fits-all answer to that question, Kevin. I think I could say confidently that you can use either Chachibit, Gemini, or Claude for many things and probably be fine.
Starting point is 00:34:03 And there's probably some vast set of use cases for which all three of those models are roughly equivalent, okay? So that's going to be my answer for like the 80th percentile of our listeners, right? But let's say you're moving up into the 20th percentile of our top AI users, the real freaks out there, okay? Now I'm going to tell you, you are just going to want to experiment with these models all of the time. I mean, just within the past few months, we've seen each of these companies release a very capable new model. And you want me to tell this 20 percentile, oh, no, just stick with one of them forever. No, you have to be mixing it up. Again, I just use Claude, a model that I have not found very useful at work upon the release of its new model.
Starting point is 00:34:41 and I said, oh my gosh, okay, the game just shifted again. I want to bring up this metaphor I've been thinking about over the past day or so. In 2023, the sci-fi writer Ted Chang wrote this widely read and shared essay in The New Yorker called Chat GPT is a blurry JPEG of the web. Do you remember that? Yes. And the argument that it made was a critique of ChatGPT saying, this thing really kind of sucks because it's just an amalgamation of everything that has ever been put on the Internet.
Starting point is 00:35:10 there's kind of no soul to it, right? But I thought about that metaphor of the blurry JPEG. Because when I used Opus this week and when I used Gemini 3 the week before, I had that sensation of, you know when you're loading up a web page and it is loading up a JPEG
Starting point is 00:35:26 and at first it doesn't load it in full resolution, it kind of gives you that blurry version first, and then a few seconds goes by, and then it shows you the higher resolution. We are in a moment where the AI is getting higher resolution. That was the feeling that I had when Claude was able to just create something that was writing sentences that for the first time felt like me, it was like, okay, the blurry JPEG is getting a touch less blurry. And so that's why I can't
Starting point is 00:35:51 give you a single answer to which model should I use. Because I think the answer to that is just going to be changing consistently over the next six months to a year. And if you really care about this stuff, you're just going to have to try new things. Yeah, I mean, to your point, it is amazing how quickly this stuff is moving. I was writing a book chapter the other day about the launch of chat TPT, and so I was going back and looking at some of the initial reactions that people had three years ago to the launch of this product. And it was so bad. It was so dumb. By today's standards, I could not believe how easily amazed people were by the fact that the thing could just string together plausible sentences on any given topic. And like, we should say for the time,
Starting point is 00:36:30 that was amazing. No chatbot had ever done that. But looking back, just even with three years perspective, it is just incredible how much my personal expectations of these tools have been raised. I would say, like, in the process of writing this book, these tools have probably saved me a year of my life, like a year that I would have had to spend, going to libraries, pulling clips, like doing research, stitching together ideas. It is implausible to me that I would ever do any project like this, again, without these tools. I think a lot of people, are feeling similarly in their own work. Yeah.
Starting point is 00:37:08 You know, I'll just say, I don't know if this fits, but there's this thought that I have a lot because I think, you know, there's much AI criticism out there. There's a lot of, you know, anger, hostility, skepticism. I think a lot of it is warranted. We talk about it a lot on the show. But I've come to believe
Starting point is 00:37:23 that there are fundamentally two different views of AI. There is what I call the California view of AI, which is what can it do. And then there's what I call the New York view of AI, which is what can't it do, right? And you see the what can't it view on social media a lot. You know, whenever an AI fails at some simple test, whenever it makes some terrible mistake and we say, aha, you know, screw this thing.
Starting point is 00:37:47 And then you have folks like us who I think are a little bit more impressed at like what it can do. The release of the models over the past few weeks has been a moment where I'm just glad that I have a default view of what can it do because it is changing people's workflows, jobs, lives in real time. And I think that if your default is what can't it do, you're just missing a huge part of the story. Totally. I have decided one principle that I am going to apply to my life going forward is that I'm not going to listen to opinions about AI from people who do not use AI. Like, I think that if you are not grounded in having firsthand direct experience
Starting point is 00:38:26 with these models for at least, like, I don't know, five hours, 10 hours, something like that with like the newest models, you actually are talking about. something that no longer exists. Yeah, you're a historian. So that's one side of it, is just that these things keep getting better. At the same time, I want to get your opinion on kind of this other, like, long timelines view that is coming into vogue in the San Francisco AI community. Dwarkesh Patel recently did an interview with Ilya Sutskiver, the famous AI researcher, and they talked a lot about how there's this kind of, you know, not necessarily like slowdown happening, but just like these models are not as useful as people want
Starting point is 00:39:03 them to be. Like, they are not out there, you know, adding trillions of dollars to GDP. Like, companies are not able to, like, fire half their workers and replace them with AI yet. And so that view is kind of springing up within the San Francisco AI crowd at the same time as, like, I think the models actually are getting better at the things that you and I care about. So how do you reconcile those things? I think both can be true, that we are still on a trajectory where the first likeliest thing to happen is that AI will just solve coding and software engineer. We will still have software engineers, but they will not be writing code by hand on our predictions episode later this month. One of my predictions may be that by the end of 2026, coding is just effectively solved.
Starting point is 00:39:47 This is just something that a lot of tools, even free ones, can kind of just do for you. But there's still a lot of other jobs out there. There is still a lot of translation left to be done. And not every job has as defined a rule set as coding does. So I think it can both be true that models are advancing in a way that it is bringing us closer to automating software engineering. And if you're an accountant, a lawyer, a doctor, AI still is just kind of something that is only momentarily useful. And I think the question will be, what will it take to generalize whatever is needed to solve coding for every other job? And how long will that take? Yeah, I think that's right. I think the race is still very much on. The models are still much getting better. It remains to be
Starting point is 00:40:31 seen how soon or quickly that will kind of diffuse into products that actually make life look very different for people like you and me and for coders and lawyers and doctors and everyone else who uses these things. Stay tuned. When we come back, we're heading to the theater for the Hard Fork Review of Slop. Bring your theater binoculars. They're called opera glasses. Your theater binoculars. This is why I have to keep you around. I swear to God.
Starting point is 00:41:04 Unbelievable. Well, Casey, it's time once again for one of our favorite segments. That's right, the hard fork review of Slop. The Hard Fork Review of Slop. This is, of course, our cultural criticism segment where we bring very serious analysis to this new medium of AI Slop that is taking over the world.
Starting point is 00:41:50 And today, we have some more examples of Slop for our listeners' critical consideration. Let's get into it. First up today, we have some holiday slop. The holidays are a time when people around the world are gathering with their families. And this year, they may encounter an Instagram video that shows a bunch of tourists going around a holiday market set up at Buckingham Palace. And Casey, let's play this clip from the BBC. Please.
Starting point is 00:42:17 Why are people coming to Buckingham Palace to see a market that doesn't exist? In recent days on social media, there have been AI-generated images of a Christmas market. They're fake, but that hasn't stopped people wanted to come here and experience a slice of the festive action. Tell us, why are you here? Oh, we've got for Christmas Market that's not here. Everyone's calling for this AI-generated advertising, so yeah. I was going to enjoy a mild wine, and now I've got my Nanny Wars chicken sandwiches.
Starting point is 00:42:48 I'm very disappointed. We see the funny side of it, really. We're going to go find alternatives. There are plenty of other Christmas markets across London, but if you do want to go to Buckingham Palace there is a gift shop first of all BBC thank you for what you do
Starting point is 00:43:02 I love that reporter he sounded so angry delightful he was just spitting mad at this whole situation it's fake yeah well look this is wonderful it's you know it's giving me flashbacks to the famous
Starting point is 00:43:14 Willie Wonka event that was also held in the UK in recent years where people did show up and there was a real event but the AI advertising had made it seem much more grand than it was. We now move to the next stage, which is that AI is just now advertising completely non-existent events for you to go to with your family. Yes. Whatever's going on in the UK, those people have to up their slop detection game.
Starting point is 00:43:37 I do think this opens up a very fun possibility, which is that they will actually now have to build a holiday market at Buckingham Palace to capture the obvious demand and the flood of tourists who are coming in to go to this non-existent holiday market. Well, and just to stop a revolution. I mean, you could tell a lot of those people were pretty, you know, angry about what was not there. Yes. Yes, this is going to lead to a whole new dimension of fake until you make it. Yeah. I mean, look, this one is so interesting to me, because on one hand, when you think about all of the different deep fakes you can make, few seem more innocuous than what if there was a Christmas market at Buckingham Pallet? That actually sounds like a lovely
Starting point is 00:44:11 piece of slot that you could make, you know, and maybe share with a few friends. But, you know, because we live in a nightmare information ecosystem where no one knows what's true and false anymore, You take this perfectly benign, you know, piece of content. All of a sudden, people are showing up the Buckingham Palace. So there are, you know, I'm sad to say, sorry to be a buzzkill. There are going to be much worse outcomes from this exact dynamic. This one is at least a little funny. But, you know, if I were a platform like a TikTok, I might be thinking about,
Starting point is 00:44:37 hmm, is it maybe bad for my platform that people are constantly looking at slop here and then going to go to non-existent events? Because eventually some of that anger is going to come back on the platform. They're going to be cold and they're going to have to eat, Nanny's chicken, whatever she said. I think she was having a cheeky nandoz. Was she having a cheeky nandoz? I believe she was having a cheeky nandos.
Starting point is 00:44:56 Wow, well, it all turned out fine for her. Sounds like. Okay, we have one more example of holiday slop this year, and that is holiday meal slop. There was an article recently in Bloomberg titled AI slop recipes are taking over the internet and Thanksgiving dinner. This was about the food bloggers who are noting to their chagrin that traffic to their website has fallen off a cliff since people are. increasingly turning to AI generator recipes. But they are also discovering that some of these
Starting point is 00:45:25 recipes don't make sense. Yeah. So there's really, you know, two stories here. One is about the fact that people are turning to AI tools and getting back these recipes that are just nonsensical. You know, these systems are not directly pulling from recipes. They're reconstituting them from a bunch of different things that they've seen online. And that's not going to work out for you every single time. A lot of folks found that out the hard way over Thanksgiving. There's a second story, though, which is all of the human beings out there who did the hard work of creating real recipes and then testing those recipes to make sure they work are now reporting that traffic to their websites is falling off a cliff. And I just want to say, this sucks. I hate this about AI. I want people like Yvette Marquez Sharpnack, who runs the Mexican food blog Muay Bueno and who posted photos of two different tamale recipes that people were making using AI tools that were just completely bogus. Like, I want her to be able to make a living.
Starting point is 00:46:21 And instead, all the AI companies came along. They remixed the entire internet, and they replaced it with what so far is worse. So I hate that, Kevin. Yeah, I think they should start selling these tamales. I know a holiday market where they could sell them. I love that you just waited through my whole rant so you could make your stupid joke. Listen. No, I agree.
Starting point is 00:46:41 I think this is a bad trend. At the same time, my wife, who's a very good cook, has been using AI to do some of her own cooking recently. and it's produced pretty good stuff. So I should say one man's slop is another man's treasure. Here's how we split the difference in my house because we did Thanksgiving for 14 this holiday season. Wow, you have 12 kids. That's amazing.
Starting point is 00:47:02 No, actually, our family's met for the first time, if you must know. Wow. And it went great, thank you. We all had a great time. Thanks to the families for coming up to the Bay Area for that. Anyways, point of the story, Kevin. What we did was we took a great turkey recipe from Kenji Lopez-Al, one of the great cooks in all the world.
Starting point is 00:47:18 Yes. We made his turkey and we used his recipe. But when we had questions about what we were doing, then we did turn to the AI chap, but we say, hey, should we maybe turn the temp up? Should we turn the temp down? We used it to get guidance along the way. Kind of split the difference there. That seemed to work out.
Starting point is 00:47:31 How did it turn out? We overcook the turkey. But I'm not going to blame AI for that. I'm going to blame the fact that it was the first time we used our oven to cook an 18-pound turkey, okay? Listen, my therapist does it all the time. We can only learn through experience, Kevin. Okay, next slop.
Starting point is 00:47:48 This one is not a holiday piece of slop. This is an educational music piece of slop. This was a great story that Katie Natopoulos wrote at Business Insider the other day about an Instagram account called Learning with Lyrics that has been flooding Instagram with these posts of AI-generated songs that basically explain topics that people might be curious about. Things like, why are manhole covers round? How Does Velcro Work, a topic we covered on our 50 Iconic Technologies episode.
Starting point is 00:48:20 Why are giant steel coils transported on their sides instead of flat? Now, Casey, have you heard any of these songs? You know, I haven't had the chance, but I'm hoping I could change that right now. Yes. This one is about how instant cold packs work. Let's take a listen. I'm curious how instant cold packs work. One squeeze and it's freezing cold.
Starting point is 00:48:40 But how can you create cold without a freezer? Instead of creating coldness, think of it like stealing heat, the pack has two things inside. A dry chemical like ammonium nitrate and a small patch of water. Now this account is apparently the work of a Cal State Long Beach student named Cashin Tomlinson, who told Katie Natopoulos from Business Insider, quote, I've always been someone who's curious about stuff. Which is just a perfect college student quote. Relatable king.
Starting point is 00:49:18 Also, I'm guessing that if these get enough views, he could make some money, in which case cash in could in fact cash in. Now, here's what I'll say about this. I had a sort of strong negative reaction to all of the AI slop recipes that are making life harder for human food bloggers. I'm actually fine with this.
Starting point is 00:49:36 If you are out there and you want to make a song about why giant steel coils are transported on their sides instead of flat, you're not actually competing with a human artist. You actually have that lane to yourself. And if you want to use an AI tool to do it, I say, God bless. Yes, it could be dangerous for WikiHau, which was the previous place that you'd go to find answers to like stupid questions. But WikiHow was one of the most disgusting websites ever created, just absolutely choked with ads, a website that actually hated all of its visitors and just wanted to collect cash. It's true. Actually, this, um, this, this, um, this This is sort of rhymes with something that I've been really interested in recently,
Starting point is 00:50:12 which is that I've heard that college students are now using these AI music generation tools to, like, make songs to help them remember things. Because some people are just, like, auditory learners. And so if you're like, how do I remember, like, the steps in the Krebs cycle, now you can just go make a Taylor Swift song about it with one of these AI tools. Actually, don't do that, because she'll sue you. But you can make a generic pop song about this, and that's maybe easier for you to remember than actually listing out all the steps.
Starting point is 00:50:36 You know, that's great. That actually reminds me of something I did the other day. which is that my friends and I had come up with a great idea for the first line in a gay Shakespeare sonnet. So, you know, there's some kind of rumors out there about Shakespeare's sexuality. We said, what would he sound like if he was really, you know, gay? And so we came up with the line, shall I compare thee to a boots-down sleigh? Which just seemed like such a good line. And so then I asked Claude to finish the sonnet.
Starting point is 00:51:00 And you know what? It did a great job. I don't even want to know what a boots-down slay is. I'll tell you when you're older. Okay. All right. Next up in the Hard Fork Review of Slop, this one comes to us from North Carolina, where a state senator named DeAndrea Salvador recently found herself in an ad by the Whirlpool Company
Starting point is 00:51:19 for a line of appliances in Brazil that she had not actually appeared in. The company had lifted a section from a TED Talk video that she gave back in 2018 and put it into their video about Sao Paulo and all the energy efficiencies that were in their appliances in Brazil. And by the way, thank you for coming to her TED Talk. Yes. Let's play a clip from this. These kinds of dangerous incidents can take root when people are faced with impossible choices. In the U.S., the average American spends 3% of their income on energy. In contrast, low-income and rural populations can spend 20, even 30% of their income on energy.
Starting point is 00:52:02 Okay, so that's the original TED Talk. Now, here's the Brazilian ad that they made out of this. People are faced with impossible choices. In low-income communities in Sao Paulo, the average electricity bill cost represents 30% of their monthly income. This is when energy becomes a burden. Consul, the leading appliance brand in Brazil, created a new way. Incredible.
Starting point is 00:52:24 So they didn't just lift this segment from her TED Talk and put it into their ad without her permission. They actually AI-fied her voice and put her synthetic voice into their ad to make it talk about Sao Paulo. You know what? This whole thing is so crazy. And just to add one more crazy twist. So the ad agency that made this ad is a subsidiary of Omnicom, which is one of these, you know, advertising giants. There's only, you know, a small handful of them. They're one of the big ones. And the subsidiary, DM9, had submitted this slop ad for an award at Cannes Lions, which is the global ad awards that happen every year. And it won their highest
Starting point is 00:53:04 award, the Grand Prix in the creative data category, as well as a bronze lion in the creative commerce section, Kevin. Incredible. Very difficult ads to win. And so after all of this happened, they had to return the awards that they had won for making this slop. That's incredible. That's incredible. I do think that DeAndria should be able to claim that she has a Can Lion's winner because she was placed in this video without her consent. She really is. So far on the Slop review, we've had one thing that I think was very bad. We've had one thing that I think was basically good. And now we have this, which I just think is so incredibly stupid. I can't believe it. Yes. Don't do this. Don't do it. Definitely if you're out there,
Starting point is 00:53:50 Whirlpool Corporation, do not make an ad featuring Casey Newton's voice lifted from this podcast and put it into an ad in, I don't know, Chile. If you want to know what I think about the energy situation in Sao Paulo, just call me, I'll tell you. All right, what else is in the old slop funnel, Kevin? All right, we got a gaming slop example. Have you heard about Bird Game 3? Well, I've heard a little bit about the buzz, but I haven't actually seen the video yet.
Starting point is 00:54:18 It's more like a, it's more like a chirp than a buzz. Okay. Thanks, thanks for that. We'll be right back. It's called Bird Game 3. And it is apparently all the rage. It is going viral on TikTok. One clip posted by someone named King Pigeon 76 has racked up more than 13 million views over the past week.
Starting point is 00:54:39 Let's take a look. Oh, God, he's so cheap. You just spam pecks. Who even plays Pigeon? Easy claps, man. Get out of my game. Whatever, dude. Okay, so this is a video.
Starting point is 00:54:52 It kind of, it looks like it is a bird fighting game. So in this case, it is a clip of an eagle fighting a pigeon. And I would say the pigeon overcoming the odds to beat the eagle in this game. Yes, so this has been going viral because this game doesn't exist. There is no bird game three. There's no bird game two. There's no bird game one. None of these games are real.
Starting point is 00:55:13 But people are using these video generators like Sora and Veo 3 in Gemini to create these videos and then post them to TikTok as if this was a real game. And now people actually want to play the game. They're like, this looks fun. So I actually really like this, I have to say. Yeah. Because to me, this shows slop being used for a purpose that I am just fond of, which is satire, right? This is satire of the fact that over and over throughout the entire entertainment industry, we just see stupid sequel after stupid sequel. You take the dumbest idea imaginable and then you make a third version of it, you know, 10 years after the first one comes out. This is a very dumb idea, but it is being presented in a way that acknowledges
Starting point is 00:55:53 its dumbness. And so for that reason, I'm giving this one a thumbs up, Kevin. Yeah, I like this one, too. And I think there's a sort of inevitable conclusion of this, which is that someone out there will see this and actually make Bird Game 3. Well, then let me ask you this. What bird would you mean in Bird Game 4? I would probably be a Peregrine Falcon because it is notoriously the fastest bird. What about you? I'd have to go with Crested Titman. Now, what else is in the Slop Q? That's it. That was this installment of the Hard Fork Review of Slop.
Starting point is 00:56:28 Casey, what have we learned from these examples of popular slop? Well, we are in this moment, somewhat to my surprise, where by the end of 2025, I think slop is becoming a medium like any other, where there is good slop and there's bad slop. And, you know, in the case of the food recipes, there's slop that makes me absolutely incandescent with rage. Yeah. But, you know, that is true of almost every medium, Kevin. Don't judge slop by its cover, as we're always saying, here on the Hard Fork Review of
Starting point is 00:56:58 Slop. If you were going to sort of give a message to impressionable youths out there who are thinking about making a career in Slop, what would you tell them to make Slop that is good for the world or at least neutral as opposed to bad? I would say if I have one parting message on this installment of the Hard Fork Review of Slop, it would be this. Slop in the name of love. We'll be right back. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
Starting point is 00:58:00 Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. We're fact-checked this week by Will Peischel. And today's show was engineered by Chris Wood. Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. As always, you can email us at hardfork@nytimes.com. Send us your best AI slop.
