Hard Fork - California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop

Episode Date: October 17, 2025

This week, Gov. Gavin Newsom of California signed a big package of A.I. and social media bills into law — including the first state law in the United States to regulate A.I. companions. We talk through what stood out to us in the package and whether these protections will work. Then, the subpoena that has the whole tech world talking. Nathan Calvin, general counsel at Encode, an A.I. safety advocacy group that has been critical of OpenAI, talks with us about why he thinks the company is investigating him. And finally, grab your opera glasses. It's time for the first edition of The Hard Fork Review of Slop.

Guests:
Nathan Calvin, vice president of state affairs and general counsel at Encode

Additional Reading:
California Regulates A.I. Companions
OpenAI Thinks Its Critics Are Funded by Billionaires. Now It's Going After Them.

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Transcript
Starting point is 00:00:00 Casey, I heard the big news this week in tech is that Waymo is going to London. You know, I saw that, and I thought British people are going to hate it. I had a different question, which was, how are they going to teach it to drive on the other side of the road? That's a very good question. Just switch the software? Go on the other side now. Everything you used to do, do it in reverse. That's, like, the autonomous vehicle equivalent of dark mode, when you have to drive on the other side of the road.
Starting point is 00:00:29 You know? It's not available at launch, but eventually they bring it out. Do you think they have to put the steering wheels that don't do anything on the other side of the car, too? Presumably. I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, the first state law to regulate AI companions is here.
Starting point is 00:00:51 Will it be enough? Then, OpenAI is waging legal battles against its critics. And Encode lawyer Nathan Calvin joins us to explain why the company served him with a subpoena. And finally, it's time for the first-ever Hard Fork Review of Slop. Grab your opera glasses, Kevin. Well, Casey, it's been a big week for tech regulation in the state of California. That's right, Kevin.
Starting point is 00:01:23 Everywhere I look, it's bills, bills, bills. I'm like, what is this, a Destiny's Child song? Very topical 90s reference there. Listen, a lot of our listeners are seniors, and they're going to really appreciate that one. So on Monday of this week, Governor Gavin Newsom of California signed into law a bunch of new tech-related bills that had been making their way through the state legislature in California. And we're going to talk about them today. And if you're not a listener of ours who lives in the state of California, you may be asking, why are you devoting an entire segment to tech regulation in California? And Casey, what is our response to that? Well, Kevin, I think you and I both believe that while AI has the
Starting point is 00:02:04 potential to do some good, it's also clearly causing some harm. And right now, the AI companies are operating with very minimal regulations on what they do. And that's just been a growing source of concern. We have talked over the past year on this show about teenagers who have died by suicide after having very tragic interactions with chatbots. And I think there has been a growing cry for some kind of guardrails to be placed around these companies. So that is what we are talking about today, is a state that had some ideas that actually managed to pass the laws and is putting them into practice and will hopefully rein some of these companies in. Yeah. California is a uniquely important state in tech regulation for a couple reasons. One of them is a lot of the companies
Starting point is 00:02:50 are based here, so they care a lot about how California regulates them. Another is that the laws that are passed in California tend to sort of ripple out to the rest of the country and the rest of the world. They tend to become kind of de facto national standards. And especially at this moment, where our federal government is shut down, and even when it's operating doesn't seem to be interested in passing any tech regulations, this is what we have: state-level regulation sort of standing in for the federal regulation that doesn't exist. Yeah. So today, let's talk about some of these bills that got passed and what we think they tell us about what some common sense approaches to regulating AI might look like.
Starting point is 00:03:30 Okay, so let's start with what I think may be the most important bill that has come out of this flurry of legislation, which is SB 243. Casey, what is SB 243 and what does it do? What SB 243 does is it requires developers to identify and address situations where users are expressing thoughts of self-harm. So they have to have a protocol for what they're going to do if they see somebody express these thoughts. They have to share that protocol with California's Department of Public Health, and they have to share statistics about how often they are directing their users to resources. And then, starting in 2027, Kevin, the Department of Public Health has to publish this data.
Starting point is 00:04:13 So that's a little bit longer than I would like to start getting this data. But my hope is that when that begins, we will have a very large and useful set of public health data about the actual effects of chatbots on the population of California. So if you're somebody like me who's really interested slash worried about what it is going to do to our society and our culture once so many people are chatting with these bots every day, this is a really big step toward understanding that. Yeah, I think this is a good one for us to drill down on, because it is a place where I think there is sort of a lot of attention and momentum around regulation. You know, OpenAI has recently rolled out some parental controls. Character AI, which we've also talked about on the show, now has a disclaimer on its chatbots and some additional guardrails for minors. So I think the platforms were starting to kind of comply with these kinds of laws in advance of them actually becoming laws. But this will at least give them some formal requirements.
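To make those formal requirements a bit more concrete, here is a minimal, hypothetical sketch of what an SB 243-style compliance flow could look like. The law says what companies must do (maintain a crisis protocol and report referral statistics), not how to build it, so the detection logic, resource text, and report format below are all invented for illustration:

```python
# Hypothetical sketch of an SB 243-style compliance flow.
# The statute requires a protocol for responding to expressions of
# self-harm and periodic reporting of referral statistics; every
# concrete detail below is invented for illustration.

CRISIS_RESOURCE = (
    "If you're struggling, you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)

# Placeholder detector: a real system would use a trained classifier
# and clinically reviewed protocols, not a keyword list.
SELF_HARM_PHRASES = ("hurt myself", "end my life", "kill myself")

referrals_to_resources = 0  # aggregate statistic for the state report


def looks_like_self_harm(message: str) -> bool:
    lowered = message.lower()
    return any(phrase in lowered for phrase in SELF_HARM_PHRASES)


def generate_normal_reply(message: str) -> str:
    return f"(normal model response to {message!r})"


def respond(message: str) -> str:
    """Apply the crisis protocol before normal chatbot handling."""
    global referrals_to_resources
    if looks_like_self_harm(message):
        referrals_to_resources += 1
        return CRISIS_RESOURCE
    return generate_normal_reply(message)


def report_for_department_of_public_health() -> dict:
    """Aggregate statistics of the kind the Department of Public Health
    would publish starting in 2027; the format here is made up."""
    return {"crisis_referrals": referrals_to_resources}
```

The point of the sketch is just that the statute's requirements map onto a fairly small amount of plumbing; the hard part is making the detection and the protocol actually good, which is exactly what the published statistics should start to illuminate.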
Starting point is 00:05:14 Yeah, and we should mention a couple more of those requirements. In California, chatbots will now have to tell you that their output is AI generated. Of course, you know, our savvy listeners probably already know that. But there may be some people who are chatting with ChatGPT and aren't entirely sure what's going on. This bill does have a few additional protections for minors, including that chatbots cannot produce sexually explicit images for them. And it's going to remind minors to take breaks if they have been chatting with ChatGPT for a really long time. So, interestingly, there was another bill that California legislators passed, which would have, I think, potentially banned chatbot use for minors.
Starting point is 00:05:58 And Gavin Newsom vetoed that. He was like, that's going too far. But this is kind of like one step back. And I do think it adds some meaningful protections. And no longer do we have to rely on the goodwill of an OpenAI or a Character AI to implement these things. Now it's just in the law and it says, you actually have to do this.
Starting point is 00:06:15 Now, does this law apply to all of the AI platforms, or just the, like, really big ones with hundreds of millions of users? So according to a legislative analysis, it will apply to basically any chatbot that can be used as a companion. And initially, I didn't know, like, well, would that include ChatGPT? Most people, I think, don't really think of ChatGPT as a companion. But according to this legislative analysis, yes. Like, and, you know, look, if you're talking to it for three hours a day, it's some kind of a companion to you. Yeah, I think this is a case where the industry kind of understood that something
Starting point is 00:06:48 was going to be done about chatbot companions in the arena of state regulation, and they had this other proposal that they thought was too strict and stringent, and so they kind of accepted the lesser of two evils and sort of begrudgingly got behind this bill that actually did end up being signed into law. That's right. Can I talk about why I think this is important, Kevin? Yes. Okay, so just on Tuesday,
Starting point is 00:07:12 we get this really interesting tweet from OpenAI CEO Sam Altman. Okay? This tweet gets a lot of attention because it says at the end that in December they're going to allow what they call verified adults to start using ChatGPT to generate erotica. Let's set that aside for a second. Here's what Sam says at the beginning of this long tweet. He said, we made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.
Starting point is 00:07:37 We realized this made it less useful slash enjoyable to many users who had no mental health problems, but given the seriousness of the issue, we wanted to get this right. Now that we've been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases. And then he says that what people liked about GPT-4o, which was the model that they got in trouble over, because it was so sycophantic and it encouraged people who were telling it things like, I'm not taking my medication anymore, or I think I'm God. They said they're going to bring whatever people liked about that model back to ChatGPT. How does this connect to the California bill? I'm not sure that OpenAI has mitigated the serious mental health issues that came up this year.
Starting point is 00:08:19 It has been two weeks since they rolled out parental controls in ChatGPT. Do we really have enough data to say this one is under control? I have to say, Kevin, I was actually pretty shocked by this tweet, not for the erotica stuff, but for the GPT-4o stuff and the suggestion that we have a handle on how to guide these chatbots so that they don't hurt people. What did you think when you saw this? I thought, they are really trying to drive up usage numbers. They must be seeing something that is suggesting to them that people were engaging more with
Starting point is 00:08:54 ChatGPT when it was more like a companion, when it was telling people more flattering and sycophantic things. And I suspect that that is part of the reason that they are trying to sort of get that mode back. Now, I think there's actual logic there. I think a lot of users do want something that's going to tell them they're great. But we do not know how they have solved these safety challenges that they have supposedly solved, or these mitigations for mental health issues. It's also like, it's not clear to me that things are just as simple and binary as, like, some users have mental health problems and some don't. And like, for the ones that don't, we're going to give them the kind of, like, unhinged, unfiltered chatbot experience. And for
Starting point is 00:09:38 the ones who do, we'll, like, guide them onto a set of guardrails. Like, these things exist on a spectrum. And it may not be obvious to OpenAI, or even to the users themselves, where or when people are starting to develop mental health issues as a result of talking to these chatbots. I just, I find it so confusing, because on one hand, you know, this is a company that will put up a blog post that says, we do not want to optimize for engagement. We want you to use this thing, have it help you and let you get on with your life. And then they release Sora, the infinite slop feed. And then they say we're bringing back the most sycophantic AI personality in the history of the company because we know that some of you out there might need a
Starting point is 00:10:16 friend. So it really feels like there are two wolves inside of OpenAI right now. And I think it just makes necessary some common sense legislation that starts to put some guardrails around this and signals to these companies that they cannot just do whatever they want. Yeah, I mean, I guess we'll have to see how the AI companies comply with these. I do not think that this law is going to dissuade them from trying to build the chatbot companions because that's obviously a lucrative industry for them, but I hope it does make them pay more attention to things like safety and mental health for especially younger and more vulnerable users. Now, at this point, our listeners are thinking, Kevin, you told us that there were actually
Starting point is 00:10:59 a lot of bills that California passed, and I'm desperate to know what they are. Well, Casey, buckle up. I'm going to run through a few of the other bills here. We won't talk about all of them in that much detail, but we have AB 621, which provides stronger protections against deepfake porn. This law will make it possible for victims of non-consensual deepfake porn to sue platforms that facilitate the creation of that porn for up to $250,000 per violation. Yeah, and this is really important because a trend that we haven't talked much about on the show this year, Kevin, is that there are these really sketchy companies that make what are called nudify apps. They've been advertising themselves all over Facebook and
Starting point is 00:11:42 Instagram somehow, and people have been using them to generate these deepfakes. And now there is a law on the books that says, hey, we can actually come after the companies themselves. So I think that's just obviously a good thing. So next bill I want to talk about is AB 853. This is the California AI Transparency Act. And this caught my attention because it essentially requires that AI companies build into their systems tools to detect whether AI-generated content, images, video, or audio is in fact AI generated. Basically, they have to offer users a way to put in an image and say, hey, did you generate this image, and get a reliable response back. Yeah, here's what this means. If you come to California and you see a video of dogs playing poker, no longer will you
Starting point is 00:12:26 have to wonder, are those dogs really playing poker? There will be a watermark and you will get the answer to your question. That's true. About damn time. Then there was AB 56, which was about warning labels for minor users of social media platforms about the potential mental health risks associated with using the apps. Casey, this one's pretty wild. Yeah, it's certainly very intrusive. Like, the law dictates how much of the screen this warning has to cover, how often it has to appear, which is basically right when the person starts using the app, and then again, after three hours, which is like sort of funny to me. It's like, you can have your three hours, but then we're going to, you know, remind you that you might be cooking your brain.
Starting point is 00:13:08 Well, I should say, like, it's not going to be a small little thing on the screen. The law says that after three hours of use, platforms must display a 30-second non-bypassable warning covering at least 75% of the screen. If you are a 16-year-old and you are scrolling TikTok or Instagram for more than three hours, you are going to get a giant cigarette warning, essentially, on your screen that you cannot skip or get off your screen for 30 seconds, and that is going to happen again after every additional hour after that. So I predict there will be a lot of teens who are finding clever ways around this, because teens do not like to wait for their TikToks. I'm just
Starting point is 00:13:52 curious. Like, will teenagers see this and think, oh, my God, I have to get off TikTok, or will they think, damn, I am such a badass for using this crazy, dangerous app? Because I can see it going that way, too. Yeah. What a rebel. Meet me behind the school. We're watching TikToks. You know, semi-related, there was a study that came out this week. It had a pretty big sample size of 6,000 children who were under 13. Did you see this? No, what is it? So they tracked their use of social media, and they found that the more time per day kids spent on social media, the more that was associated with being bad at reading.
Starting point is 00:14:25 And so my question is, should this warning say, hey, you know, kids, be careful. This app could be bad for your mental health. or should it say, you are actively becoming worse at reading than everyone in your class? I actually think that might be more effective. Yes. Actually, in order to bypass the warning label, you should have to do like a reading comprehension quiz based on a short story by Ernest Hemingway. Just say like, no more TikToks for you. Yes.
Starting point is 00:14:53 Until you can tell me what The Old Man and the Sea was about. I'm still trying to find that one out. All right. Next bill. This is one I wanted to get your take on. This is AB 1043. This is about age verification, a subject we've talked about on this show before. This bill would require that Apple and Google, which make the two most popular mobile operating systems, verify users' ages in their app stores. Casey, explain what this bill does and whether it's a big deal or not. Yeah, so there are a bunch of different approaches to what they call age assurance in the business. And the reason that this one is notable to me is that California actually took the approach that I favor, which is that when someone is setting up a device for their child, the parent inputs the age of the child, and that information is then passed along to the app store and to the developers. And the thing that
Starting point is 00:15:48 is great about that is that it seems like the most privacy protecting of all of the age assurance protocols that we've seen, right? Other states, they want you to potentially, like, upload a driver's license, right? You're providing a lot of really personal data. Some of that is being held by third parties. All that stuff is subject to, you know, data breaches and who knows what else. In California, it's just like, hey, you're the parent, you're the guardian. You tell us how old your kid is, and we will make sure that they don't download an app that they're not supposed to have. Right. So instead of what's happening today, which is that every, you know, app asks you to sort of say how old
Starting point is 00:16:20 you are when you sign up and create an account, and it just kind of works on the honor system, this would essentially force Apple and Google to, when you're getting a new iPhone or a new Android phone and your parents are helping you set it up, they kind of, like, you know, put in your birthday. And as a 16-year-old, it shows, okay, I am a minor. I'm 16 years old. And then your phone will, like, pass that information to every app that is trying to be installed on that phone. Is that more or less correct? Exactly. Now, you said you favored this solution. Are you taking credit for this bill? No, I'm not taking credit. It also wasn't my idea. Like, other smart people, you know, have been talking about this for a while. But I've written about it
Starting point is 00:16:58 in the past. And this is what I said I thought we should see happen. And, you know, every once in a while in a democracy, you get to see something you actually want happen. It's a lovely thing when that happens. Every once in a while, you have a good idea. Yeah, cherish it. Cherish the moment.
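For a sense of how the age-signal flow Casey describes might look in practice, here is a minimal sketch. It is purely illustrative; AB 1043 leaves the actual API design to Apple and Google, and every name and bracket below is invented:

```python
# Hypothetical sketch of the AB 1043 age-signal flow. None of these
# names are real Apple or Google APIs; the bill leaves the actual
# interface to the operating system vendors.

from dataclasses import dataclass
from enum import Enum


class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"


@dataclass(frozen=True)
class AgeSignal:
    bracket: AgeBracket  # a coarse bracket, not a birthdate or an ID scan
    is_minor: bool


def signal_from_device_setup(age_entered_by_parent: int) -> AgeSignal:
    """During setup, a parent enters the child's age once. The OS keeps
    the raw age and only ever shares a coarse, privacy-preserving signal."""
    if age_entered_by_parent < 13:
        bracket = AgeBracket.UNDER_13
    elif age_entered_by_parent < 16:
        bracket = AgeBracket.TEEN_13_15
    elif age_entered_by_parent < 18:
        bracket = AgeBracket.TEEN_16_17
    else:
        bracket = AgeBracket.ADULT
    return AgeSignal(bracket=bracket, is_minor=bracket is not AgeBracket.ADULT)


def on_app_install(signal: AgeSignal) -> None:
    """Each app receives the signal at install time and gates features
    itself: no driver's license upload, no third party holding IDs."""
    if signal.is_minor:
        print(f"Applying minor protections for bracket {signal.bracket.value}")
    else:
        print("Applying adult defaults")


# Example: a parent sets up a phone for their 16-year-old.
on_app_install(signal_from_device_setup(16))
```

The design choice worth noticing is that only the coarse bracket ever leaves the device, which is what makes this the most privacy-protecting of the age assurance approaches described above.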
Starting point is 00:17:09 All right. One more bill we should talk about, because this is the one that has actually gotten most of the attention and a lot of the lobbying dollars. This is SB 53. This was actually signed into law last month. This is the Transparency in Frontier Artificial Intelligence Act. This is the sort of successor bill to SB 1047, which we talked about in the show before. That bill was vetoed by Governor Newsom last year.
Starting point is 00:17:36 This new bill is essentially a watered-down version of that bill. It establishes some basic transparency requirements for the biggest AI companies, what they call large frontier developers. It requires them to publish information about their safety standards and create a new mechanism to report potential critical safety incidents to the California state government. It also establishes some whistleblower protections for people inside the companies who may want to disclose some significant risks posed by their models. And this one did pass and was signed into law. Casey, do you think this is a big deal? I think it's great that we have some transparency requirements. I think it's great that we have
Starting point is 00:18:20 some whistleblower protections. When I think about the things regarding AI development that concern me the most, this bill does not speak to them. But I feel like the main reaction that I read to this bill is a bunch of people saying, yeah, this couldn't hurt. You know, like, that's kind of how this feels. It's like, yeah, it's fine. Right. It feels pretty toothless to me. And it also is basically sort of codifying something that a lot of the companies are already doing anyway. I think of the large frontier developers, all or nearly all of them already publish things that would sort of put them into compliance with this law. So I like the idea of not just relying on voluntary self-regulation, but this seems like a pretty weak bill that was,
Starting point is 00:19:03 you know, weak enough that most of the AI industry didn't feel like it was worth opposing, although there were industry groups that lobbied against it. But I think for the most part, they said, okay, well, this is better than the one we tried to kill last time. Yeah. Okay, so that is a bunch of information about these California state AI laws and social media laws. When we kind of step back and zoom out here, does this give you any thoughts about how AI regulation and tech regulation in general is going? I think in some ways it's going better than I expected, Kevin. We covered the past decade of lawmakers twiddling their thumbs wondering how social media ought to be regulated. It took too long. Some of those efforts have
Starting point is 00:19:46 finally gotten off the ground at the state level, but after a lot of harm was done. In the case of AI, we are earlier in that kind of epoch of tech, but already we've seen California and other states make some pretty decisive moves to build some guardrails, create some transparency requirements, and I think that's a really good thing. We're going to have to see how effective these things are, but I just want to say we need something like this. It is important that this week, OpenAI came out and said, despite everything that has happened this year with their chatbots and mental health, they are going to hit the accelerator on making them more personable, more sexual, and more powerful. That will continue to have reverberations, and we need state lawmakers paying attention to that. We need federal lawmakers paying attention to it, too. Well, to be realistic. I can't talk to you when you're being hysterical. Like, what this makes me feel like is, God, I wish we had a Congress that could do something about this.
Starting point is 00:20:51 Like, I really am sympathetic to the AI companies on this one point. I do not think that state-level regulation is the best way to do this. I do not think it is good or efficient to have 50 individual states all kind of coming up with their own bills and trying to pass them, and then have the AI companies have to, like, look at all 50 states and decide how they're going to build systems that comply with all of those. That does not feel like a good solution to me. But for that to not be the default path here, we are actually going to need Congress to step in and do something at the federal level. And right now, our government is shut down, so I don't have high hopes. But I think that in the absence of Congress getting its act together and deciding to do something federally,
Starting point is 00:21:33 what we're going to end up with is a bunch of states doing what California has done here and just trying their best to get some rules on the books while they can. Yeah, I agree with that. I would add that Senator Josh Hawley is currently circulating a draft bill that would ban AI companions for minors. Who knows how far that will make it through. But I do think that there are a significant number of members of Congress who would like to see something like this happen. The question, of course, as ever is whether they can get something across the finish line. Yeah. All right, Casey, that is what's happening in California.
Starting point is 00:22:11 When we come back, we'll talk about how this legislative fight got personal for one AI lawyer. Well, Casey, there's another big story involving the law and AI this week that we wanted to chat about. And it involves this behind-the-scenes beef that has been going on between OpenAI and some of its biggest critics. Yeah, so there are really two big legal battles that this story is at the intersection of. One is the battle over OpenAI trying to convert itself into a for-profit enterprise. Right now, OpenAI is famously a nonprofit. This has created many issues for the company over the past several years. They want to be sort of a more normal money-making enterprise. And this is opposed by lots of people. Some of the people that oppose it are OpenAI's direct competitors, including
Starting point is 00:23:27 Elon Musk and Mark Zuckerberg. And OpenAI has been pretty aggressive in going after groups that they believe might be connected to those two. The second battle is about SB 53, the bill that we just talked about. It was just signed into law by California Governor Gavin Newsom, and it established some basic transparency requirements and whistleblower protections for people who work at AI labs. There were a lot of groups that lobbied both for and against this one, and Encode was one of the groups that lobbied for it. And so those are the kind of two big legal battles that were happening next to each other, but today's story, Kevin, takes place right in between both of them. Yes. So our guest today, Nathan Calvin, is the vice president of state affairs and general counsel at Encode. They are a small AI policy nonprofit. They were started several years ago by a high school student, fun fact. What were you doing in high school? I wasn't starting AI safety nonprofits. Debate team. Crushing it. Anyway.
Starting point is 00:24:27 Anyway, they have become one of these groups that is submitting briefs and lobbying lawmakers on a lot of these AI-related bills and efforts. They have also been very vocally opposed to the restructuring of OpenAI as a for-profit. And what seemed to happen here, in Nathan's telling, was that one night, as this legislative process was ongoing, a sheriff's deputy showed up at his house and delivered a subpoena from OpenAI, demanding that he produce all kinds of personal communications, including anything related to not just the restructuring,
Starting point is 00:24:58 but SB 53, this bill that they had been advocating on behalf of. Yeah, so this surprised folks, because they do still identify as this kind of mission-driven company that's trying to create AI to benefit all of humanity. I think it's generally understood that during these legal battles, there are going to be people who lobby for and against, and that's just part of the process. But now one of those people who was doing the lobbying
Starting point is 00:25:22 on behalf of his nonprofit finds himself with a legal battle of his own. And that got a lot of folks talking, including some people who worked at OpenAI, who criticized their own employer for its behavior. So that seemed like something that it would be worth understanding more about, Kevin. Yes, this has been a subject
Starting point is 00:25:49 of hot debate and conversation within OpenAI, as well as around the broader AI industry. And we wanted to talk to Nathan about his experience. But before we do that, since this is, after all, a story involving AI and legal battles, I should note that my employer, the New York Times, is engaged in its own legal battle. They are suing OpenAI and Microsoft over alleged copyright violations. And my boyfriend works at Anthropic, but so far we've managed to avoid any legal battles. So counting our blessings on that one. Well, check the mail when you get home. No. All right, let's bring in Nathan Calvin. Nathan Calvin, welcome to Hard Fork. Pleasure to be here.
Starting point is 00:26:22 You know, many-time listener, first-time caller. I think I said that wrong. But anyway, I'm very glad to be here. So just to set the scene for our listeners, you're in Washington, D.C. It's a Tuesday night. I'm assuming it was, you know, a normal weekday. You and your wife were sitting down to dinner.
Starting point is 00:26:42 And then you got a knock on your door. Tell the story from there. So when I opened the door, there was a sheriff's deputy who was there to serve me a subpoena from OpenAI, asking for different communications and documents related to a piece of AI safety legislation I was working on, as well as about our criticism of OpenAI's restructuring to a for-profit. One thing I will say just in terms of the timeline, because it's come up in some of the back and forth, is that on the Saturday previously, I had gotten a call while I was visiting my mom and nephew saying that someone was trying to get into my apartment to serve me papers. And I said, I'm not there right now. Anyway, they finally did come on Tuesday. And so when Jason Kwan, the chief strategy officer at OpenAI, I think
Starting point is 00:27:40 in his comment, said something about, you know, I should have known this was coming. I did know they were trying to serve me, but I didn't know about any of the details. And I didn't know they would be coming that exact night. Now, I want to get to all of that. But first, you know, I have not been served with a subpoena. Casey's been arrested many times. So he's familiar with how these... But never convicted. He's familiar with how these things go down. Like, are they literally handing you a packet of paper, like in the movies? Or what does it look like to be served with a subpoena from OpenAI? Yeah, it is just a stack of papers.
Starting point is 00:28:13 I did not know, again, I am a lawyer, but I didn't know that sheriff's deputies are the ones who at least some of the time serve subpoenas in D.C. I later learned that that's not incredibly unusual, but it certainly was, you know, surprising from my perspective. I don't know. To be clear, the guy was perfectly nice. I don't know, just to some degree, like, after I had heard... on Saturday that someone was trying to serve me papers by the time it kind of actually happened
Starting point is 00:28:36 on Saturday that someone was trying to serve me papers, by the time it kind of actually happened and they were at my door, there was a little bit of, like, okay, now I can figure out what is actually happening. And, um, honestly, the days between hearing that it was coming and it actually happening were some of the most stressful. It's like, okay, now I can figure out at least what we're dealing with and, uh, you know, how to respond. I mean, when you got into AI advocacy, was this on your radar as something that would be likely to happen, that people would be saying, like, okay, you've got to, like, show us all the emails you've been sending about this? No. I mean, I don't know. Like, I do feel like, I don't know, my mom worked for the American Academy of Pediatrics for 25 years and was involved in litigation
Starting point is 00:29:18 against tobacco companies. And they came and, you know, took all of her papers out of her office at some point. And, you know, she had told me, you know, never write any emails you're not, you know, comfortable with having read back to you later, or something. And so, you know, like, you're actually, like, way better prepared for this than the average person. Yeah, I think that's fair. Yes, indeed, indeed. Did you understand immediately, like, why OpenAI was subpoenaing you? What was your sort of initial response when you actually started reading these papers and
Starting point is 00:29:47 understanding what they were after? Yeah. I mean, in some ways, there had been a little bit of a preceding escalation before I received the subpoena. You know, we were doing, you know, lots of advocacy and public communications and writing things to the attorneys general about this issue. And I, you know, was getting some sense that this was getting on their nerves. You know, I will say that, like, when they asked, there's part of me that's still thinking, like, okay, maybe this is just a good faith question. And
Starting point is 00:30:25 they're trying to figure out, like, maybe, you know, we are secretly funded and controlled by Musk or Meta or something. When I was reading through the subpoena, though, and I got to the part where it said all of your communications about SB 53, a bill we were working on, then I started to think, this doesn't really feel like they are just asking good faith questions. Again, I don't know for a fact what's in their head, and I can't say it, but my impression of it was not that. What you're saying is, it would sort of make sense to me if, for whatever
Starting point is 00:30:59 trying to block our for-profit conversion? But when they came to you and said, give us all the emails you've been sending about this bill that you're working on, that just kind of felt sort of out of scope. Yes, it did. And one other thing I will add as well is, again, I was expecting maybe that a subpoena would come, but like when I had talked to a previous organization, like, you know, the other orgs I was aware of that had been subpoenaed, it had been to their organization and just like their Delaware registered agent, and you just, like, get an email that, you know, your Delaware registered agent got a subpoena.
Starting point is 00:31:29 Like, it wasn't people coming to their, you know, fifth-story apartment building at 7 p.m. or whatever. And so that was another aspect that did just feel kind of eyebrow-raising. And so I just think it's, it just really does leave a bad taste in my mouth. Right. I mean, there's one explanation that is like the sort of uncharitable explanation, which is that Open AI is trying to sort of bully and intimidate any sort of nonprofits that are critical of its restructuring plan. There's another explanation, which I want to get your take on, which is that these are fair questions to ask.
Starting point is 00:31:59 We don't have a lot of transparency. We have a lot of dark money sort of flooding into fights about tech regulation these days, and it's worth asking questions about who is behind those efforts. And I guess we should just sort of dispense with the central claim here. Nathan, let me just ask you straight up, are you or Encode working with or being funded by either Elon Musk
Starting point is 00:32:48 or Mark Zuckerberg themselves, or people or entities associated with them or their companies? So we are not funded by Elon Musk or Mark Zuckerberg. If you go on our website, it says that we have received funding from the Future of Life Institute, which was one that was mentioned in their subpoena. The Future of Life Institute several years ago got a donation from Musk, but they are not Musk. And we said this in our communications back with them. Like, I have never talked to Musk. Like, Musk is not directing our activities. It's false. I don't know, we submitted a, um, uh... asking the FTC to open an investigation into xAI and spicy Grok and their things, and I will happily say on air that I think that, like, xAI's safety practices are in many cases far, far worse than OpenAI's. So, again, it's just, like, that central claim is false. What about Mark Zuckerberg? Any relationship with him or not? None. Zero. And again, like,
Starting point is 00:33:28 I think our partners who work on the issue, like, I don't know, we are an organization that focuses on AI safety and kid safety issues. Like, we are just constantly at war with Meta. The idea that Meta is backing us is just, it feels, again, I realize not everyone has the context and knows who we are, but it's just, like, completely laughable. You do have a list of donors or funders on Encode's website. It says, we're generously supported by, and then you list a bunch of organizations, including the Omidyar Network, the Archewell Foundation, which is Harry and Meghan's foundation, the Survival and Flourishing Fund, which is a kind of effective altruism-linked philanthropy,
Starting point is 00:34:04 funded primarily by Jaan Tallinn. So you do provide some transparency about who your funders are. Why do you think that wasn't enough for OpenAI? Why do you think they still had questions about Elon Musk or Mark Zuckerberg? Yeah. I think to some degree you'll have to ask them. I'm not, I'm not sure. I mean, again, there's also one thing to say here, like, there is no general right for them to know about all of our funders. And again, like, the subpoena did not ask about the Omidyar Network, because the Omidyar Network is not relevant to their litigation in any way. Like, the role of a subpoena is to get relevant information for the litigation you are engaged in, not to just, like, ask whatever questions you would like the answers to from other private organizations. Like, you know, we would love to send a subpoena to OpenAI and be like, tell us all the details of what you're planning to do in the restructuring. And like, are you going to disempower the nonprofit in the ways it perceives, whatever? But like, we don't have
Starting point is 00:34:59 a right to do that. Like, that's not a question we can just ask them, even though we might like to. And so what we did is, you know, we put out, like, a public letter asking them a bunch of questions. Like, OpenAI can go to the press and say, you know, we want transparency about these things. Again, they do have the right to ask us about Elon, because they are in litigation about this. And again, I think if they had just reached out to us at our corporate address and said, are you funded or directed by Elon, and, you know, we explained no and proved to them no, and then they moved on, like, I would understand that. And I think that that is a fair thing, in that Elon is attacking them and trying to destroy them, and they want to make sure that there are efforts that are not covertly being supported and directed by him. But I just, like, can't emphasize enough how far away what actually happened was from that narrow question that they were entitled to ask. So, I mean, as you reflect on this experience, do you feel like this was intimidation? Do you think that OpenAI is trying to penalize organizations for speaking up either against
Starting point is 00:36:03 the for-profit conversion or for AI regulation? Yeah, I mean, to some extent, it's a question of intent, and I don't know what's inside their heads. And so I want to be careful about that, but I believe that that is what they were doing. That is my best guess. And that was how I received it. And I would like there to be another explanation for this. You know, I thought it was possible when I put this out that maybe they would say, you know, hey, this was a misstep. Our lawyers went a bit far. We didn't really actually mean to add the thing about SB 53. Like, that's not what they said. They, like, doubled down and said that, you know, we think we are entitled to
Starting point is 00:36:45 this. And I think that that just is very important to note. And I will just say another thing that I don't think we've mentioned is that, you know, even some folks within OpenAI, for instance, Joshua Achiam, who was speaking in his personal capacity, put out a fairly long thread talking about the fact that what I was describing in my thread, you know, doesn't look great. Yeah, but that was the unofficial response from someone at the company who was sort of breaking from the company itself. We've also seen Jason Kwan, as you mentioned, the chief strategy officer at OpenAI.
Starting point is 00:37:20 He wrote a lengthy thread, arguing that you and Encode were sort of only giving part of the picture, that Encode doesn't disclose their funding, and that this is not about SB 53. Jason said, quote, we did not oppose SB 53, and they said that basically this was sort of a tempest in a teapot. There was also a quote that a lawyer for OpenAI, Ann O'Leary, gave to the SF Standard, saying, we welcome a legitimate debate about AI policy, but it is essential to understand when nonprofit advocacy is simply a front for a competitive commercial interest. What do you make of the official OpenAI response to your claims?
Starting point is 00:38:04 So one thing is, you know, I think Jason focuses on the fact that we became involved with the lawsuit between Elon and OpenAI by filing an amicus brief, arguing that it was in the public interest for OpenAI to remain a nonprofit. Geoffrey Hinton also made some positive comments about our amicus and showed support for our arguments. He's also someone who, by the way, has called for Elon Musk to lose his status with the Royal Society, and is really not a fan of Musk, if you want another example of how not everyone who was critical of OpenAI's restructuring is a Musk fan. Yeah, I mean, also on the point of the
Starting point is 00:38:47 did not oppose SB 53, it is true that they never put out something saying that they've formally opposed it, but their global affairs head, Chris Lehane, did send a letter to Governor Newsom, at a time when SB 53 was in, you know, pretty heated discussion, saying that he believes the correct path for California is to have an exemption from its AI frameworks for any company that signs on to an agreement with the federal government for testing, or that says that they will be adhering to the EU AI code of practice, which in practice means a complete exemption from the California law. So, I mean, you can say that advocating for you and a bunch of your fellow companies to be completely exempted is not the same as opposing
Starting point is 00:39:31 it. You know, like, you can ask a linguist whether that's fair. But, you know, I think it still is important context that he did not discuss. What now? Are you going to send OpenAI the information that they're asking for? Are you planning to do any more transparency around your funding or your advocacy efforts? What's the next shoe here? So we sent them our objections and responses, where we laid out, in the four areas that were relevant,
Starting point is 00:40:12 like, for instance, our, you know, communications with or funding received from Elon, saying that those didn't exist, and saying, for the other pieces of information, that they were not relevant. They never responded to that. They could have filed a, you know, motion to compel, saying to the judge that we have to turn them over, but they didn't do that. My view, again, I don't know this for sure, is that they didn't do that because they realized a judge would not grant that motion, because they were not, in fact, relevant. I think there are fair discussions about transparency. I mean, I think there's fair things if, you know, some of our donors want to be private.
Starting point is 00:40:41 And when you're donating to C4s, you have the right to give money privately. We have listed on our site a lot of our donors. And I think, you know, I think you get a clear impression of the different types of motivations that people have who are funding us. But I think this kind of, like, larger discussion about what the appropriate transparency is for folks involved in the advocacy process is very different from, like... I don't think that's, like, what OpenAI cares about here or why they're asking about this. And even in the subpoena, which was an overreach in many ways, like, they don't talk about, you know, the Omidyar Network, which again is listed on our website as a funder. We're not hiding that fact, because it's not relevant to their litigation with Musk. But you said there are donors that you don't list on your website who want to remain private. Would you like to tell us who they are or how much they're giving you? Not, not, not, not here. Okay. Just checking.
Starting point is 00:41:35 I have to ask. Fair, fair. I mean, I think the, they're not, they're not Musk or Zuckerberg. They're, yeah, they're not Musk or Zuckerberg. We don't take money from frontier AI companies. Yeah, I don't know. I will say that. Yeah, and I think it's a reasonable thing to advocate for that all of these groups should be required
Starting point is 00:41:53 to disclose much more about who funds them. But I think that should apply equally to organizations that are pushing for the other side of things here. I think all of the... Yeah, I think that's fair. Yeah. I think that's a fair discussion to have. I'm just not sure OpenAI is, like, the one to make that argument. So as you look back on this episode, how has it changed the way that you think about OpenAI? I genuinely have a lot of positive feelings about OpenAI and think that they do many things genuinely better than their peers, for instance, like Meta or xAI. And I think that, for instance, some of their safety research and system cards are things that they have even improved
Starting point is 00:42:31 on in recent months and have done a genuinely good job of. And I think that there is some feeling among some people at OpenAI that they get disproportionate criticism relative to their peers. And I think that there is, you know, some truth in that. One thing I will say is, like, I don't know, if one of their peers had been the one to show up at my house and give me a subpoena, I would have said something about that too, but it was OpenAI. That was the one that did it. And also, I think there's some aspect that OpenAI is a nonprofit. And they are a nonprofit that has a mission to ensure that AGI benefits all of humanity. And, you know, they are in the process of trying to weaken and get around that legal mission and be able to
Starting point is 00:43:13 consider profit more in their decisions. And I think this episode, and also things like, you know, the discussion about whether to allow, you know, not-safe-for-work, you know, porn or whatever on ChatGPT, or to, you know, release Sora 2 in the way they released it, and, you know, their kid safety practices and all sorts of these other things, show that, like, they are not a normal for-profit company. They are, at least for now, a nonprofit that is dedicated to this mission above profit. And I do think that means that they should be held to a higher standard. Yeah, I mean, I'll just say, like, it's not like Elon Musk is the only person who opposes
Starting point is 00:43:47 this restructuring plan. Like, the whole AI safety, you know, community has been up in arms about this for years now. It's very unpopular. Yes. Yeah. I am just curious what you make of kind of the difference between, you know, Joshua's statement and Jason's statement, and kind of some of this, like, continued evolution and pressure you have with OpenAI kind of transitioning from more of a research
Starting point is 00:44:15 organization focused on some of these loftier ideals to trying to move to the next stage of what it wants to do. I mean, I think it just speaks to a very real tension within the company, which is that there are a lot of people there who believe in the stated mission, who want to create this very beneficial AI. And then you also have a lot of people who come from other giant tech companies who see this primarily as a competition about winning and being first and making the most money. And people who come from those kinds of companies are not above, you know, waging lawfare to get what they want. So I'll be curious to see kind of how that shakes out in the coming months. It does seem like it's
Starting point is 00:45:01 that second group, the kind of big-company group, that is currently steering the company, and I wonder if that's going to continue. But I will say, in addition to that, and I think that's right, your story, Nathan, has caused more consternation and soul-searching among people at OpenAI than I think anything since the Daniel Kokotajlo story, about these non-disparagement agreements that they were forcing people to sign or else they would claw back their vested equity in the company. That was a big deal to people at OpenAI, and this is a big deal to people at OpenAI. I've been talking to people.
Starting point is 00:45:38 It's not just Josh who is saying this stuff. I think there's a lot of soul-searching going on inside the company about this question of, are we still the good guys? Are we transitioning to something we no longer support? And so I think there's going to be some internal qualms about this, and probably other stories to come, but most of them probably won't break out into the open the way this has. Nathan, thank you so much for coming on and explaining all this to us.
Starting point is 00:46:09 Thanks, Nathan. Thank you. I just wanted to note that we reached out to OpenAI after this interview asking about this question of intimidation, and they responded with a statement from Jason Kwan, reiterating that, quote, Elon has opposed a restructure for obvious competitive reasons and Encode joined in,
Starting point is 00:46:31 organizations that suddenly emerge or shift priorities to join Elon raise legitimate questions about coordination and funding, which the subpoena seeks to clarify. Our questions have still not been answered, and we still don't transparently know who is funding these organizations. When we come back,
Starting point is 00:46:46 an old woman falls off a very high shelf. Is it real or is it fake? No, it's the Hard Fork Review of Slop. Well, Casey, over the last few weeks on our show, we've been talking a lot about slop. We have. And it seems like the more we talk about it, the more of it appears all over the internet. Yes, it is taking over the internet. And for that reason, we thought we should introduce a new segment that we are calling the Hard Fork Review of Slop. The Hard Fork Review of Slop.
Starting point is 00:47:33 Oh, my God. That's so perfect. That's beautiful. You know, this, I would say, is generally a STEM podcast. We care a lot about science and technology and engineering, not as much math. But we also care about the arts. And so we thought, why don't we carve out some time on the show to talk about some of the new achievements in AI art that we're seeing out there on the internet? And also sort of bring our critical eye to them,
Starting point is 00:48:00 Yes, we have critics out there for books and movies and music and video games. And I think slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good. And so we need to stand here amid the floodgates, sort of filtering out the bad slop and letting the good slop get through. Okay, I just want to signal up front. I didn't actually bring anything good. I didn't know that that was part of the assignment. I have one good one, but we'll save it for the end.
Starting point is 00:48:29 All right, fair enough. So, Casey, tell me about the slop that you have been looking at, and then I will tell you about some slop that I've found. That's great. Well, Kevin, maybe to just kind of warm us up, we can look at some of the slop that I think of as CoComelon for adults, just kind of pure visual stimulation, no ideas in it whatsoever. And this kind of slop, you can find on TikTok if you search for glass fruit cutting. Have you seen any of the glass fruit cutting?
Starting point is 00:48:57 No. Okay, let's see if we can cue one of these up. Some of these are in the sort of like the ASMR realm. Ooh. This man is cutting into a coconut with a knife, but the... The coconut is glass. Oh, the kiwi is glass. Oh, I don't like that sound.
Starting point is 00:49:19 That has like nails on a chalkboard vibes for me. This is what's replacing reading in our schools. I mean, literally. Now, what I liked about this one, Kevin, is it's glass pancakes. Ooh. With a beautiful maple syrup. You really hate that sound, that's the jump scare sound for me. Oh, we got a donut, a glass donut.
Starting point is 00:49:39 We'd love to see a glass donut. Oh, just cutting through the glass bowl of cereal there. The physics here are actually kind of impressive. Right? Like, it is showing the reflections of the knife in the glass. It looks vaguely realistic. Yeah. It's like, it's weird because, like, the food looks delicious, beautiful even, but it's
Starting point is 00:50:03 glass, so it's off-putting. Like, it just sort of, it doesn't make any sense. And so it hypnotizes your brain into this sense of, I don't know what I'm watching. I don't want to look away. Yes. And I'm going to stop thinking words. Yes. It's sort of like the spiritual successor to those like crush videos where they'll just have like
Starting point is 00:50:21 the hydraulic press. Yes. And they'll just, like, press down on, like, seven objects. Yeah. And now, instead of just wasting those objects, we can waste water and electricity. All right. What do you have?
Starting point is 00:50:35 So I have an example from the news. Actually, this one comes to us from DirecTV, which has just struck a partnership with an AI company called Glance that will allow people with DirecTV Gemini devices to put themselves inside of 30-second AI-generated videos. Basically, if you step away from your TV to go get a snack or go to the bathroom, you might come back and find that you are in the ad on the TV. And Casey, let's watch an example of this. So this kind of shows how it works. You connect it to your TV. You put in your photo, tap a couple buttons, and it generates your look. This process is already so absurd. And then boom.
Starting point is 00:51:20 There you are in a blazer. Now, Casey, I thought the point of advertisements was to show clothes on people who are more attractive than me to entice me to buy them. Why would I want to see clothes ads with me in them? I can't answer that question, honestly. This one is so funny to me because the process you have to go through to do this is so complicated. I basically cannot imagine a single person doing this. First of all, you already have a TV that is, like, working against you, right? Like, the way that this works is that if you have one of these TVs and you leave it idle for 10 minutes, AI takes over, which is like, it's kind of like, you know, if the bus goes below 55 miles an hour, it explodes.
Starting point is 00:52:08 Yes. This is that, but for, like, AI advertising. And then after it shows you these images, then it's up to you to go scan a QR code and take a photo. Like, no one who is watching TV wants to do any of this at all. So, um, it's a very silly process. And, you know, I mean, in the demonstration, the photos look fine. They look fine, yeah. But let me ask you this: do you not know what you look like with a jacket on? You know what you look like with a jacket on. What? Let me just say this: what are we doing here? That is my review of this. What are we doing here? We are selling advertising technology.
Starting point is 00:52:47 Okay. So now I just want to show one that made me laugh. I call this one Women on the Walmart shelf, if we want to cue this one up. I saw this one on TikTok, although it does have the SORA watermark on it. And I think this speaks to the ability of AI slop to just kind of create like a classic pratfall physical comedy situation. This one involves what looks like store security cam footage of an older woman on a very high shelf inside a Walmart. And there's a police officer who's looking up at her as our story begins. Ma'am, please come down from there. You want me to come down? Yes, ma'am. And she kind of does a header off the shelf and crashes into the police officer. So, Kevin, what did that one make you feel? There's a lot there.
Starting point is 00:53:39 There's a lot of layers to this onion. Is this a one-off or is there a larger genre of... older people falling off the top shelf at the grocery store onto a police officer. It's a whole interconnected cinematic universe with sort of, you know, these very rich sort of characterizations. The vocal performances are really amazing. So I encourage you to get into it. Beautiful stuff. Now, this is a one-off, Kevin. I've never seen anything else related to it. Yeah, I'm not worried that people are going to start throwing themselves off the shelves of grocery stores to sort of mimic the trend here. This one feels pretty harmless to me. And I appreciate
Starting point is 00:54:17 that it's inspiring older people to do things like climbing up to the top shelf of the grocery store. Anytime I see a SORA video that isn't, like, misappropriating the likeness of Martin Luther King Jr., I say that's a win for slop. Yes, this one, I think, pretty harmless. All right, what else you got?
Starting point is 00:54:33 Well, this next one, Casey, was not harmless, because it involved America's Queen, Dolly Parton. Oh, no. Leave her alone. Basically, some sicko out there has been generating AI images of Dolly Parton looking very sick, including at least one image of Reba McEntire
Starting point is 00:54:52 visiting Dolly Parton on her deathbed, which went around on the internet and led to a bunch of rumors that Dolly Parton, God forbid, was dying. Oh, no. See, I hate this. Yeah, I don't like this either. Let's watch Reba's video summarizing the whole thing. You tell them, Dolly, that AI has got us doing all kinds of crazy things. You're out there dying? I'm out
Starting point is 00:55:13 here having a baby? Well, both of us know you're too young and I'm too old for any of that kind of nonsense. But you better know I'm praying for you. I love you with all my heart, and I can't wait to see you soon. Love you. Wait, just to be clear, what you showed me was real, not slop. That is Reba McEntire's actual Instagram account? That is Reba McEntire's actual Instagram account. She does show some of the slop images of Reba at Dolly's deathbed inside the video, and Dolly responds with another real video from her real social media account saying, quote, I ain't dead yet.
Starting point is 00:55:51 So, Casey, what do you make of this one? I mean, this is so bad, you know? Like, so many of the fears around misinformation have been that there will just come a time when you can't tell what is true and what is false. And the better that image generation software gets, the more of these little viral hoaxes we're going to see going around. So this is super bad. I'm truly trying to imagine, like, what kind of person do you have to be to be like, today is the day that I create a rumor that Dolly Parton has died, and I'm going to, like, use SORA to prove it.
Starting point is 00:56:23 Truly. It is, like, mind-boggling to me. If you wanted to turn the public against AI and against AI-generated content, the most effective thing you could do would be to go after Dolly Parton, who everyone, literally everyone loves. No, I hope Jolene finds whoever did this and does a number on him. Let's just say Dolly Parton's lawyers are going to be working more than nine to five.
Starting point is 00:56:48 This next one is sort of a narrated journey. We are returning to Walmart for this one. And this creator is very interested in the use of AI to create, like, art on products. You know, I've seen some that are, like, at a craft store, and there's, like, framed pictures of what has clearly been AI generated.
Starting point is 00:57:12 In this case, she picks up some butter cookies at Walmart and makes a pretty convincing case that it is slop art. And I enjoyed this journey. Let's see how it looks here. This is bad. I didn't think it could get worse, but you guys were right. The butter cookie tins at Walmart are way worse than the popcorn tins. Because why is Santa throwing ass? Why is he squatting on a table?
Starting point is 00:57:36 Why does he look like he's about to twerk? What is his hand doing? What is that? Do you see? Santa has the fattest ass in this. Look how wonky that is. And what is this wall full of random things? Like, can you make out what any of that is actually supposed to be?
Starting point is 00:57:51 It looks like there's cobwebs on the roof, whether that's intentional or not. It's just, like, random shapes on the wall. All right, we could probably stop it there. I have to say, this video made me feel very naive, because I did not realize that there were, like, mass-produced products in, like, Walmart stores that are AI slop. Oh, yeah. And I also love that there are now, like, slop detectives who are just going out there, vigilante style, and, like, investigating the slop on the shelves of their local Walmart. That's beautiful to me. We need more citizen participation. Honestly, it could be a segment for our show, you know, Slop Investigations. Wait, let me ask a question. Yeah.
Starting point is 00:58:31 If you're shopping and you pick up an object and you see that, you know, there's slop art, does that affect the way that you want to buy it or not buy it, one way or the other? No. Okay. I mean, I think there's, like, a whole, like, category of art that basically doesn't matter, which is, like, the stuff on the cookie tin, right? The stuff at Walmart. No one is winning any prizes for that. No one is reaching any new heights of creativity. Basically, this is just a way for the butter cookie manufacturer to save a couple bucks and not have to hire an illustrator or use some stock art from the internet. And do you think they're passing the savings on to us, the customers? Probably not. Probably not. That's probably going right to their bottom line. That's unfortunate. Yes. What about you? Would you be less likely to buy something if slop had been used in its advertising?
Starting point is 00:59:18 I mean, maybe, you know, because I think it speaks to a kind of cheapness and a lack of care. And so if I were buying like a heart defibrillator and I saw that there was slop art on the box, I would say, I don't know if I could trust these people. What about butter cookies from Walmart? Are you going for quality when you're buying butter cookies from Walmart? I want brown butter if it was going to be butter cookies. Butter is a great flavor
Starting point is 00:59:44 but it needs something else. You know what I mean? Okay. So for Casey, only the artisanal images of Santa with a huge ass. Small-batch, huge-ass Santa butter cookies, please. Okay. One more example of slop that I
Starting point is 01:00:00 want to tell you about today, Casey, and get your opinions on. This one is what I would consider good slop. This is slop that is being made in service of a noble cause, which is preventing the AI apocalypse. Okay, so you might think to yourself, how could this happen? How could AI slop be used to ward off the AI apocalypse? I was just about to ask you that.
Starting point is 01:00:21 Well, this is a company called Hyperstition. It was founded by Andrew Cote and Aaron Silverbook. And basically, this is a company that is trying to counteract all of the sci-fi stories and narratives out there about AI going rogue and killing people, which, the hypothesis goes, make their way into the training data for these AI systems and actually make them more likely to go rogue, in this theory. It gives them ideas. It gives them some ideas. And so Andrew Cote said, what if we combated this by writing a bunch of AI-generated novels about AIs and humans getting along really well, and then we fed that into the training data for the AI systems to
Starting point is 01:01:03 kind of give them some more good examples to follow. All right. Kind of a convoluted explanation, but sure, why not? So this company has just gotten a grant. I read about this on Astral Codex Ten. They just got a grant to create 5,000 AI-generated novels, and they're trying to have these novels be sort of 80,000 words, and they're going to enlist the public's help to help generate these,
Starting point is 01:01:28 and you can buy credits, about $4 a book, to generate these. And then they're going to try to feed these into the language models and get the models to think about maybe-good scenarios and maybe be more likely to act on that. Wait, why does the public get involved if the works are all AI generated? I think they want it to reflect a diverse set of, you know, sort of scenarios and characters. Basically, they want people to sort of get involved in this and make it as diverse as possible. All right. Well, do we have any examples we can see? No. Great. So what do you make of this attempt to use slop for the benefit and potentially the salvation of humanity?
Starting point is 01:02:12 Here's what I'm going to say. If it turns out that the thing that is needed to prevent human extinction from AI is a massive infusion of slop into the training data, I'll be very surprised. I'll be very surprised if that was the difference maker. I share your skepticism. I think the default outcome from this project is that it probably doesn't save us from the AI apocalypse. I think a funny secondary effect would be if one of these, like, 5,000 slop novels goes on to become a huge bestseller and, like, becomes the literary craze that takes over the country. Do I think that's likely?
Starting point is 01:02:49 No, but it could happen. Well, as we mentioned earlier in the show, it doesn't seem like people are reading all that much these days. But, you know, maybe all of this will eventually be fed into a NotebookLM video presentation that folks can watch. Yes. All right, that is it for the Hard Fork Review of Slop, and we welcome your submissions for future installments. If you spot some slop that is worthy of cultural interrogation by some of our nation's foremost slop critics, please send it over to us at hardfork@nytimes.com, along with a brief explanation of the effect it had on you, how it moved you. Yeah, we want to, like, it can't just be like, look at this weird thing. Like, I want to see slop that made you feel something.
Starting point is 01:03:30 Yeah, and the next time you see a Santa with a suspiciously large posterior, call us, email us. We want to know about it, we want to see it, and we want to salute it. Someone's going to end up with a folder on his MacBook
Starting point is 01:03:43 that's just photos and images of Santa with a very large ass. I love a thick Santa, and I salute them. Sir, see you on Christmas, big guy. See you on Christmas, big guy.
Starting point is 01:04:03 The Hard Fork Review of Slop. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant. This episode was fact-checked by Will Peischel. Today's show was engineered by Chris Wood. Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube, along with that slop, at youtube.com
slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with the slop that made you stop. Thank you.
