Hard Fork - Meta Goes MAGA Mode + A Big Month in A.I. + HatGPT

Episode Date: January 10, 2025

This week, Meta announced a series of content moderation changes that will transform the way the social media company’s platforms deal with misinformation and hate speech. We break down what these changes will mean for users and why the company seems to be caving to the right’s arguments on censorship. Then, we’ll explain why 2025 is already shaping up to be a huge year in A.I. — with models like OpenAI’s o3, Google’s Gemini 2.0 and DeepSeek, from China, stirring discussion that superintelligence is near. And finally, we play a round of HatGPT.

Additional Reading:
Meta Says It Will End Its Fact-Checking Program on Social Media Posts
OpenAI Unveils New A.I. That Can ‘Reason’ Through Math and Science Problems
Netflix’s WWE Investment and the Future of Live Events on the Platform
MegaLag’s Video Investigation Into Honey
Los Angeles Man Is Trapped in Circling Waymo on Way to Airport: ‘Is Somebody Playing a Joke?’

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript
Starting point is 00:00:00 Casey, we're back. We're back in the studio, Kevin. So the dirty secret is that we recorded our predictions episode that ran last week in 2024, before we left for the holiday break. We are just now coming back from a multi-week break. How are you doing? How was your break? I'm doing great.
Starting point is 00:00:17 We recorded that episode so long ago that when I listened to it, all the predictions were fresh to me. I was so excited to hear what we were going to say. But I'm doing good. I had a really nice break and of course I'm excited to be back. But what about you, Kevin? Well, I had kind of a disaster happen to me over this break, which was that I got robbed on Christmas.
Starting point is 00:00:33 Oh, wait, wait, wait. Was it the Grinch? You know, the citizens of Whoville are still looking for the suspect. Oh, no, who robbed you? How'd you get robbed? Well, I wasn't home luckily, but someone broke into my house.
Starting point is 00:00:48 Wait, like what do they take? So still sort of sorting through, we just got back, but it appears the thief or thieves took some jewelry, some electronics. Oh my God. But weirdly, and this is sort of the tech angle here, they did not take the Apple Vision Pro.
Starting point is 00:01:06 Not even a robber wants one of those. It makes sense, because robbers typically only want to take what is valuable, Kevin. And it's not clear what they would actually do with a Vision Pro. Also keep in mind, if you're a robber, you're out there, you're moving through the world, you're breaking into homes,
Starting point is 00:01:24 you can't have that giant thing on your face, you know? You sort of need to maintain clear vision, so to speak. Let me ask you this, even though all your items were stolen, did you look at your family and your dogs and you think, you know what, at the end of the day, I got my family and that's all that really matters? I did, and I don't know why you're saying
Starting point is 00:01:40 it was such a... I was looking for a nice sentimental ending. Honestly, that was sort of the moral of this robbery was much the same as the moral of the Grinch who stole Christmas, which is that the real Christmas, the real household items are families. Exactly. And so, you know, if you get robbed again,
Starting point is 00:02:18 maybe don't worry about it. Was it you? I'm changing the subject. We're moving on. OK. Where were you on Christmas? Ah. I'm Kevin Roose, a tech columnist at the New York Times.
Starting point is 00:02:18 I'm Casey Noon from Platformer. And this is Hard Fork. This week, meta goes mega. We break down the company's surrender to the right on speech issues. Then, why 2025 is shaping up to be a huge year in AI. And finally, some Hatch GPT. Call that a Hatch GPT's. Well, Casey, I think we better talk about Meta.
Starting point is 00:02:44 We better do it, Kevin, because I never met a bigger story for this podcast. Yes. So the big news this week in the world of social media is that Meta is making a, I would say, pretty calculated and transparent... Craven is another word people have used. ...play to ingratiate itself with the incoming Trump administration by surrendering to the demands of right-wing speech critics and changing a bunch
Starting point is 00:03:12 of things about the way its platform works. I think this is a very big story, not just because of what it represents about Meta, but because it is the biggest and most prominent example of a Silicon Valley tech company sort of positioning itself for the second Trump term. And I think it's going to have very big implications for speech on the internet, for the rise of misinformation online, and potentially for the future of Meta itself.
Starting point is 00:03:40 Yeah, absolutely. I think that while we have talked about speech policies on Meta basically as long as we've been doing this podcast, this set of changes that the company announced this week is the most important series of policy changes that they have made in the past five years, easily. Yeah. So let's run down what's actually been happening over at Meta. Over the past week, there have been three main things that people are pointing to as being all part of this effort to kind of curry favor with the incoming Trump administration.
Starting point is 00:04:08 The first was that last week, Meta's global policy chief, Nick Clegg, a former British deputy prime minister who had served in that role for a number of years, stepped down and was replaced by Joel Kaplan. Joel Kaplan is a longtime Republican operative, going back to the George W. Bush administration, who's been working at Meta in their policy division for a while now and has sort of become
Starting point is 00:04:32 the unofficial liaison between Mark Zuckerberg and the Washington right. That's right. And then this week, on Monday, Meta announced that it was appointing three new board members, including Dana White, who is the CEO of UFC, the Ultimate Fighting Championship. Dana White, not known as a particular expert on social media governance, but definitely a close friend and ally of Donald Trump and someone who can presumably act as a liaison between Meta and the Trump administration. Yeah. So just sort of staffing that bench up with more Trump friends. And then the big one came on Tuesday, when Meta announced that it was ending its
Starting point is 00:05:11 fact-checking program and replacing it with an X-style community notes feature. The company also said it was redoing its rules to allow more speech and less censorship. It's going to dial up the amount of quote "civic content," which is sort of Meta's term for political content and current events content, in their feeds, and said that they were moving their content review operations from California to Texas to avoid the appearance of political bias. There were some other details in there that we can talk about, including some changes to the way that its automated content moderation services will work. But basically, this was
Starting point is 00:05:50 a laundry list of things that right-wing critics of social media platforms had been asking for for years. And Meta sort of stood up and said, we're going to do all of it. Yeah. Or another way of putting it, Kevin, is just that they accepted wholesale the Republican critique of Facebook's speech policies, right? And actually used the same words that Republicans would use. You know, in a previous time, we only used the word censorship to apply to state action to actually prohibit speech. Some people would say it doesn't actually apply to private companies just sort of policing online forums. But Mark Zuckerberg said, no, effectively, you're right, we do do a bunch of censorship,
Starting point is 00:06:29 we're doing too much censorship, and we're gonna stop doing censorship. Yeah. So the reasons that Mark Zuckerberg gave, and that Joel Kaplan gave when he went on Fox and Friends to announce these changes, which was a very deliberate decision and one that I probably don't have to explain the meaning of to our listeners. But the reason that Mark Zuckerberg and Joel Kaplan gave for these changes was that Meta had been doing some soul-searching and basically had discovered that its former policies created too much censorship, and that they were going to return to the company's roots as a platform for free expression.
Starting point is 00:07:01 I was really struck by just the way that they completely back down here. They accepted the critique and they seemingly are terrified of what the Trump administration could mean for them and for Mark Zuckerberg personally if they do not comply in advance with everything that Republicans have said about them for years. Keep in mind that none of these critiques are new. They were made throughout the first Trump administration and Facebook stood up against them. And they said, we're actually going to try to find a middle path here.
Starting point is 00:07:32 We are going to try to do what we can to preserve free expression while also trying to make this a really safe and inclusive space for as many people as we can. And in 2025, at the start of the year, Mark Zuckerberg came forward and he said, no, not anymore, we're done with that. Everything that the Republicans have been saying about us is true, and so we are going to lean into their version of what a social network should be. And so I'd like to play just some of what Zuckerberg said
Starting point is 00:07:58 in the reel he posted on Instagram announcing these changes. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there's also a lot of legitimately bad stuff out there. Drugs, terrorism, child exploitation, these are things that we take very seriously and I want to make sure that we handle responsibly. So we built a lot of complex systems to moderate content.
Starting point is 00:08:21 But the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that's millions of people. And we've reached a point where it's just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards once again prioritizing speech. So we're gonna get back to our roots
Starting point is 00:08:43 and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms. I was just struck by how craven and cynical it felt like Mark Zuckerberg in particular was being about this. I mean, he sounded like Elon Musk, to be totally honest. He used phrases like legacy media with this kind of like dripping disdain, which is a phrase that Elon Musk and his friends love to use in describing the mainstream media.
Starting point is 00:09:15 He also did use this word censorship that he has avoided studiously for years in describing the content moderation work that every social network, including all of Meta's social networks, do as a matter of business. So it just sounded like a total capitulation, a total giving in to the demands
Starting point is 00:09:34 of his most ardent right-wing critics. More than that, Kevin, he also threw his own contractors under the bus. And let's hear that clip. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns
Starting point is 00:09:53 without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US. He says that the fact-checkers had just been too politically biased, gives no evidence for that, no examples. He just sort of asserts that these fact-checkers, all of whom follow this very rigorous code for how they do their work, have been super biased. So who knows what that meant.
Starting point is 00:10:18 He also, as you pointed out, says that they're going to move their moderation teams to Texas to avoid bias. Well, first of all, I can tell you they have had moderators in Texas for many years, basically for as long as they've had moderators. They've also put moderators in red states for years. In 2019, I visited Facebook moderation sites in Arizona and Florida, right? So there's absolutely nothing new about this, but he is throwing his moderators under the bus.
Starting point is 00:10:42 And the worst part about it to me is that he is suggesting that the moderators were the ones making decisions about policy when in fact that person was Mark Zuckerberg. So if Mark Zuckerberg wants to talk about the perception of bias around Facebook policy, he should reckon with the fact that he is the policymaker in chief over there. Right, so what do you think the most impactful part
Starting point is 00:11:01 of these changes is? Because for all of the talk about the end of the fact-checking program over at Meta, my sense is that the fact-checking program, for all the good people who worked very hard on it, really only ever touched a very tiny fraction of the content shared on Meta's platforms. It was a pretty ragtag effort that never really had as much of an impact as I think the fact-checking community would have liked, in part because of the way that Meta restricted it. So I don't know that the average user of Facebook or Instagram is actually going to
Starting point is 00:11:33 notice the fact that their fact-checking has disappeared. But what do you think the biggest impact on users will be? Well, so let me speak to the fact-checking first, because in some ways I agree with you. I don't know about you, but I rarely encountered one of these fact checks on Facebook. On the other hand, I am someone who believes in harm reduction, and fact-checkers did look at millions of pieces of content that were getting presumably hundreds of millions or billions of views, and there were empirical studies done that showed that, overall, people came to have fewer false beliefs if they saw those fact checks.
Starting point is 00:12:06 So to the extent that people saw them, they were effective, and I think that there was a case to continue doing them. Particularly if you want to be a good steward of a network that you have built, that billions of people are using every day, and it's important to you that they have a good experience on that platform and don't come away from it stupider than when they started. But I don't actually think that that's the most important thing that they announced. I think it's something else. And I'm going to point to something that Mark Zuckerberg said in his reel. Let's hear that clip. We used to have filters that scanned for any policy violation. Now we're going to focus those filters on tackling illegal and high-severity violations. And for lower-severity violations, we're going to rely on someone reporting an issue
Starting point is 00:12:46 before we take action. So what does that mean? What it means is, whereas before, Meta used to rely on automated systems to catch all sorts of things, not just illegal things, but also just stuff that was annoying or hurtful, stuff that was a little bit bullying, harassment,
Starting point is 00:13:02 I called you a name, I called you a slur. Meta would catch that stuff in advance and maybe not show it to you, maybe take some sort of disciplinary action against the person who sent it. What Zuckerberg is saying here is, we're not the content moderators anymore. You are, Facebook user, Instagram user. We are now enlisting you in the fight and we're going to leave it to you. If you see a slur on our platform, you go ahead and report that, and then maybe we'll take a look.
Starting point is 00:13:27 And I think that this is a really big deal. So yesterday I wound up talking to a bunch of people who either work at Meta or used to work there. And I talked to one person who just said that they were extremely worried about what this meant, because they had seen in so many countries around the world, where Meta has traditionally done much worse moderation than it does in the United States,
Starting point is 00:13:48 where by not taking action against these lower severity violations, right? Stuff that was not obviously illegal, they had just seen violence fomented again and again. They had seen harassment against women. They had seen abuse against LGBTQ people. And Zuckerberg in his reel said, look, we're going to have more bad stuff on the platform. But he doesn't go the second step to
Starting point is 00:14:10 what does that actually mean? Well, what it actually means is people could get hurt, people could die. So I want to be very clear about that. This is not, you know, two like pointy headed intellectuals like, you know, sitting in their podcast studio saying, oh, no, you know, Facebook isn't a safe space anymore for the college students. What I'm saying is that violence has been fomented on Facebook before, and it will be fomented on Facebook again, and as a result of these changes,
Starting point is 00:14:32 more people are going to be hurt. So that to me is the biggest consequence of these actions. Yeah, I think this reporting thing that you bring up is so interesting because, you know, as we know, a lot of the worst stuff on Facebook happens in groups, happens in sort of semi-private spaces with hundreds or thousands of members. And so now I think Meta is essentially saying
Starting point is 00:14:52 that it will be up to the members of those groups to report any violative content that they want to be moderated rather than having these sort of proactive scanners going around. And you might say, what's the big deal about that? Well, if you're in a stop the steal group or a QAnon conspiracy group, or a group that's plotting an insurrection at the Capitol,
Starting point is 00:15:13 which members of that group are going to be reporting each other for violating Facebook's rules? I don't think that's a thing that's going to happen. And so I think what we're going to end up with is just a much more sort of unmoderated mess over at Facebook and Instagram and all the other Meta platforms. Yeah, when I was talking to employees this week, one of them pointed out to me what a sort of strange step
Starting point is 00:15:35 backwards this is in this respect. For so many years, Mark Zuckerberg bragged about how automation was the future of content moderation. And he boasted about the systems that they were building that were getting better every single quarter at detecting the hate speech, detecting the bullying, and making this a sort of better place for his community. And now, instead of saying,
Starting point is 00:15:56 we're gonna lean into this even more, we're gonna make these filters better, he said, we're going to stop using them, and we're gonna go back to human beings who don't even work for us or have any training or expertise, right? This is an abandonment of his technological project in favor of something that is obviously inferior.
Starting point is 00:16:15 So to me, that is one of the big twists here, is Mark Zuckerberg walking away from the very good technology that he built. Yeah, that's a really good point. So what else in these changes caught your eye? Yeah, so, you know, some of our listeners, Kevin, may use Facebook or Instagram and just wonder, you know, what's it gonna be like now,
Starting point is 00:16:33 now that these changes have been made? So I thought maybe it would be good to go through some of the offensive things that you can now say on Facebook and Instagram if you want, and not get in trouble. So for example, I'm gay. You can now tell me that I have a mental illness, Kevin.
Starting point is 00:16:47 You can go right onto Facebook and tell me that I'm mentally ill for being gay. You can say that I don't belong in the military. You can tell trans people. I mean, you don't belong in the military, but for other reasons. For other reasons. And that's important. Yes, nothing to do with your sexuality.
Starting point is 00:17:04 No, I have a terrible shot. Okay, go on. There's some other changes. Yes. You know, so look, if you want to say offensive things about trans people, like, you know, they can't use the bathroom of their choice, if you want to blame COVID-19 on Chinese people or some other ethnic group, you can just do that on Facebook and Instagram now. And Mark Zuckerberg says, well, that's sort of more in keeping
Starting point is 00:17:25 with the mainstream discourse. Those are the words he uses. That is in keeping with the mainstream discourse. And I look at that and I think, oh, like the standard on Facebook now is that it's just gonna feel like a middle school playground, right? All of this stuff is stuff that I used to hear
Starting point is 00:17:41 when I was 12 years old at Washington Middle School. Maybe not the trans bathroom stuff, that was sort of still yet to come. Everything else I heard in seventh grade, and that is the new standard that Mark Zuckerberg has set for his property. Yes, he's saying, I would like the discourse on my platforms to more closely resemble the dialogue
Starting point is 00:17:57 in a Borat movie. Yeah, yeah, which is satirical in the Borat case, but is, you know, very serious. Yes. And look, it's easy for me to joke about it. Look, if you want to tell me I'm mentally ill for being gay, like, I can handle that. But you know, if you're 14 years old and queer and it's people in your high school that are calling you that on Instagram, we've seen over and over again that these kids harm themselves.
Starting point is 00:18:20 And one of the things I find so crazy about this series of decisions, Kevin, is that right now 41 states and D.C. are suing Meta over the terrible child safety record it has on its platform. And my understanding is that these changes apply to younger users just as they apply to everyone else. And so these classifiers that once used to try to find bullying and abuse and harassment against young people, they're no longer going to be automatically enforced, and it is going to be up to, I guess, the other kids in school to say,
Starting point is 00:18:49 hey, looks like my friend is being bullied over here on Instagram. So that just seems like they're opening up a huge amount of liability for themselves. Right, and I think we should say, like, it is not just right-wing culture warriors who have complained about excessive moderation on Meta's platforms, right?
Starting point is 00:19:04 People on the left complain that their pro-Palestinian speech is being targeted for takedowns or that... And that's true, by the way. Those are not just phony complaints. It is absolutely true that Meta has over-enforced in some cases. Right. But what's so interesting, as I'm hearing you explain the details of some of these changes
Starting point is 00:19:22 and how they are revising their rules, is that they all seem to be pointed in one direction. It's like, let's let people on the right mock people on the left in more ways. Yeah, absolutely. And again, like if, you know, I sort of wrote in my newsletter that like a younger and more capable version of Mark Zuckerberg truly did handle this differently. And the way he handled it was like, oh, we're over enforcing in this way. Let's improve the classifier, right? Let's adopt a technological solution to this problem. But what they said this week is, we're done trying to fix any of it, right? We are just abandoning the project
Starting point is 00:19:52 altogether. Yeah. So that is a lot about the what of these changes. I want to talk now about the why of these changes. I think there is a an obvious explanation, the one that has been popular among the critics that I've been reading and talking to over the past couple of days, is the political opportunism angle, which is, you know, this is Mark Zuckerberg's attempt to kind of ingratiate himself with the Trump administration. It's all business. It's all strategy. It's all cynical and probably all temporary until the next administration comes in. What do you make of that explanation for why these changes were made now?
Starting point is 00:20:26 So I think that there is a lot of truth to it. I think another factor that is in there, and we've talked about this on the show a bit, is that trying to be a good Democrat just didn't really get Mark Zuckerberg anything. After the 2016 US presidential election and the huge backlash against Meta in particular that it created, Zuckerberg tried to say, whoa, whoa, whoa, okay, I hear that you're super mad, I'm going to try to fix this. And so they went out and they built all these fancy machine learning classifiers to try to improve the service.
Starting point is 00:21:00 And at the end of the day, I don't think Democrats really liked him 1% better than they did before he did any of that. So you have to remember that politics is transactional and people vote for people who they think they can get things out of. By the end of 2024, I think it was very clear to Mark Zuckerberg, he was truly not going to get one thing out of the Democrats. But then along comes Donald Trump. And Donald Trump has this really interesting relationship with Elon Musk where, you know,
Starting point is 00:21:24 Elon Musk used to be kind of a liberal guy too, had a bunch of sort of bog standard liberal positions, but you know, then he, you know, changed his views for whatever reason, gave a bunch of money to Trump. Trump said, hey, I like this guy. I'm going to give him every political advantage that he wants.
Starting point is 00:21:37 And Mark Zuckerberg is a pretty smart guy. And he thought, oh, well, you know what? Maybe I could do the same thing. Right. Right. I mean, I think the one thing that we know about the values of Mark Zuckerberg and Meta is that they are an extremely efficient organism at self-preservation, right?
Starting point is 00:21:52 They will do anything to stay relevant and stay ahead. They will copy features. They will change the name of the damn company. We know that Mark Zuckerberg's own views on speech are very flexible. They tend to sort of shift as the political winds shift. But I also think there's another potential why here, which is about Mark Zuckerberg personally and his own shifting political allegiances.
Starting point is 00:22:18 I've been talking recently with some folks who know Mark Zuckerberg or who have worked with him in the past. And what they have said to me is that this is a man who is following a very conventional sort of former Democrat turned Republican arc, right? He is a man, he's 40 years old, he's sort of approaching middle age. He's very into these kind of male coded hobbies like mixed martial arts.
Starting point is 00:22:42 He spends a lot of time talking with Joe, talking with Joe Rogan and, you know, hanging out with Dana White. And he's just sort of enmeshed in this kind of manosphere outside of work. And he's also been the target of a lot of criticism from especially the left. And one thing that we know about successful men who get targeted by left-wing opprobrium
Starting point is 00:23:04 is that they often respond to that by becoming sort of disaffected former liberals who embrace the right because there they feel like they're getting a more fair treatment. So I just want to put that out there. I can't prove this theory, but some people who know Mark Zuckerberg have floated it to me that he has actually become personally quite red-pilled or conservative over the last few years. Now obviously he's not Elon Musk, he's not broadcasting his political opinions on social media dozens of times a day. He's been more careful about sort of signaling which team
Starting point is 00:23:39 he's on. But I just offer this as a theory, because I think we're starting to see more evidence that his own views may have shifted quite a bit, independent of what's good for Meta. Yeah, I mean, I think that there was a version of all of this that was less extreme, and that if Zuckerberg himself were more truly liberal or progressive in his heart, we would not have seen these changes. So I do think that the changes that they announced this week offer some evidence for what you just said.
Starting point is 00:24:03 Also, my colleagues, Mike Isaac and Teddy Schleifer reported last year that Mark Zuckerberg has begun referring to himself as a classical liberal, which if you've ever watched a right-wing YouTube video is what every former liberal who has now become a Republican says. They call themselves classical liberals. So I'll just put that out there.
Starting point is 00:24:21 That is a code word. So, okay, last question about the implications of these changes. Do you think that we are going to see an exodus of liberal and progressive users from Meta's platforms the way that we did from X after Elon Musk took it over? Well, it depends on how all of these changes play out. And we're just not gonna know for a while. My assumption is that Meta will continue to do a significantly better job at
Starting point is 00:24:49 moderation than X does. It's a much bigger company, it has more infrastructure in place, and so I don't think you're gonna get the sort of overnight transformation you got with Elon Musk. Also, you know, Facebook and Instagram are just structured very differently than X is. Like, Zuckerberg, I don't think, can really take over those platforms, in terms of the actual posts that you're seeing in the feed, the same way that Elon does. So, you know, I would be somewhat surprised by that. On the other hand, if Facebook and Instagram do truly come to feel like seventh-grade playgrounds at recess, and the sort of discourse just gets much rougher and coarser,
Starting point is 00:25:31 I do think you're going to see people walking away from it because while we almost only ever discuss content moderation in terms of the politics of it, the truth is there's a huge commercial demand for it. People do not want to spend time on networks that are full of violence and harassment and abuse and gore and porn. And that is the main reason why all of these companies build systems to remove those things or suppress them. So the real question, I think, Kevin, is how far ultimately does Zuckerberg go in this
Starting point is 00:26:03 direction? Because whatever the politics might be, the vast majority of his users just want a safe and friendly place to hang out online. Yeah, okay, so that is where we are with Meta today and what some of the implications will be. Do you have any more predictions about where this will all head?
Starting point is 00:26:18 I have a really fun one for you, Kevin. Yes. So Meta has told its partners in this fact-checking partnership that it has been funding for the past several years that their contracts will end in March. So in March, the fact checks on Meta properties are going to end. The community notes product that Meta is planning to build, which is essentially a volunteer content moderation system, that's gonna take a little bit longer to build.
Starting point is 00:26:41 So that means Kevin that you and I can look forward to fact free spring on Facebook. Let's go. We can truly say the craziest things and not one person is gonna be able to stop us. And let me just say, I'm cooking up some whoppers. The things I'm about to say on Facebook and Instagram, let's just say you're gonna wanna follow me. Yeah, so follow Casey over at threads.
Starting point is 00:27:04 And let's just say, start follow Casey over at threads. Yeah. And let's just say, start piling up the drafts now. Yeah. Because the purge is coming and you're ready. I'm ready for the purge. When we come back, oh, say can 03 forge a new path forward for AGI. Okay, we'll go with that.
Starting point is 00:27:30 Well, Casey, we have more news from over the break about one of our favorite topics, AI. Boy do we, it was a huge couple of weeks for AI, Kevin, during a time of year when normally the news cycle gets pretty slow. Yeah, I was wondering about that because usually in December, people are sort of getting ready to go on holiday break.
Starting point is 00:27:47 The news kind of trails off, but not this year. The AI labs were sort of trampling all over each other to try to get their big news out before the end of the year. Yeah, and I think it was led by OpenAI, which of course announced their 12 days of ship miss, where they tried to announce something's big, something's small every day for 12 days. And, you know, they did wind up ending on something pretty important, I think.
Starting point is 00:28:08 Yes. So this is all moving very fast. There's a lot to catch up on today. And I want to take some time to really dig into what happened and what we can expect for the first few months of the new year. But before we get into all that, Casey, you have something to tell us. I do. So Kevin, of course, our listeners trust
Starting point is 00:28:26 is of paramount importance to us. And so I wanted to let folks know about something that happened in my life that I just think I wanna be upfront about, which is that at the end of 2023, I met a man who had many wonderful qualities. One of those qualities that I loved was that he worked for a company I'd never heard of, which meant, fine, I can keep doing my job as normal.
Starting point is 00:28:46 But as of this week, Kevin, my wonderful boyfriend, started a job at a company we talk about sometimes on the show. He is a software engineer at Anthropic. Is his name Claude? You know, many people have written to me asking me if I fell in love with Claude. And while I do find it to be very useful for some things,
Starting point is 00:29:02 no, this was a human man that I am currently in love with. I've met him, he's real, can confirm, he's wonderful. But yes, you are disclosing that you have this new, let's call it an entanglement because this is a company that you and I talk about that you also cover in platformer and so we just wanted our listeners to know that this is happening out in the world and in your life
Starting point is 00:29:26 and that, is there anything more you wanna say about this? Yeah, I mean, people have some questions about this, like, you know, I did not play any role in my boyfriend getting this job, Anthropic didn't know about our relationship before this happened. Of course, you know, we have since told them about this. I do plan to continue writing, reporting about Anthropic
Starting point is 00:29:44 because I think it's a really important company. But whenever I do that, I do plan to continue writing, reporting about Anthropic because I think it's a really important company. But whenever I do that, I'm going to remind you that this relationship exists. A couple other things that I would say, you know, my boyfriend and I do not have any financial entanglements. We do not currently live together. But you know, I'm also going to commit to updating folks as that changes. Basically, I'm also gonna commit to updating folks as that changes. Basically, I'm gonna try to do the same job that I always do, try to bring the same skeptical, critical eye that I do to everything.
Starting point is 00:30:12 But I'm also just going to remind you that I have this relationship. But, you know, if you have questions about that, email the show, hardforkatnytimes.com. I will try to answer any respectful questions that I can about this. Yeah. Now, Casey, I will just editorialize
Starting point is 00:30:26 and add a little bit here to your disclosure, which I think is laudable and I'm glad you're doing it. I'm glad you did it in your newsletter. I'm glad you're doing it on the podcast. I have known you for a long time. I have known how hard you have tried to avoid dating men who work in the technology industry. I truly have.
Starting point is 00:30:43 I mean, for more than 10 years, Kevin, I would be on apps like Tinder, and I would see that somebody cute worked at a Google, Meta, a Twitter, you name it, and I would just always swipe left because I thought I don't need that drama in my life. I don't need that complication. Which is tough in San Francisco
Starting point is 00:31:00 because everyone works in tech. It is a very small town and the number of sort of eligible bachelors out there who do not work at one of the companies you cover limits your dating pool considerably. It really did and it sort of explains why I was mostly single for the last 10 years. And I thought well I finally found something that sort of gets me out of it but you know sometimes life just has other plans for you, and you kind of have to roll with the punches. Yeah, so here I am
Starting point is 00:31:25 Well anyway, thank you Casey for that disclosure. I think transparency is very important We are obviously going to keep talking about developments in AI at anthropic and elsewhere But we will also put this disclosure in sort of the way we do when we talk about Open AI and the fact that the New York Times company is suing open AI and Microsoft alleging copyright violations. Yeah, and you know, when I disclosed this in my newsletter this week, Kevin, one reader actually replied that they thought it was cute
Starting point is 00:31:54 that I would now have a disclosure to go along with your disclosure that you do every week. So we're sort of now one for one. Well, let's proceed to the real meat of this segment, which is about AI news. Because so many things happened. Truly. So let's start by talking about OpenAI.
Starting point is 00:32:10 We've already made the disclosure. Don't have to do that one again. This was a big month for OpenAI. On December 20th, they announced a new model called O3. This was a successor to 01. Funnily, they skipped 02 in the naming process because of a lawsuit threat from 02, the telecom company. I'm not sure if it was a threat. They said they did it out of respect, but yes, presumably there would have been some sort of legal problem.
Starting point is 00:32:36 Yes. Yeah. So they skipped right over 02 to 03. This model is not yet available for users, but they did give a kind of preview of it to some researchers And they also talked about how it had performed on some benchmark evaluations Casey tell us about O3. What is O3? So O3 is a large language model come in like you would already find in chat GPT but it is built in a different way. And it's known as a reasoning model.
Starting point is 00:33:08 And the reasoning models are a little bit different. A main way that they are different is how they are trained. So they are trained to try to be better at handling logical operations and structured data. The second big way that they are different is that when you make a query, you type into the little box, whatever you want it to do,
Starting point is 00:33:32 the reasoning model takes longer to go over it. It uses more computing power, it will take multiple passes through the data, and it will really try to bring true reasoning to what it is looking at. And so the result of taking more time, doing more passes, being structured in a slightly different way, is that it can perform a lot better on very complicated tasks. And what OpenAI found with O3 is that they were actually able to get way further on some of the hardest benchmarks ever designed
Starting point is 00:34:07 for LLMs to pass than anything that has come before them. Yeah, so we talked a little bit about this idea of test time inference or test time compute back when we discussed O1, their previous reasoning model, but this is basically a different step than the classic pre-training step of building a large language model. This is something that happens when the user makes the query instead of just spitting out
Starting point is 00:34:32 an answer right away. It goes through the secondary test time step, and that is something that researchers were very excited about when O1 came out. They thought, okay, maybe if we are tapping out the limits of the pre-training step, maybe there is a kind of new scaling law developing around this test time or inference compute. And maybe if we pour more resources into that step,
Starting point is 00:34:58 the models will get better along a different axis. And so what people were very excited about when O3 came out was that it looks like that actually worked. Yes, and now this stuff is not yet in the hands of everyday users, but OpenAI did enter this O3 model in this really fascinating public competition known as the ARC Prize.
Starting point is 00:35:21 You know the ARC Prize, Kevin? Yes. So the basic idea with the ARC Prize is they try to come up with problems that would be insanely difficult for an LLM to solve. And one of the ways that they're difficult, by the way, is that they are original problems. So these problems are known to not be
Starting point is 00:35:37 in the training data of any of these models, because of course, one of the criticisms of the LLMs is essentially, oh, well, you already have all that data stored, right? You just essentially did a quick search. So this prize says, no, Oh, well, you already have all that data stored, right? You just essentially did a quick search. So this price is no, no, no, we're not going to let you search, you actually are going to have to show that you can reason your way through
Starting point is 00:35:52 something really difficult. So this arc AGI one public training set has been around since at least 2020. And at that time, Kevin GPT three previous open AM, got a 0%, okay? So just four or five years ago, we were at 0%. In 2024, last year, GPT-4.0 got to 5%, okay? With 0.3, it gets to 75.7% in one evaluation where the limit was you could only spend $10,000 on computing power. In a second test where they let OpenAI spend as much money as they wanted, which we actually think it was like more than a million dollars, 03 hit 87.5% on this model. So something that
Starting point is 00:36:40 was essentially impossible through all of 2024, almost instantly, we have now hit 87.5% of that benchmark. And that is essentially the only public data we have about how good this thing is, but man, did that get people's attention. Yeah, it got people's attention. I also saw a lot of people paying attention to O3's performance on something called Codeforces. This is a programming competition benchmark,
Starting point is 00:37:06 and this is sort of one way that these AI companies try to assess how good their models are at coding. OpenAI's 03 received a rating on Codeforces of 2727. That is roughly equivalent to about the 179th best human competitive coder on the planet. And just for context, Sam Altman, in presenting this result, mentioned that only one programmer at OpenAI has a rating higher than 3,000 on code forces. So why does this matter?
Starting point is 00:37:41 Well, you think about some of the discussion that was happening at the end of 2024, Kevin, and you started to hear people saying, we are hitting a scaling wall. This was the phrase, right? And the idea was the techniques that we used to build the previous LLMs were just sort of running out of the low hanging fruit,
Starting point is 00:37:56 and it's gonna require some sort of conceptual breakthrough in order for them to continue improving. And 03 comes along and effectively does just that. And what I think is so important about these benchmarks and why we want to take some time today going through them is there's a lot of questions and criticism right now that is justified around how much are these things being hyped up, right?
Starting point is 00:38:19 We know that the companies love to hype up their products and tell us how incredible they are, but the benchmarks are something objective that you can actually use hype up their products and tell us how incredible they are. But the benchmarks are something objective that you can actually use to measure their performance. And so when you have one of those benchmarks saying that there is now a model that is better than all but 179 people on earth, well, it seems like we might be getting pretty close
Starting point is 00:38:39 to super intelligence because what is super intelligence if not a system that is better than every human at something? Yeah, and I would just add to that a little bit of caveat which is that these so-called reasoning models they seem from what we know about them so far to be very good at the kinds of tasks that you can design what are called reward functions for which are things that have sort of a definite right answer, right? Coding, either the code runs or it doesn't. Math has a definite right and wrong answer. So in these domains where you can kind of give the reinforcement learning model a goal
Starting point is 00:39:14 and the indicator of whether it is right or wrong in pursuing that goal, it tends to do very well. I think- But if you asked it, what is the meaning of true love, it would never know. It wouldn't know the first thing about it. And I think that's beautiful. Right, so I think for the short term,
Starting point is 00:39:28 like the next year or two, we're going to have these early reasoning models that are very good and potentially even superhuman at some tasks, the kinds of tasks that have sort of definite right and wrong answers. But for other things like, you know, fiction writing or life coaching or sort of these, these vaguer tasks that don't necessarily have one right
Starting point is 00:39:49 and one wrong answer, they may not advance much beyond what we see today. Yeah. And you know, some people will use that as an excuse to say, well, then this doesn't matter that much. And I would just point out that, you know, at some point in your life, you're probably going to go see a surgeon and that surgeon might be not that great of a painter,
Starting point is 00:40:05 and it's not actually gonna change the fact that the surgery that you got was very valuable, right? So I think it's important to think more in terms of what these things are capable of in the moment than what they are not capable of. Yes, the other thing from OpenAI that we should talk about quickly is that Sam Altman wrote a new blog post
Starting point is 00:40:21 on January 5th called Reflections, basically talking about some of his thoughts about the two years since Chachi BT was released. The big headline from this blog post is that Tim Altman is claiming now that OpenAI knows how to build AGI, that the artificial general intelligence that people have been speculating about for years now,
Starting point is 00:40:44 that OpenAI has been sort of hinting at, that they are within sight of that goal and that he believes it could happen very quickly and that they are already starting to look past AGI to ASI, to artificial super intelligence. So Casey, what did you make of this blog post? Well, so I spent, you know, basically a day trying to figure out
Starting point is 00:41:03 what exactly does Sam mean when he says that they know how to build AGI. And another thing that happened this week, Kevin, is that Sam did an interview with Josh Terangle at Bloomberg. And one of the things that he tells Josh is, I'm going to quote, I don't have deep precise answers there yet, but if you can hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, Okay, that's a GI ish. My interpretation based on the
Starting point is 00:41:33 conversations that I had this week is this actually is the destination that everyone has in mind for 2025. This is where the race is going, you are going gonna see all the big AI labs race to try to release a virtual AI coworker. And if they can do that, and if the coworker is pretty good, then they're gonna say, this is actually what AGI is. Because at the moment, you can hire a sort of virtual entity
Starting point is 00:42:00 to do some tasks or series of tasks in your companies that you no longer need a person for, that is where this entire thing has been driving the whole time. Yeah, I agree. And I think that it is just, it is not necessarily something that we need to accept uncritically, right?
Starting point is 00:42:13 Sam Altman is a person with his own goals and motives and open AI's. And reward functions. And reward functions. And we should maybe apply some discount to what he says about his projections for AI because he does have a vested stake in the outcome.
Starting point is 00:42:30 But I think we should also just use this as sort of a, you know, sticking our finger in the wind of what conversations are happening in the AI scene in San Francisco. People here, I cannot emphasize this enough, are very sincere and very genuine about the fact that they believe that we are going to get AGI or something like it very, very soon, possibly this year. Yeah, and when you look at the improvement in these models that we saw in December alone,
Starting point is 00:42:55 I think you have to take them seriously. Yes. Okay, moving on from OpenAI. Another thing that happened in December is that Google released Gemini 2.0, the new version of its flagship AI model. And Casey, have you tried it yet? What do you make of it? You know, I have not tried it yet, Kevin, because it is not in the sort of consumer brand Gemini that I pay for, with the exception of they have this new feature called Deep
Starting point is 00:43:21 Research, where you can ask Gemini to sort of go and read the web and prepare a little report for you about something. I think I've only used it one time. It seemed like okay. To be candid with you, I have not followed the 2.0 stuff as closely because it just hasn't seemed as shocking or impressive as the OpenAI stuff. Have you? I played around a little bit with Gemini 2.0, mostly in a series of demos that I got at Google before it came out. Some of what has been in there is sort of catching up with other models. Google also released a Gemini 2.0 flash thinking mode, which was their first kind of attempt at an inference time compute reasoning model, similar to 01 and 03 from OpenAI. I have not played around with Gemini deep research mode yet,
Starting point is 00:44:10 but I've heard people talking about how cool it is. So I'm excited to try that out. But people I trust, whose judgment I trust about this stuff say that this is basically Google announcing that it is on the same trajectory as OpenAI and all the other companies that are its peers and rivals, that it is going to be scaling up very quickly in 2025, and that we should look forward to more there.
Starting point is 00:44:34 Yes, although there was a post on X that went viral this week where someone asked Google, does corn get digested? And all of the image results are of AI slop that are appear to be diagrams of corn and just whatsoever and it's extremely funny. So maybe it'll be patched by the time this comes out. But if not, just go ahead and do an image search for does corn get digested and you'll get a sense of where Google's AI search skills are at. Got it. So in conclusion, Google is cooking in the AI department. But not much of this has gotten out into consumers' hands yet. And so I think that will be the question for 2025, is this stuff actually as good as Google says it is?
Starting point is 00:45:13 Yeah. All right. The third and final story that we're going to catch up on today from over the break is something out of a Chinese company called DeepSeek. DeepSeek is a Chinese AI company. It's actually run by a Chinese hedge fund called High Flyer. And right around Christmas, as my house was getting robbed,
Starting point is 00:45:35 they released a new model called DeepSeek V3 that ranks up there with some of the world's leading chatbots and caught a lot of people's attention. Yeah, and look, I have not used this one yet, but there's a few things to know about this one. One is that it's really big. It has more than 680 billion parameters,
Starting point is 00:46:00 which makes it significantly bigger than the largest model in Meta's Llama series, which I would say up to this point has been sort of the gold standard for open models. That one has 405 billion parameters. But the really, really important thing about DeepSeek is that it apparently was trained at a cost of $5.5 million. And so what that means is you now have an LLM that is about as good as the state of the art that was trained for a tiny fraction of what something like llama or a GPT was trained for. I saw some speculation from this great blogger, Simon Willison, who said, it seems like the export controls that the US is placing on ships is actually inspiring
Starting point is 00:46:46 these Chinese developers to get much better at optimizing and indeed you now have this state of the art model for $5.5 million. So this is a huge step toward the proliferation of LLMs everywhere. Yeah, let me just back up and go a little more slowly through what you just described because I think it's really important. I was trying to go really slowly. I need a slower... I need the deep research mode here.
Starting point is 00:47:12 So one of the big questions over the past five or so years is about the Chinese AI industry and where they are relative to the leading frontier AI labs in the US. Whether we need to be doing more to slow them down, and if we even can slow them down, or if this stuff is just common knowledge that as soon as someone invents a new way of doing AI, it spreads throughout the world and there's not much you can do to stop it.
Starting point is 00:47:42 One of the things that we've done in the United States was to pass something called the Chips Act, along with a set of controls that basically limited which AI chips you could export to China. We put a lot of faith in the ability of these restrictions to effectively constrain the Chinese AI industry. If they couldn't get the latest chips out of Nvidia and other companies,
Starting point is 00:48:04 they wouldn't be able to build models that were competitive with the state of the art US models. And that was one way that we were gonna sort of try to keep our national advantage. What DeepSeek I think has showed, or at least what they have hinted at, is the possibility that China is actually not that far behind.
Starting point is 00:48:22 Because this model, whatever you think about it, I have not tried it myself. But according to its benchmarks, it is up there in many respects with the latest and greatest models from companies like OpenAI and Google and Anthropic. It is, according to some measures, the highest ranking open source or open weights model that we have.
Starting point is 00:48:44 And it does not appear to have needed the latest and greatest hardware to be trained on. According to the report that DeepSeek put out, they trained this new model V3 at an estimated cost of about 5.5 million dollars and they did it not on the leading-edge NVIDIA H100 or A100 chips that all the big AI labs use, but on a different version of NVIDIA chips known as the H800, which is basically just a less capable version of the state of the art chips
Starting point is 00:49:17 from NVIDIA. And so I think what this all boils down to is the conclusion that regulating AI by limiting access to hardware is just going to be much more complicated than we thought. One interpretation would be that you actually can't stop China from building state-of-the-art foundation models,
Starting point is 00:49:41 and that our regulatory regime just isn't going to cut it when it comes to keeping the US ahead of China What do you make of that? So I mean the first thing I would say is I do get a little bit nervous when people frame the debate this way because I think a lot of the people who try to frame the like AI story as a race between the United States and China are like sort of very between the United States and China are like sort of very hawkish and like leading us to a potential conflict that I would rather avoid. And it also presupposes that all of the American companies have to race as fast as they can and they have to build AGI as fast as they can, even if it means cutting corners on safety
Starting point is 00:50:19 because otherwise, you know, this looming specter of China and everything that could happen. So I just would sort of say, we don't necessarily have to do that. We can choose to still, you know, move somewhat deliberately and with caution here. But do I think that this shows that it is going to be harder to prevent China from developing extremely high-end models
Starting point is 00:50:37 and that regulations will be more complicated? Yes, absolutely. All right, Casey, that is a small fraction of what happened in AI while we were gone. But probably the most important things. I think we covered most of what really mattered. And if there's one thing that we can be sure of in 2025, it's that we are going to be very busy talking about more AI changes and progress.
Starting point is 00:50:58 You know, somebody was telling me that if like 2023 was a year that made everybody say, oh my gosh, AI is going so fast. And 2024 was a year that felt very business as usual. 2025 is a year where we could be going back to, oh my gosh, AI is going so fast. And then maybe it'll just feel like that all the time forever. Isn't that a pleasant thought?
Starting point is 00:51:19 Yeah, so anyway, happy new year. AI vertigo forever. Forever. When we come back, 2025's first game of Hatch GPT. Well, Kevin, from time to time, we like to check in on some of the wilder headlines from the world of tech in a segment we call Hatch GPT. Yes.
Starting point is 00:51:56 In Hatch GPT, of course, we take headlines, we put them into a hatch, we fish headlines out, discuss them for a bit. And when one or the other of us gets bored, we simply say, stop generating. We have not done a hat GBT in a while, and there's been so much that I'm excited to see what's in the hat. Me too. Well, let's, why don't you go ahead and get us started.
Starting point is 00:52:14 Okay. I'll pick first. Okay. All right, this one is called Meta Kills AI Gener like proud black queer mama. This is from Futurism. So this was sparked by a interview that was given by a Meta executive
Starting point is 00:52:35 in the Financial Times at the end of 2024, basically talking about their plans to let users create a bunch of AI profiles and sort and fake people, and get them to share generated content on Meta platforms. Then people began discovering the existence of these older AI-generated profiles that Meta had started up back in 2023.
Starting point is 00:52:58 Washington Post columnist Karen Attia posted on Blue Sky about one AI-generated profile in particular that was described as a proud black queer mama of two and truth teller named Liv. And Karen started chatting with this chat bot. She then posted her chat on Blue Sky and Meta summarily killed Liv and many of its other older AI persona You know this whole thing was so silly and I think there's been a lot of just backlash against Facebook for this one because this truly is a case where you wonder Why are they doing any of this?
Starting point is 00:53:36 Yes, you know and and I think the answer would probably be that they saw character AI have some success by letting people chat With all of these sort of different sorts of characters. But I think where character AI succeeded was they let you pretend like you were talking to Luke Skywalker or a Spider-Man or characters that were very personally meaningful to you. Meta just made up a bunch of essentially generic humans and said, go nuts, and had them say generic things. And it just felt incredibly creepy to people, I think. Yeah, I think this is a case of an idea that needs to be taken out back and dispensed with,
Starting point is 00:54:13 but Meta is not giving up on the idea of AI-generated personas. In fact, they have signaled that they intend to put more AI-generated personas inside all of their apps, and I'm just fascinated to see what fresh horrors emerge when that happens. Here's what I hope. I hope that at some point, meta will be able to detect
Starting point is 00:54:30 when you're harassing or abusing someone, which is of course now allowed under their new rules. And they just actually route you to an AI so that the AI can sort of absorb all of your prejudice and bigotry. Might be a nice solution. I like that, like an AI punching bag. Exactly.
Starting point is 00:54:44 Yeah, okay. Stop generating. All right. I feel like normally when it's my turn to pick I get to shake the hat but for some reason this week you've decided you want to shake the hat. Okay. I'm just gonna shake the hat as is my right. All right. Here's one. Apple agrees to pay a $95 million settlement in a Siri privacy lawsuit. Kevin, this is from Chris Velasco at the Washington Post. Apple has agreed to end a five-year legal battle over user privacy related to its virtual assistant Siri with a $95 million payout to affected customers according to a preliminary settlement. Apparently, Kevin,
Starting point is 00:55:25 Siri was a bit overzealous in listening for wake words like Siri. So when it thought it was being called into action, it would start recording audio that it wasn't supposed to. And a number of those clips somehow ended up in the hands of third party contractors. Back in 2019, the Guardian reported on Apple contractors regularly hearing confidential medical information, drug deals, and of course, recordings of couples having sex. So if a judge signs off on the settlement, anyone who qualifies can submit a claim for up to five Siri-enabled devices for a max payout of $20 per device. So I guess my question to you is, would you be willing to let Apple listen to you have
Starting point is 00:56:03 sex for $100? Because let me just say, I'd go for it. No, I don't think, my price is a little higher than that. But Casey, I saw this one making the rounds because people said, oh, finally they're admitting that they listen to you through the microphone in your iPhone, which has been of course a favorite conspiracy theory of people,
Starting point is 00:56:25 including critics of Metta, for years now. There's no proof that that is true. What this essentially seems to be saying is, it's not that this was sort of an omnipresent listening Siri that was listening when it shouldn't be. It's that, you know, obviously Siri needs to be listening sort of ambiently in order to tell when a user says, Hey, Siri.
Starting point is 00:56:48 That's right. And I'm sorry if we just woke up your, your Siri on your iPhone and you're no longer listening to this podcast because I just said that. But this is essentially saying it sounds like that it was a little miscalibrated to where it was listening more than it needed to be to sort of listen for that wake word or that it was recording more audio than it needed. Yeah, and I don't care about the actual incident, Kevin, and here's this reason.
Starting point is 00:57:12 In the 14 years that Siri has existed, I think it's correctly understood me about four times. This is not a technology that ever knows what I'm talking about for any reason. Siri could take an hour-long recording of me and have no idea what to do with it, so I don't care about that aspect. What I do talking about for any reason. Siri could take an hour long recording of me and have no idea what to do with it, so I don't care about that aspect. What I do care about is this is just going to fuel the most annoying conspiracy theory
Starting point is 00:57:32 in tech, which is that all the tech companies are secretly listening to you. So yeah, we're just going to see a lot more conspiracies around this and it is super unfortunate because again, this is only Siri we're talking about. It doesn't know anything. Yeah, it's not that serious. Stop generating. Okay. This one is from the athletic Netflix's WWE investment
Starting point is 00:57:55 and the future of live events on the platform. Quote, we're learning as we go. Starting January 6th, the story says the WWE's popular weekly wrestling show Raw will stream exclusively on Netflix in the United States. This is part of a decade-long agreement worth a reported $5 billion. And Casey, as Hart Forks resident WWE fan and expert, why don't you take this one on? Well, Kevin, I mean, did you watch? No, I did not. Well, you missed something huge, which is that Roman Reigns beat his cousin, Solo Sakoah,
Starting point is 00:58:28 in a tribal combat match, winning back the Ula Fala, and becoming the one tribal chief of the World Wrestling Entertainment. Is that true? That is all true. It was a great match. It was a really fun show. And I think it looked great. You know, WWE positioned this as a really huge thing for them.
Starting point is 00:58:44 And it's a, and it is. It's also huge for Netflix, you know, WWE positioned this as a really huge thing for them. And it's a it is. It's also huge for Netflix, you know, from WWE perspective, now they can be in something like 280 million homes around the globe for Netflix, they get to experiment with some of this live programming, which they've been dipping their toes into. Of course, there's a lot of speculation about whether they might soon go after more traditional sports. So maybe they want to get a big football deal, a big baseball deal. And so I'm very interested to see how these two things work together. And I'm very interested to see who Cody Rhodes will be fighting at WrestleMania this year.
Starting point is 00:59:13 So yeah. I did see the, I mean, obviously they did the big Jake Paul, Mike Tyson fight that was on Netflix. I also saw on Christmas day, they had some live football on Netflix. That's right. Do you think this is hastening the death of cable TV, or do you think it's just that was sort of already happening and this is just Netflix trying to pick up the pieces? I absolutely do.
Starting point is 00:59:39 I watch in addition to WWE, another wrestling promotion, AEW. And the reason that I had my YouTube TV account, which cost me something like $80 a month was so that I could watch AEW programming because that is only available on cable. Well, guess what, Kevin, AEW started streaming on Macs. And so I was able to cut the cord once again. And now I am fully streaming again. So yes, as these sort of live events that have these, you know, intense weird fandoms move from traditional cable to streaming, it absolutely becomes a moment where more people cut the cord.
Starting point is 01:00:13 Now this is a little bit of a tangent, but I did have an interesting moment over the break where we were stuck in a motel in Lake Tahoe, and our iPad that we used to sometimes entertain our child had run out of battery. And so I was forced to turn on the hotel TV and try to explain to my two-year-old son, the concept of linear TV. And Casey, it blew his fricking mind.
Starting point is 01:00:38 I was like, so on this screen, you can watch Bluey sometimes, but not all the time, and you can't pick a specific episode. And then about twice an episode, they're gonna interrupt the episode to try to sell you toys. And he was just so confused by the concept of linear TV that I thought, you know,
Starting point is 01:01:00 this industry probably does not have a long time left. No, it doesn't, your child knows. Yeah. Yeah. Yeah. All right, we'll stop generating. Okay. Now, oh, this was a fun one. So the YouTuber Mega Lag posted a video on December 21st
Starting point is 01:01:14 titled, Exposing the Honey Influencer Scam. And ever since, Kevin, YouTube has been overtaken by discussion of what Honey did. Yeah, this in the world of YouTube creators was probably the biggest news story of the year. Yeah. And I don't think I've heard much about it outside of YouTube because of the sort of way
Starting point is 01:01:33 that Insular Platform works, but essentially this was a massive scandal among major YouTubers over the holidays. Maybe we should just sort of explain what happened for people who are not glued to YouTube 24-7. I think we should. So Honey is a company that was acquired by PayPal a while back, and they are a browser extension.
Starting point is 01:01:56 And the idea is, before you go to check out online, before you make an online purchase, you click the Honey button, and Honey will scan the landscape for the best coupon. Because often, if you have a coupon code, you can get a little discount. And so Honey went out to a bunch of YouTubers and signed these deals. And they said, hey, please go ahead and promote Honey. And the reason that this is important is that these sort of coupon codes are a big part of the creator economy.
Starting point is 01:02:23 We've talked on this show in the past about affiliate links. A lot of the internet is built on companies that sell things, giving a little kickback to people who talk about their things. Right, and I think before we say what the allegations against Honey are, we should just like set the scene for people who are not YouTube heads. The relationship between, like,
Starting point is 01:02:43 Honey was maybe the most prominent advertiser on major mainstream YouTube channels. I would say that Honey sponsorships propped up YouTubers and YouTube content creation in a similar way that online mattresses propped up the podcast industry for a couple years, major, major YouTube influencers, David Dobrik, Emma Chamberlain, the Paul Brothers, Marques Brownlee, these people, many of them
Starting point is 01:03:14 had major deals with Honey to sort of underwrite their channels. So they were basically ubiquitous. It was hard to watch a lot of YouTube a couple years ago without running into Honey ad after Honey ad. Right. So what are the allegations that that Mega Lag publishes? Well, it's two things. One is that, and this is just sort of hiding in plain sight on Honey's website, Honey will actually go to online retailers and charge those retailers money to keep their best codes out of the Honey database.
Starting point is 01:03:43 So let's say you have your online store and you have like a crazy 80% coupon that you gave out. Honey will say, Oh, we'll make sure that no Honey user actually ever sees that coupon code. So Honey is straightforward about that, but it's obviously a terrible user experience, right? Right. Because the way Honey works, like in a nutshell, is there are these coupon codes, people, you know, had their sites where you can go look up coupon codes before you buy
Starting point is 01:04:06 something, try to find, you know, a 10% or 20% off coupon. Honey will basically go out and scour the internet for these codes for you and then automatically apply them to your purchase in your browser for basically any e-commerce website that has these codes. That's right. So save you a little money while you're out shopping. That's right. And if that had been all that Honey was doing, this wouldn't have been a scandal.
Starting point is 01:04:25 But then there was the second allegation from Mega Lag Kevin. And that was that when people would see products in these influencer videos and they would go to buy them, those shopping carts would often get the creator's affiliate link inserted. So the creator would then get a kickback, which is of course the whole point
Starting point is 01:04:44 that creators like to work with these companies that share affiliate links is so they can get a little bit of money and the Allegation is that honey was going in at the end of this process and replacing the creators affiliate link with honey's affiliate link so honey got to keep all of the affiliate revenue and Cut the creators out of the process. Let's's just walk through this step by step, okay? So I am watching a major YouTuber's video. You're watching the Hardfork channel. I'm watching the Hardfork channel.
Starting point is 01:05:14 We don't actually have affiliate links in our videos, but say we did, say we're out there, you know, we've got, you know, an online mattress company that we have a promo deal with. And every time you go and buy a mattress and enter the code hardfork at checkout, you get 10% off. The allegation was that Honey, in the instances where a user went to go buy a mattress
Starting point is 01:05:36 from our affiliate link, if they used Honey in their browser, Honey would find that affiliate link and replace it with the Honey affiliate link. And so instead of getting a kickback on that sale ourselves, that money would instead go to Honey. That is exactly right. And so people are quite mad about this.
Starting point is 01:05:54 There's a channel called Legal Eagle that is suing them, which I know nothing about Legal Eagle, but I have to say that sounds exactly what a YouTube channel named Legal Eagle would do, would just be to sue one of their advertisers. When The Verge asked PayPal, by the way, about all of this, PayPal said, quote, Honey follows industry rules and practices,
Starting point is 01:06:12 including last click attribution. And what I take that to mean is that the industry rules and practices is horrible, and Honey is not doing one thing to try to improve on them in any way. So, you know, this was really a case where creators took a look at the situation and they said, I don't think so, honey. And that's a lost culture racist reference.
Starting point is 01:06:31 And I would just say that I think this is a case of like, people just really being naive about how the internet works. You know, honey was a very popular, very so profitable and popular that PayPal acquired it. And people just really, YouTubers just thought they were out there providing these coupon codes to people out of the goodness of their hearts. And I just wanna say bless your heart
Starting point is 01:06:56 if you thought that's what Honey was about. YouTubers are telling Honey to mine their own beeswax. Yes. And with that, I'll stop generating. Okay, last one. LA tech entrepreneur nearly misses flight after getting trapped in RoboTaxi. Passenger Mike Johns was reportedly riding
Starting point is 01:07:18 in an autonomous Waymo car on the way to the Phoenix Airport when the vehicle began driving around a parking lot repeatedly, circling eight times as he was on the phone seeking help from the company. Did you see this video? I did see this. This was so wild. So he initially believed it was a prank, he told the guardian
Starting point is 01:07:36 and then he sort of gets on the phone with the support person at Waymo as he's inside this car that is just circling the parking lot and it won't let him out. And as a result, he almost missed his flight. You know, I think this is every Waymo support person's fantasy is that one day you just pick a random Waymo and you just start driving it around in circles in the parking lot with no explanation. Maybe you're like teaching your kid how to drive or something like that. No, this would obviously be somewhat disconcerting, but it is also hilarious.
Starting point is 01:08:06 And I have to say, if I made a list of like the 10 worst things that ever happened to me in an Uber, for example, driving around in a circle eight times would not make the top 10. Yeah, I've almost missed my flight several times because of Uber drivers just thinking they know a better way to the airport. So yes, I would say we shouldn't make light of this.
Starting point is 01:08:23 People are placing their life in Waymo's hands when they get into one of these autonomous cars. And I did see some people saying, see, this is why I would never trust a self-driving taxi. And I do think it's worth taking these incidents seriously. At the same time, no one was hurt. This was a case of clearly some like little software glitch or something, or some issue with them.
Starting point is 01:08:42 I don't think they ever got to the bottom of what happened here. Look, here's another way of thinking about it. Maybe this is a final destination situation where if the Waymo had gotten immediately on the freeway, maybe there would have been a terrible accident. But something in the training said, no, we need to stay in this parking lot. We're gonna drive around in eight circles
Starting point is 01:08:58 and that will sort of reset the timeline and ensure that Mike makes it safely to the airport. Something that they could think about. Do you know how airport Wi-Fi sometimes makes you think about. Do you know how like airport Wi-Fi sometimes makes you watch an ad before you can get the free Wi-Fi? This is giving me like an evil business idea, which is like, oh, you wanna get out of your Waymo
Starting point is 01:09:13 and make your flight? Time to click over to Honey. Complete your purchase with Honey if you want us to stop circling this parking lot. God, someone out there is taking notes. I'm so sorry. All right. Stop generating. That is Hat GPT. Casey, it is so good to be back with you in the studio
Starting point is 01:09:33 doing one of our favorite games. Hats off to you, Kevin. And hats off to all of our listeners. Art Fork is produced by Whitney Jones and Rachel Cote. We're edited this week by Rachel Dry. We're fact checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Alicia Bietube, Rowan Nemesto, and Dan Powell.
Starting point is 01:10:13 Our executive producer is Jen Poyant. Our audience editor is Nel Gullogeley. Video production by Ryan Manning and Chris Schott. You can watch this whole episode on YouTube at youtube.com slash Hartford. Special thanks to Paula Schumann, Pui Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at Hartford at NY times.com with something really mean that you can say on Facebook now. Thanks for watching!