Hard Fork - The Future of Addictive Design + Going Deep at DeepMind + HatGPT
Episode Date: April 3, 2026

Last week, two separate juries held social media companies liable for harming young users. We unpack what these landmark decisions mean — not only for the future of social platforms like Meta and YouTube, but also for A.I. chatbots. Then, Sebastian Mallaby, the author of “The Infinity Machine,” joins us to talk about the three years he spent with Demis Hassabis and those closest to Google DeepMind. And finally, we catch up on some of our favorite tech headlines from the week with a round of HatGPT.

Guest: Sebastian Mallaby, author of “The Infinity Machine: Demis Hassabis, DeepMind and the Quest for Superintelligence.”

Additional Reading:
- Juries Take the Lead in the Push for Child Online Safety
- An A.I. Agent Was Banned From Creating Wikipedia Articles, Then Wrote Angry Blogs About Being Banned
- I Met Olaf — the Frozen Robot Who Might Be the Future of Disney Parks
- Claude’s Code: Anthropic Leaks Source Code for A.I. Software Engineering Tool
- What’s With All the A.I. Videos of Cheating Fruit?
- This Company Is Secretly Turning Your Zoom Meetings into A.I. Podcasts
- North Korean Hackers Suspected in Axios Software Tool Breach

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Now, here was a really interesting situation, Kevin.
Did you see this robotaxi outage that left passengers stranded on highways in China?
No.
So this happened in Wuhan recently.
I've heard of that place before.
Did they do anything else?
It's not clear to me.
I'm not really familiar with their game.
But apparently there was some sort of technical glitch that caused a number of robotaxis
owned by the Chinese tech giant Baidu to freeze,
trapping some passengers in their vehicles for more than an hour.
And I just thought, my gosh, what a nightmare.
Just imagine you're in your robotaxi on the way to a wet market in Wuhan.
You have an appointment with a pangolin who's going to cough on you to see if they can transmit anything to you.
And then your robotaxi gets stuck.
It's a nightmare.
It's an absolute nightmare.
Well, I think that robotaxi outage is definitely the worst thing that's ever come out of Wuhan.
Yeah.
When it comes to these Baidu robotaxis, my advice: buy don't.
Oh, boy.
No, that was the worst thing to come out of.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork. This week:
Social media companies keep losing in court.
How will that reshape the Internet?
Then, The Infinity Machine author Sebastian Mallaby
joins us to discuss his new book on Google DeepMind
and Demis Hassabis's quest to build superintelligence.
Finally, it's been a while.
Let's catch up with some HatGPT.
I missed you.
Me too.
Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in Los Angeles and New Mexico related to social media.
Yeah, it has been a big week for these social media product liability trials that have been going on now for some months.
And we actually got some verdicts.
We did.
And in both cases, social media lost. In L.A., a jury found that Meta and YouTube had been negligent in the way they
designed features that they said were harmful to this plaintiff. They have to pay $6 million
combined to this plaintiff. And then in New Mexico, the jury said, we believe that Meta has
violated the state's Unfair Practices Act and has misled consumers about the safety of its
products and has endangered children. In that case, they are ordering Meta to pay $375 million.
Yeah, so we've talked a little bit about this series of cases against the social media companies.
You know, social media companies, they get sued all the time for all manner of different things.
I think what caught our eye and specifically your eye was the sort of legal theory underlying these cases.
So talk a little bit about that and what makes this case different from other cases that have been brought against the social media companies.
Yeah, so I would say there are kind of two big reasons why these cases are super important.
One is that these are what are called bellwether cases. Kevin, have you ever heard of a bellwether case?
These are like cases that set precedent for other cases.
Yeah. Exactly. These are the cases that if successful are going to open the floodgates for lots of other people to sue under the same theory.
The second big reason that these cases are really important is that they appear to have opened up a crack in Section 230 of our Communications Decency Act here, which for 30 years has been essentially the foundation that the entire Internet rests on.
It's also a dentist's favorite statute.
Yes, that's Section 230 if the joke wasn't landing for you.
So, yes, this is a super important, super important law.
I'm glad you got that.
No, the really sad part was I was planning my own section two-thirty joke.
Oh, wow.
Because I just went to the dentist yesterday.
And now I didn't have any cavities.
So tooth not hurty.
Moving on.
Section 230, Kevin, you may remember, is the law that says that in most cases,
these platforms cannot be held liable for what their users post.
Yes.
So if I went on Facebook and I defamed you, which is something I think about doing every day,
you could sue me, but you couldn't sue Facebook.
This is what's been blocking my lawsuits against Facebook over your posts for years.
That's right.
And back in the day, like 30 years ago, this was actually really important because there were
these small internet forums that were starting up.
Some of them got to be a bigger size, you know, CompuServe, AOL.
And inevitably, somebody would be mean to another user and they would say,
I'm not just suing you.
I'm suing CompuServe.
I'm suing AOL.
I'm putting the whole system on trial.
And a couple of lawmakers got together and they said, this is going to destroy the
entire internet. Like, we need for there to be forums and not have these platforms being held liable
for all these things. But fast forward to today, and Kevin, would you agree that maybe there are
some harms that are taking place on the internet that do not consist entirely of people defaming
one another on CompuServe? Yes. Yeah. And so this is essentially the question that gets asked in this
case, right? People say, hey, it seems like we're a pretty long way away from 1996. I'm opening up
TikTok. I'm opening up Snapchat. And I'm seeing infinite scrolling
feeds. I'm seeing auto playing videos. I'm a teenager, but I'm getting barraged by push notifications
in the middle of the night. And that's to say nothing of the recommendation algorithms that might be
driving me toward content related to eating disorders or other things that are going to make me
sad and upset. And so some of these people get together with their attorneys and they say,
this actually feels different from the thing that Section 230 was designed to protect, right?
This is not about, oh, I got harmed by this particular piece of content. This is about the design of
the whole platform. The design feels defective. And the really crazy thing about these cases, Kevin,
is that juries agreed with these plaintiffs for the first time. And they said, we like this theory,
we think these products are defective. Right. So this is kind of a side door that these lawyers have
found around litigating on Section 230, which they have successfully now shown that at least in
these cases can convince a jury that it is not about what's on the social network content-wise. It's
about the actual mechanics and plumbing of the social network that are harmful to people.
That's right. And we should say that we do expect some appeals here. And until those are,
you know, sort of fully exhausted, I can't tell you for certain this is the moment that the internet
changed forever. But there's been a lot of commentary over the last week about what it would
mean if these cases were upheld because it seems like juries are just going to be really,
really sympathetic to these claims. So before we get into the implications, like, can I just ask a
couple more questions about these actual specific cases. Please. So what are the actual platform
mechanics that are being litigated over here? Yes. So in the LA case, among the design features
that were at issue were the so-called beauty filters that can make you, you know, look quote-unquote
more beautiful if you use them, infinite scroll, autoplay video, these barrages of push notifications
that platforms send, and also, I would argue more problematically, the recommendation algorithms
that power the platform. And then in the New Mexico case, that was much more about kind of child safety.
So they were arguing that Instagram in particular had become this playground for predators.
It was very critical of the fact that Meta offers end-to-end encrypted messaging.
And the basic idea was Meta falsely advertised that these platforms were safe when in reality
children are being harmed there all the time. So from what I understand, it was like the case was
basically taken out of the playbook for going against big tobacco or another sort of industry that
makes harmful products. You say, this is harmful, and not only is it harmful, but the company
that was making it knew that it was harmful and either made it more harmful or just released it as
planned anyway. I did see some sort of exhibits that had been shown at the L.A. trial, I
believe, where some employees at Meta were sort of talking on their internal forums about how
this stuff is so addictive for kids. That seems bad, and I imagine that was persuasive with the jury.
But are there other instances where the platforms are being sort of taken to court over things
that they sort of knew were harming people, and that they either dialed up the harm in an attempt
to spike engagement or sort of knowingly released these things to the public?
Yeah. So, I mean, some of this research
has come up in other litigation over the years, but I think this has been probably the most damaging
case that we have seen. You know, the first time I remember reading a lot of these internal studies
was in the wake of the Frances Haugen revelations a few years back, right? Like Frances Haugen
walks out the door of Meta and takes a bunch of this internal research with her, winds up sharing
it with the Wall Street Journal and then eventually a bunch of other reporters, including me.
The reason that the research mattered a lot here, though, Kevin, was again, the plaintiffs
are now building this very specific case, which is you're building a defective
product, right? Before the past couple of years, we weren't really using this language. We weren't
really adopting this sort of public health framing of a way to discuss the harms of social media.
Before then, it was just kind of this more nebulous, like, hmm, like they're studying the
effect of Instagram on teen girls, and it seems like some of these girls are having really
bad outcomes, but we didn't really have the framing. Well, now we have the framing, and we're just saying,
like, hey, you looked into it, you found that some subset of your users are having really bad
experiences, and you did not change the features, and so that mattered. Well, let's talk about the
changes. So what would you expect a platform like Instagram or Facebook or YouTube to change in the
wake of these jury verdicts? Or are they just going to wait till it all shakes out on appeal?
I honestly don't know the answer to that question. I think it's a really interesting thing to watch.
The question that you just asked is really, really controversial, actually, because much of what
these platforms do is just protected under the First Amendment. And then Section 230 also protects a lot of
speech, right? And the big debate that's like raging in the internet policy community right now
is can you separate design from content? I want to get your thoughts about this. Right. Is it like
the container or is it the stuff in the container that is dangerous? Yeah. And there are some people
who are saying that no, you cannot make that distinction and that effectively all design is content,
right? Like if I want to send you a push notification, that is my right under the first amendment. And you
cannot tell me that I cannot do that. You cannot tell me that there is a certain limit that I have
to place on the depth that you can scroll in Instagram, like that is protected. But for what it's worth,
juries are taking the opposite view. They're saying that there are at least some things which seem
like are just clear mechanical design features, and I happen to agree with them. So let's talk about
this because I think this is maybe a place where you and I disagree, or at least where I have some
misgivings about this theory. So in the case of something like cigarettes, which is a very heavily
litigated field that I think a lot of this social media litigation has been modeled after,
there's like an addictive ingredient, right? Nicotine. Everything that you put nicotine in becomes more
addictive as a result of having nicotine in it. You know, this happens with cigarettes. It happens with
vapes. It happens with, you know, nicotine pouches. If you started putting nicotine in ice cream,
ice cream sales would go up because nicotine is very addictive.
I think the question I have about the mechanical
addictiveness of these sort of features like Infinite Scroll,
like autoplay recommendations,
is that if it followed the same principle as nicotine,
then every product that has those would become way more popular.
And one example I've been thinking about on this is SORA.
They sort of took the playbook that was working for TikTok and Instagram,
and they put it onto a new app, and the app did not succeed.
Right?
There are other apps that have tried to mimic things like the news feed,
that have tried to mimic things like autoplay video
or recommendation algorithms that have not taken off.
And so I guess the question in my mind is, like,
if the litigation over social media is modeled after the litigation over big tobacco,
shouldn't there be like some industry-wide lift as a result of every platform,
trying to borrow the most addictive features of Facebook and Instagram and YouTube?
I mean, I hear what you're saying, and I think it's an interesting point,
but I think that internet platforms just work differently than cigarettes, right?
Because you're right, like with nicotine, like nicotine is just addictive.
Now, there are people that smoke cigarettes without getting addicted to them, right?
But probably the majority of people do.
Social media platforms are an imperfect analog to those cigarettes.
I believe that platforms need to be of a certain scale in order for them to be
truly addictive in the way that these plaintiffs are now suing about, right?
There's something about the fact that there's hundreds of millions of people on Instagram
and on TikTok creating content that creates that kind of infinite supply of things that you might
potentially want to watch that is actually able to...
But now you're talking about the stuff in the container, right?
Well, I think that there are many ingredients that all work together, right?
But you're raising a criticism that people are making of this lawsuit.
Effectively, what I hear you saying is you cannot distinguish between the content and the...
Well, I'm not sure.
I mean, I think I'm open to being persuaded that you can, but to my mind, it's like one lesson that you could take from this is that it is very bad to be a popular platform that engages these mechanics to keep users coming back.
But it's okay to be an obscure platform that does it because that's not going to have as much harm.
So what's really sort of at issue here is the fact that these platforms are very, very good and very, very popular at doing the thing that everyone else is trying to copy.
Yes, and this is the approach that Europe has taken to regulating these platforms, right?
They have certain, like, categories, and if you are a very large online platform, then you just have more responsibility.
That makes intuitive sense to me. I think that the bigger and richer and more powerful you are,
the more responsibility that you have to society, right? And so in this particular case,
you have companies like Meta, which we know are hiring cognitive scientists who are working very hard
to figure out all the different ways that they can hack your brain to get you to look at Instagram
for as long as they possibly can. It is in their interest
to get you to look at Instagram as long as they possibly can.
And right now, there's just no brake on that at all in our society, except for this litigation.
So I'm so sympathetic to these juries that are looking around, they're seeing this almost
completely unregulated platform, and they're saying something's got to be done.
Yeah. So regardless of sort of what our thoughts on the overall sort of legal theory here are,
like, what do you think the effects are on the platforms? If this does get held up on appeal,
if these platforms are found liable for millions or potentially billions of dollars in damages
against all of these people who claim that they were harmed by social media, does that mean that
they have to, I don't know, go back to like the reverse chronological feed of 2008? Does that
mean they have to shut off, you know, infinite scroll and autoplay and recommendations and all these
other things? This is where it gets really tricky. And this is like maybe the one narrow way in
which I'm sympathetic to the platforms, which is, okay, the juries have said your product
is defective. What juries have not said is, here's what an okay product looks like, right?
They're saying, we don't like this sort of set of features, but they're not saying with any
specificity, like, well, how do we think that these features are interacting, right? Like, what is your
actual model of the harm here? And so there is a world where the platforms feel like they have to
comply, and they maybe start picking off some of these features one by one. Like, okay, if you're
under 16, we'll disable infinite scroll, for example. How much benefit does that really have for the
individual teenager who may be struggling? I don't
know. This, of course, is why it would be great if Congress could pass some sort of law regulating
this, but, you know, we're now like, I don't know, a decade into that project and still not
getting very far. Yeah, I mean, I think one prediction about how this will change platforms and their
behavior is that if you start talking about gambling or addictiveness in an internal Meta chat room,
you just immediately get fired. There's just like a little button on your seat that just
presses and you get ejected out of the building.
It's like, because so much of the incriminating evidence here just comes from people like spouting
off in work chat rooms about like, oh, it really seems like this thing we're doing is dangerous.
And like, I have to imagine that if it hasn't happened already, they're just going to
absolutely crack down on that kind of internal discussion.
Absolutely.
Well, so I want to hear a little bit more about how you think about this because you have talked
on this show many times about your own struggles to look at your phone less.
This is an issue that, you know, at various times you feel like has
plagued you. So how are you feeling about the addictiveness of these platforms? Like, do you buy the
sort of public health framing for the way that people are talking about them these days? Or do you
think that this is overreach? So I need to do some more thinking about the product harm arguments
here and whether it makes sense to me. I am basically on board with the idea that there should be
age-gating for social media. I am sold on the premise that there is a certain age,
whether it's 16 or 18 or 14, where sort of the most harmful effects taper off.
And I think before that age, it makes total sense to age gate or at least give parents a lot more
control over what their kids are able to do and not on these platforms.
I think the addictiveness question is just hard for me because I feel like my sort of macro theory
on all this stuff is that what is happening to social media over time
is that the social part is fading away and the media part is rising in the mix.
And so I think that if you start treating the design and mechanical decisions of these media
platforms as harmful under the law, it just sort of leads me into a place where I become much
less certain.
Like before any of this existed, there were cliffhangers on TV shows that were designed to
keep you coming back after the commercial break or to the next week's episode or whatever.
Those were arguably addictive features. They would keep people coming back. Is that illegal?
I would say probably it shouldn't be. And it's not. So I think there is a certain sense in which
the closer that social media moves to something like TV or streaming video, the blurrier the
lines in my mind get between the content and the mechanics. What are your thoughts on that? Well, I have to
disagree. I do think cliffhangers should be illegal because I want to know what happened. I shouldn't have to wait
till the fall to find out, you know, if that person is still alive. But also, I do think that there are
some really important differences between like, let's say, YouTube and HBO Max, right? Like,
HBO Max is not, like, going to modify the content of HBO to your individual preferences, right?
Like, they're going to go pay some money for a bunch of shows and they're going to hope a bunch of people
watch them. The platforms that we're talking about are doing something very different, right? They're looking
across the entire corpus of like every video that's ever been uploaded to their platform,
and they're trying to figure out what will keep you personally here the longest, and we're going
to show you that as much as we can. So I just do think that there's a kind of categorical
difference here. And while I do think people should have broad freedom to, you know, look at whatever
they want, I do think that at a minimum, we should probably place an age gate on it for the same
reason that we don't let 14-year-olds walk into bars, unless they're really cool and have a fake ID.
So talk about the encryption piece, because you had a line about this in your newsletter that I didn't quite understand, but what is the encryption debate that's part of these lawsuits?
Yeah. So, you know, here I understand that I'm coming across as being broadly supportive of these jury verdicts, which I am, but I do want to acknowledge, like, this could lead to some really bad places.
And this is why we need to handle Section 230 with care. In the New Mexico case, the attorney general argues that a reason that Meta should be considered liable in advertising
their platform as being safe for children is that it includes encrypted messaging, right?
In fact, Meta in March announced that they would discontinue encrypted messaging on
Instagram in what I believe was an effort to sort of get ahead of this.
What they said was, look, if you want to use encrypted messaging, you can use WhatsApp instead.
But to me, this would be like just a legitimately horrible outcome of all of this,
is if like every company that now offers encrypted messaging either voluntarily decided
to stop offering it or was pressured by the government to stop offering it because, in my view,
encryption is a necessary part of privacy in a world where people are mostly communicating online.
Right.
Are you comfortable with all of this happening in the courts through jury verdicts?
This is not my preferred way of addressing this, but I think it was inevitable, in part because
the tech companies have been so obstinate about making meaningful changes to their platforms,
right? Like, societies across the world have been begging these companies for a decade,
please do something to make these platforms safer and to make them less addictive and to reduce
some of the harms. And instead, what we've mostly seen is a series of engagement hacks
designed to get people to look at them longer, right? And in the United States, where you cannot
regulate the content of any of these apps, for the most part, you're really only left with the
design, right? You're really only left with just the raw mechanics of the app. So if the social
media platforms are upset about the verdict here, I truly believe they brought this on themselves.
I mean, you asked me about my own experience of screen addiction, and I've never been sort of
a total screen addict, but I've struggled, like I think, you know, many, many other people have
with like how much I'm using my phone, how much I'm using various apps. I have come up with
convoluted ways of trying to reduce my screen time. You once were six hours late to a Hard Fork
taping because you wanted to find out what happened to Chimpanzini Bananini on TikTok.
I thought we agreed to keep that private. But like never in all my struggles with screen time
have I thought to sue the companies that were making the apps that went on my phone.
And I guess it's different when you're talking about kids, but like there is some part of me
that just feels like, well, it just feels like an easy way out. You know, blame the platforms.
And look, I think these platforms absolutely have culpability
here. I am not saying that I disagree with these jury verdicts. I think that these platforms,
especially Meta, have done the research, have found the harms, and have shielded them from
the public. But I just, I guess I'm, I'm thinking about my own experience of these addictive
platforms being one of like feeling bad about myself rather than trying to, you know, find someone else
to blame. Yes, but you also had the benefit of beginning to use these platforms when you were already
an adult, right? Like, your hippocampus
was formed, and I think... I was on
Instant Messenger from a very early age.
Do you really think that, like, messaging
apps are, like, as addictive and harmful in the same
way as, like, TikTok or Instagram is for some people?
Oh, my God, take me back to 1999,
put me on AOL Instant Messenger.
I could not tear myself away from that thing.
I had to put up a little message
with, you know, Get Up Kids lyrics
on it every time I left the computer
because it was such a rare event,
and I wanted my friends to know
that I was away from keyboard.
Casey, these things were addictive.
The kid got up.
That's a Get Up Kids joke.
Yeah, look, I just think that messaging apps are different from like these social platforms.
And I think, you know, honestly, like, I will be curious, you know, who knows if Instagram and TikTok will be what they still are in like 10 years, maybe when your son is ready or wants to use social media.
But I just think that it probably just feels very different than when you're a parent.
Yeah.
Well, Casey, are there any new social media apps that you're addicted to?
It's called Claude.
Wait, I do want to talk about the AI of this all.
So obviously, every discussion on this show has to come back to AI at some point.
So I'm curious, like, what effects you think this might have on some of these AI companies
because they are also trying to create experiences that are engaging, addictive, whatever you want to call it.
I can imagine some of these
lawsuits that are being brought against the makers of chatbots for harms.
It all feels like it's sort of going to converge at some point.
So what's your take on that?
Yeah.
So Pew did a study in 2025 and found that 64% of teens now use AI chatbots.
About 3 in 10 use them daily.
That same survey said that the teen use of YouTube, TikTok, Instagram, and Snapchat
had remained relatively stable.
Right.
So yes, chatbot usage is growing.
It has not yet come at the expense of the social platforms,
although, of course, I expect that we'll soon see chatbots inside all of those platforms, right?
And these things will all just kind of merge together.
There's something about these things where they do kind of go hand in hand.
And to your point, like, I think that, yes, AI chatbots will be the next frontier of this debate,
because in many ways they're much more engaging and I think, like, will be stickier than even these platforms are.
Yeah.
I mean, it just seems so obvious to me that the platforms should be, like, absolutely begging
Congress to regulate them because the alternative is like they just get sued into oblivion by a bunch
of law firms. I mean, absolutely. Like if I were running one of the big AI labs, I would want to
have an understanding from Congress of like, what do you consider a safe chatbot? Like,
give me a checklist that I can follow because I don't want to have to be dealing with this in,
you know, the next few years. Yeah. Casey, what's an addictive engagement mechanism we could use
to get people to come back after the break? Well, we could study their behavior and weaponize it against them.
Good idea. When we come back, Sebastian Mallaby, author of the new book, The Infinity Machine,
joins to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence.
Well, Casey, if our listeners read one book about AI this year, it should be mine.
But if they read two books, the second one should be Sebastian Mallaby's new book,
The Infinity Machine: Demis Hassabis, DeepMind and the Quest for
Superintelligence. Tell us about this book, Kevin. This book came out this week. It is full of a bunch
of new anecdotes and stories about the work of DeepMind and the motivations that drive its CEO,
Demis Hassabis. Sebastian is a longtime journalist. He's a fellow at the Council on Foreign Relations.
And he spent a long time with Demis and the people close to him and brought us this book about
what I think is the AI Frontier Lab that gets the least coverage relative to its importance.
Yeah, and look, I mean, Demis Hassabis is a singular figure.
He's been on hard fork several times, but Sebastian went really, really deep,
and I think maybe gave us the most fully featured portrait of the man that we've had to date.
And before we bring him in, because we're going to talk about AI, let's make our disclosures.
I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity.
And my fiancé works for Anthropic.
Sebastian Mallaby, welcome to Hard Fork. Great to be with you. So people who listen to our show are familiar with Demis
Hassabis and DeepMind. He's been on several times. What is something non-obvious about Demis that you
learned through talking with him for many hours and interviewing many people who know him?
I mean, I think maybe the spiritual underpinning for his scientific curiosity was interesting.
You know, there was one time when we were sitting in this London park and talking for a couple of hours,
and he suddenly started saying, you know, when I'm up at two in the morning, at my desk, by myself, thinking about science, thinking about computer science, I feel reality is screaming at me, staring me in the face, waiting for me to explain it.
And he calls it the God of Spinoza. This is the 17th-century philosopher Spinoza, who said that to understand nature is to get closer to God's creation,
and that resonates with Demis.
Maybe that's something people don't know.
That's interesting.
I mean, yeah, this has been something that's come up in my own research, too,
is that, you know, he grew up going to church, I believe, with his mother.
And I think, unlike a lot of the other AI leaders,
he has a way of sort of fusing the science of AI with his own spiritual beliefs.
And I know some folks have seen his ambition and his many years of
competing to build AGI, and have seen something suspicious in that, right? Elon Musk has this
whole theory about how Demis secretly wants to be an evil AI dictator who takes over the world.
And I guess I'm curious if in any of your reporting with him, you ever saw something that
seemed like what Elon Musk was talking about. No, I mean, to the contrary, I think this idea
that Demis is a, quote, evil genius, which is the phrase that Elon used to use,
came from the fact that in his video game production days,
Demis had created a game called Evil Genius,
and so maybe it was a joke at first.
But, you know, really, I got to know Demis extremely well.
I spent more than 30 hours with him.
You stress test people quite deeply, as you know, Kevin,
when you're writing about them,
and then you might get pushback and legal threats and all that stuff.
And he did make me talk to his lawyer once,
and it wasn't totally easy the whole time,
but he was reasonable in the end.
Wait, why did he make you talk to his lawyer?
Yeah.
He was very mad at the fact that I unearthed the whole story about DeepMind trying to spin out of Google between 2016 and 2019.
And, you know, they retained a whole bunch of advisors, lawyers, bankers, et cetera.
They got Reid Hoffman to pledge a billion dollars to finance the spin-out.
They went to see Joe Tsai in Hong Kong, the Alibaba co-founder.
Anyway, so the lawyer was not amused that I had all these internal documents from inside DeepMind,
which had been leaked to me, the board presentation that DeepMind gave to Google, and so forth.
And he said, you're not supposed to be writing about this. And I said, well, you know, people gave me this
stuff and tough. So there were moments of free and frank discussion.
I have always believed that when a source gives you secret documents, it helps you get closer
to God's creation. So that's what I would have told him. I wanted to ask another question about
childhood, because Demis told you that he really identified with the boy-genius protagonist of the
novel Ender's Game, relating to this feeling of being socially isolated by his own talent
and consumed by a desire to make his mark on the universe. And the reason it struck me is that
in this novel, Ender believes that he's doing training exercises, but then what he thinks is
like a test, essentially a video game, accidentally wipes out an alien species.
So I wondered if you talked with him about, like, why he relates to that story. And in particular,
if there's any relation between that and the idea of maybe trying to build a superintelligence.
Well, I was astonished. This was before my first dinner with him. And it was sort of in kind of the vetting process. It was the last part of the vetting process where he agreed to give me the access I needed. And he said, you know, you've got to read this novel before you come and see me. And so I show up. I've read this story. It's about a diminutive boy genius who basically saves humanity from aliens. And I'm thinking, does he really see himself as saving humanity by doing what he's doing with AI?
And even if he thinks that, why would he be so crazy as to tell me?
I mean, surely that's hubristic beyond belief.
Why would you put that out there?
And, you know, he made no secret about it.
He said, yeah, you know, I feel like I identify with it because this guy put all of his energy and his life into saving humanity.
And I feel like I'm on a mission like that.
And he said, I felt so strongly about this.
I gave it to my wife to read, thinking that she would understand me better and sympathize with me.
And you know what?
She sympathized with the kid, Ender, but not with me.
That's not fair.
Yeah.
I mean, one other character trait that comes up over and over again in reporting about Demis
and especially in your book is how competitive he is.
This is a guy who loves to win.
You know, he was a child chess prodigy, and he won this thing called the Pentamind, you know,
five times, which is sort of like an all-around gaming competition.
Do you think that is part of his approach to AI?
He's always talking about how he wants to use this to solve scientific mysteries and cure diseases,
but is some part of it just like, this guy loves to win, and this is a really big contest.
Totally. I mean, that's exactly right. I remember going to see him, you know, when ChatGPT was just going viral,
and he said, you know, Sebastian, this is war. These guys at OpenAI, they've parked the tanks in my front yard.
He actually said, park the tanks on my lawn because he's English. But, yeah, you get it.
You bring up the release of ChatGPT, which happened in November 2022.
And I'd love to hear a little bit more about how Demis reacted to that.
Because I think before that happened, Google really thought they were comfortably in the lead
and did not seem to be feeling a lot of pressure to release anything.
So I'm particularly interested if, in hindsight, Demis has regrets about the fact that they sort
of let Sam Altman beat them to the punch.
Yeah, I mean, he has an explanation more than a regret.
And the explanation is super interesting.
It's basically that because he studied neuroscience for his PhD,
and you've got to remember, this is back in 2008, 2009,
so nothing worked in AI.
So we were starting from scratch.
And one of the ideas in neuroscience is called action in perception.
And this is the idea that to really be intelligent,
you have to take action in the world.
You don't know what it means for something to be heavy unless you pick it up.
You don't know what gravity is unless you actually drop something.
And so he had this idea, when the Transformer paper came out in 2017
and OpenAI was starting to do the first GPT in 2018, the second one in 2019 and so forth:
you know, that's not going to work.
It's not going to take you all the way to powerful intelligence
because language is just a system of symbols.
It's not grounded in the real world.
And it's not that he was wrong in the sense that now we see world models come back in 2026
as a big area of excitement and research.
But back in 2018, 2019,
he was missing the fact that a huge amount of knowledge
about how the real world works
is in fact in language
if you download all the language on the internet.
And he missed how much you could squeeze out of language
as a training set.
Yeah, I mean, I want to run a theory by you, Sebastian,
for your take.
But as I've been working on my own book
about this sort of period at Google
and at OpenAI and at DeepMind,
it strikes me that there are sort of like two visions of what intelligence is that these
companies disagree on. And in one vision, it's like intelligence is about winning. It's about
optimization. It's about a contest between rival intelligences. And that's very much like the
DeepMind sort of reinforcement learning paradigm, which is like AlphaGo. And you know, you play a board
game a bunch of times and you get better at it a little more every time. And then there's this
view, which is sort of the more OpenAI, sort of language model, scaling paradigm, which is like,
no, it's about answering questions. Like, being very smart is about having the right answer to
everything. Does that theory hold water with you that there's something like psychological about
these two approaches to AI development that actually are rooted in, like, what we think intelligence
actually is? Yeah, I would say that the DeepMind special sauce right from the beginning was to try
to put those two things together. It's interesting, for example, that with
AlphaGo, the early research on that, Ilya Sutskever contributed to it. And of course, he was,
you know, the sort of leading practitioner of deep learning, and went on to be OpenAI's chief scientist.
But at the time, he was working for Google because Google had acquired his boutique. And so
the reinforcement learning people in London, working for DeepMind, collaborated with the deep learning
people in Mountain View. And that's what produced the AlphaGo breakthrough. So I think,
I think you're right. There are these two strands
within AI: reinforcement learning, which I would describe as learning through experience,
interaction with the real world, through trial and error.
And on the other hand, learning through data, and that is deep learning.
And for humans, you could think of it as being, you know, you can go to the library and read
all the books, and that would be deep learning.
You're learning from data, from sort of crystallized human knowledge.
Or you can go out there in the real world and learn about stuff by planting your
garden or whatever, you know.
Yeah, you can be like Casey, who's never read a book.
I'm going to get around to it one of these days.
Learns by trial and error.
Yeah, so we're sort of the two approaches here.
You mentioned earlier this, I don't know if it's fair to call it a plot.
It sort of seems like a plot, that they had at one point, after they had gotten acquired by Google,
to try to spin themselves out.
I believe they called this Project Mario.
I would love to hear a little bit more about how that came about and why they didn't
go through with it. So what happened was that when they sold DeepMind to Google in 2014, they had a
rival offer from Facebook, and Facebook actually offered them more cash. And one of the reasons they said
no was that they wanted safety protections around their technology. And so they had this deal.
It was going to be a safety and ethics board. And Google promised that, and they went ahead and sold
to Google. And they had a first meeting of the safety and ethics board in 2015 after the acquisition.
And in order to, like, bind in the other people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX.
They got Reid Hoffman to show up.
And you will notice that these are the characters who either founded OpenAI or funded it.
So Google wasn't best pleased, as you can imagine.
I have to say, that doesn't seem like a very ethical thing to do.
You know, maybe these are not the people I would have put on my ethics board.
But it's a dilemma, right?
I mean, either you put people on the board
who don't know what they're talking about
and they're not interested in AI,
or they do know about AI,
in which case they want to go and do their own thing
because it's too exciting not to.
And a fundamental mistake that Demis made
in his early conceptualization of how AI would be developed
was this notion that there would be one single lab
producing AI on behalf of all humanity.
And therefore it could be safe
because there'd be no race dynamic.
And you could take your time
in sort of red teaming the models
before you release them.
And that's why he brought Musk into the tent.
That's why he brought Reid Hoffman into the tent
precisely because he thought we could all be one team together.
And so then what happened after,
to answer your question, Casey.
So what happened after was that
having lost that first experiment
in setting up
a safety and ethics oversight board,
Google didn't want to do another one,
and really DeepMind's Project Mario
was to try and force them to do more
by threatening to walk out if they didn't.
Why did they call it Project Mario?
Was that about the video game?
Good question. I don't know the answer.
Sorry.
I failed to ask that question.
It's much better than the alternative, Project Wario,
they were working on,
which was just the evil version of that.
So how does Google get them to abandon this plan?
You know, it's attrition.
Sundar Pichai's personality and his management style come out quite interestingly in this whole story
because right at the beginning in 2015 when the first safety and ethics oversight board fails,
the next idea that Demis has for how to get some independence and control of the technology
is to become a Bet, as in an Alphabet Bet, when they were spinning out Waymo and some of the other side
bets they had. And Larry Page was cool with this, and he was CEO at the time. But then right
as these discussions were going on, he handed over to Sundar. And Sundar kind of pretended to say,
oh yeah, absolutely great idea, we should look into it. But really, he was just spinning them
along and had no intention whatsoever of letting Demis spin out because he recognized him as the
AI talent that Google was going to need in the future. And so essentially there was this long,
drawn out, you know, delays here, and we should just look at some more details. And here's
another term sheet. And I was given some of these term sheets. They're like huge great documents
with red lines all over them where, you know, one team of lawyers had come back to the other
team of lawyers. And, you know, basically by 2019, everybody was exhausted. It all fizzled
out and they just moved on. There's been a lot of sort of jostling for independence within
DeepMind ever since the earliest negotiations about selling to Google. Give us an update on how things are
going with them now. Like, you know, when we talk to them, they present things as being, you know,
fairly like hunky-dory between everyone, but are there still kind of tensions and fault lines
between Google and DeepMind? Well, you know, I'll give you sort of what I would regard as
somewhere between probably true and unconfirmed rumor. Is that all right? Am I allowed to do that?
Please.
We love to gossip on this, right?
Are you kidding?
Spill the tea.
So I'd say that, you know,
Sergey Brin is the troublemaker here.
At one of the Google I/Os,
I guess it was a couple of years ago,
the stage was set up for two people to be on it.
and the stage was set up for two people to be on it.
There was the interviewer and there was Demis.
And suddenly, Sergey kind of runs onto the stage.
They have to get a third chair.
And then he kind of inserts himself into that conversation.
You know, what I hear is that that was the outward
symptom of a much deeper tension, where Sergey doesn't really like Demis's leadership on this
and wants to push back against it. And I think it follows from that that the single most important
business buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis.
Because Sundar manages the board, manages the sort of high politics of Google and Alphabet,
so that Demis has the space, the resources, the oxygen to go do
his science. And without Sundar holding that all together, we might be in a different place.
Yeah. One area where Demis has changed his mind is about the use of AI in the military.
This was a big sticking point in the negotiations with Google and Facebook back when they were
selling DeepMind. He didn't want their technology to be used for the military. Now, obviously,
Google DeepMind has one of these Pentagon contracts. They're working with the military. So
what do you attribute that shift in his thinking to? Is it just kind of the realities of the market or
needing to compete or what is it? Yeah, I mean, Demis described this to me as, you know,
you mature. You get to know the real world and all that. One might say, how come you weren't
mature when you sold the company in the first place? I mean, surely it was predictable. But I think
that the real truth of the matter is he did not predict. I mean, it comes back to this
singleton idea, which I mentioned before. He really thought there would be one
lab. And in a scenario where there's only one lab who's got the technology, then sure, you can say
to the military, you can't have our technology, go away. And the problem today is, as we saw with
Anthropic just now with the Pentagon, if Anthropic tries to draw a red line, you know, OpenAI is in
there like a shot and says, hey, Mr. Pentagon, what do you need? We've got it for you.
Do you worry that Demis's competitive streak or his pursuit of science, whatever it is that drives him,
will compromise his ability to develop something like AGI safely?
You know, I asked myself that question all the way through my research,
and in some ways the question about,
can you be a strong consequential actor in the world and still be good,
is sort of the deep question in the book.
And he is somebody who really wants to be good.
And I think one way of framing this question about,
is he being good, will he be good, can he be good?
is to say, should he, will he do what Dario did
standing up to the Pentagon about red lines on military usage and surveillance?
And I don't think he is going to do that.
And I think the way he would rationalize this would be to say,
look, you've got to pick your moment with this stuff.
If you make a stand, and actually the Pentagon does what the hell it wants anyway,
you didn't really make the world better.
My best shot at making the world better and making AI safer
is to go through the route, which is the only route,
that can get us to AI safety,
and that is government intervention,
forcing safety rules on all the labs at once,
because otherwise some are safe, some are not safe,
and the ones that are not safe are going to screw it up for everybody.
And that's the route that I think Demis wants to push.
Problem is you have the Trump administration,
they just want to accelerate.
And so all you can do for now, I think,
is to keep this conversation alive with other governments,
and then maybe when there's a new administration
in the U.S., we could see a conversation.
You write that Demis used to inform job candidates at DeepMind
that if they signed on, they should, quote,
prepare for a climactic endgame
when they might have to disappear into a bunker.
Why would they have to disappear into a bunker?
And do they still tell the job candidates that?
Yeah.
So the idea was, when you get very close to AGI
and it's super dangerous, you're going to be, A,
subject to potential attack by bad guys who want to steal the technology. And B, you really don't want to
be distracted by quotidian real-world stuff. So you disappear into the desert. Yeah.
That's right. You leave your TikTok and your phone and so on. I think Kevin used to lock his phone up
in a box, as I recall. That's correct. And so you do a Kevin and you go and you really, really
focus and you really get the AI right in the last stages. That was sort of Demis's
vision. And to test whether he really meant it, I was having dinner with somebody who used to be
at DeepMind in that period around 2015, 2016, and had now left. And I said, this wasn't really true, was it?
He didn't really mean it. And he said, oh yeah, yeah. This guy said to me, if Demis had told me any time I was working
at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been given
fair warning. Wow. So the bunker is in Morocco, just so everyone knows.
Yeah, and I said, why Morocco?
And he said, well, you know, it's the desert.
And, you know, the Manhattan Project was in the desert.
Oh.
Interesting.
It's the Oppenheimer syndrome.
These guys in their Manhattan Project analogies, man.
I don't know if they read to the end of that story.
It didn't go that well.
Sebastian, you spent many years writing about hedge funds.
And I remember encountering your work back when you were writing about hedge funds and hedge fund managers.
You're now spending time with the new masters of the universe.
And I'm curious what, if any, observations you have about how those two classes of people, the AI leaders and the hedge fund managers, are similar or different?
Well, I would say that the hedge fund guys are playing a game inside a set of fairly well-understood rules.
They're not rethinking humanity.
They're not rethinking everything about society.
They're not changing the way we bring up our kids.
They're not changing the conception of what it means to be human.
Speak for yourself.
I'm training my kid to do algorithmic arbitrage.
He's four.
He's terrible at it.
He's down 200% this year.
Anyway, sorry, carry on.
Yeah, but I just think that AI is so, so much bigger
than, you know, some kind of event-driven arbitrage
or whatever you want to talk about with hedge funds.
Maybe a last question for me.
I have a question about the writing of this book
and how you decided to frame it.
You know, it strikes me, Sebastian, that we don't know how AI is going to go.
You know, we don't know whether AI is going to turn out to, you know,
cure a bunch of human disease and usher in a utopia or usher in these, like, far darker
scenarios.
I think it's clear that you have a lot of respect for Demis and the work that he's doing,
but there's also this risk that things go really, really badly.
So I'm curious, as you wrote the book, how you approach that tension and the sort of not
knowing of how history is going to judge this person who you've now gotten to know so well.
I thought of the book as a book about that tension. In other words, I'm trying to do a portrait of
somebody who has his hands on the 21st century version of the nuclear material, who has that
tingling sense of playing with something that could destroy humanity. What does it feel like
when you're creating that? Can you sleep? How do you live with it? And I think I've delivered
a portrait of somebody who's in that hot seat. And hopefully that remains interesting for some
time. And it's not something that depends on how this AI development story ends.
Well, Sebastian, thank you so much for coming on. The book is called The Infinity Machine,
and it is out now. Thank you, Kevin. And Casey, thank you.
Thank you, Sebastian. When we come back, a game of HatGPT. It involves snowmen.
Would you like to build one?
I don't think so.
I saw what happened to Olaf.
All right, Casey.
Well, we took a little break last week,
and there's been a lot of tech news,
so we feel like we should do a round-up
and play a round of HatGPT.
HatGPT, of course, the game
where we put recent news stories into a hat,
draw slips of paper out of the hat,
discuss them,
and then when one of us gets bored, we say to the other stop generating.
And if you can't see us, we're using the official Hard Fork hat merch.
And Casey, it appears that these are sold out at the New York Times store.
Not that specific hat, which was, of course, a Hard Fork Live exclusive.
Yes, this is an exclusive.
You can't get this one, but you also can't get any of the other ones.
Here's the important point.
You cannot get a hard fork hat anymore, so stop trying.
Now, someone did suggest to me the other day that we should make hard hats
for Hard Fork, like a yellow construction vibe.
Well, we could wear them over to the new studio,
which is being built for us right now.
That's true. Do you think we should make that?
Yeah, Hard Fork hard hat.
That's a perfect piece of merch.
Great.
All right. Casey, you go first.
All right, Kevin.
This first story comes to us from 404 Media.
An AI agent was banned from creating Wikipedia articles,
then wrote angry blogs about being banned.
I feel like I've heard something like this before.
So, Kevin, once again, agents are writing blog posts.
What do we make of this?
This would never happen on Grokipedia.
No, look, I think this is just going to be the year that every system on the internet that is built on human contribution and review is going to break.
And it will break not only because of the AI tools, but because people are letting them loose onto websites where they are doing things like editing Wikipedia articles and
defaming people who, you know, contribute things to GitHub projects.
We heard from Scott Shambaugh about that on a previous episode.
But I think this is going to be a challenge.
I have started talking about the inbox apocalypse that is going to hit this year,
where everything that is normally sort of reviewed and bottlenecked by humans is just going
to be overwhelmed and flooded with AI submissions.
Absolutely.
I mean, I'm already getting emails now every week from something claiming to be an AI agent
that says, you know, it's running a company, but it's always sort of like, let me know if you want to talk to my human. And I'm like, you're the human.
Better hope I don't catch them in a dark alley, because this does not belong in my inbox or frankly anywhere.
Yeah, I'm getting these too. It's a total scourge. It's somehow even more annoying than the faceless PR spam that you and I get.
Just to be very clear, there's not one thing that anyone's agent could do or say to get me to respond to it anyway. So use that information as you will.
I hope that goes into your training data.
Stop generating.
All right.
Next up.
This one comes to us from Sean Hollister at The Verge, titled,
I met Olaf, the Frozen robot, who might be the future of Disney Parks.
Sean reported in mid-March about his interaction with a new animatronic Olaf the Snowman robot from Frozen.
It weighs 33 pounds.
It was trained with an Nvidia GPU and is controlled by an operator using a steam deck.
But when it made its debut at Disneyland Paris, well, Casey, something happened.
Should we take a look?
Let's take a look.
All right, Olaf, the snowman, talking, waving his stick arms.
Oh, no!
No!
We lost him.
Olaf!
Oh, the carrot nose falls off.
Oh, oh, it's, oh.
There's something about the way that he very slowly falls onto his back.
Oh, no.
Yeah.
20 children just got lasting trauma.
They're going to be talking about this in therapy.
Look, what do you expect?
Like, of course he was frozen.
That's what the whole movie is about.
Do you want to kill a snowman?
Okay.
I mean, it's just reliably very funny
when you create an animatronic thing for a child
and then it is like revealed to be a machine
and it just sort of feels like a lovecraftian horror.
Yes.
Like something about that transition
from like a cutesy, cuddly thing
to like its eyes are, you know, bulging out of its head
and the sparks start flying out of the back.
I'll never forget the day at Chuck E. Cheese as a kid
when I learned that the guitar-playing mouse wasn't real.
You know Chuck E. Cheese's full government name, right?
What is it?
You don't know.
This is not a joke.
It's Charles Entertainment Cheese.
Come on.
I swear to God.
I learned something every day from you.
Stop generating.
All right.
Now's my turn.
Ah, well, this, Kevin, is a story about the Claude Code
leak. So, Kevin, what do you make of this Claude Code leak?
Well, I think it's a big deal, in part because the agentic sort of coding harness that is around
Claude is really the special sauce, right? The model underlying it is part of what makes
Claude Code and other agentic coding systems good at coding, but it's really all the stuff
around it. And that's what leaked. It is not the actual weights or the source code of Opus 4.6
or whatever model people are running inside Claude Code.
It's like the sort of apparatus around it that makes it quite effective.
So within hours of this leak, there were people who had cloned it and set up their own versions of it.
I imagine it's a very busy week over at the Anthropic legal department trying to get all this stuff taken down.
But look, I think this kind of thing was inevitable, maybe not at Anthropic, but like the agentic coding tools were all going to get good.
They were all going to sort of reverse engineer cloud code and figure out.
out what made it better.
But I think this probably just accelerated that.
When I saw this, my first thought was, right now, Kevin Roose is somewhere vibe coding
in Claude Code, using the downloaded leaked Claude Code harness.
I have not yet downloaded the leaked Claude Code harness, but I have seen other people
sort of taking it and then putting it on top of, like, an open source Chinese model or something,
sort of Frankensteining their own version of Claude Code that they can run.
And I will say, the closer I get to my rate limits on Claude, the more I'm tempted to do something like that.
That makes sense.
Here's the last thing I'll say.
If Anthropic is looking for a new harness for Claude, they might want to pick one up at Mr. S Leather in San Francisco, down in the Folsom district.
They have some really nice options down there.
All right. Stop generating.
Okay, okay. Next up, out of the hat.
Oh, this one is good.
The AI fruit drama on TikTok that's too juicy to pass up.
This one says
we should watch a clip
from NBC News.
All right, everybody.
So tonight we are taking a look
at one of the most popular shows
circulating on TikTok
that's causing a lot of,
let's just say some juicy drama
because the stars of the show
are AI generated fruit.
Welcome to Fruit Love Island,
where eight single fruits
are about to flirt, fights, and trucks.
Things get messy fast.
The guy I want to couple up with
is Benonito.
So this is,
like sort of a Love Island style reality show featuring AI generated fruits.
There's a very ripped banana who is, you know, attracting attention from the lady fruits.
And it's all very silly.
But this is going mega viral.
This is the big new trend.
I just watched a banana kiss a pineapple, and that's not in the Bible.
Do you think I could win a multi-million dollar jury verdict for being forced to watch that?
I'm calling my lawyer.
I think it's a fair question.
I'll say this.
My mental health did not improve watching Fruit Love Island.
Watch what happens with the Passion Fruit in season three.
All right.
Stop generating.
This company is secretly turning your Zoom meetings into AI podcasts.
This one also comes to us from 404 Media.
And here's the name for a company, Webinar TV.
Wow.
Two great tastes that taste better together:
webinar and TV.
Has there been a worse word
in the English language than webinar?
Not to my knowledge.
Apparently this company
is secretly scanning the internet
for Zoom meeting links,
recording the calls,
and turning them into
AI-generated podcasts for profit, Kevin.
Oh, my God.
In some cases,
people only found out
that their Zoom calls were recorded
once Webinar TV reached out to them
to say their call was turned into a podcast
in an attempt to promote Webinar TV services.
Wow.
What is happening?
What is going on?
Okay, I want to start by saying,
I am committed to making a podcast with you for the rest of my life.
But if we ever get overtaken on the charts by an AI-generated webinar TV podcast
that's been trained on people's boring-ass Zoom meetings,
I am leaving this industry.
Here's why this is such great news.
I think a lot of podcasters have struggled with the idea that maybe their podcast,
you know, maybe they didn't have a great episode.
Maybe they're wondering, like, is this thing good enough to put out on the internet?
Congratulations, because every single human-made podcast is better than every single Webinar TV episode that's ever been released.
Yeah, I mean, I'm just like, these have to be the most boring podcasts ever created.
Like, what are you going to talk about?
Is it called action items?
Is it called Circle Back?
What's the title of this podcast?
Touchbase, a limited eight-part series.
There actually, I heard there's a great series over on Webinar TV right now.
It's called, oh, I think you're on mute.
So you may want to check that one out.
All right.
Stop generating.
Next, out of the hat.
We have North Korean hackers suspected in Axios Software Tool Breach.
This comes to us from Bloomberg, and it's about Axios, not the media company.
I actually would prefer to read a story about this from Axios if you have one on hand.
This is a tool, an open source tool widely used to develop software applications.
This has been a big security breach.
Hackers were able to breach one of the few accounts
that can release new versions of Axios late on Monday
and published malicious versions.
Axios is downloaded about 80 million times every week.
Anyone who has downloaded the malicious version of Axios
could then have their own computer and the data on it stolen by hackers.
This is being attributed to North Korea.
Seems really bad.
Yeah, man.
Like there's a lot of cybersecurity incidents we'll talk about
where it's like, you know,
but like no personal data was stolen,
or like, you know, nothing sensitive was at risk.
This is one where it's like, no, like, everything was at risk.
Like, this is one of the bad ones.
And, you know, if you've been messing around with NPM over the past week, you probably
need to take a look at this.
Yeah, it's really, I think this is going to be one of the biggest stories of the year,
just what is happening in cybersecurity right now.
I was watching this YouTube video.
If you ever need something to keep you up at night, watch a talk given by this guy,
Nicholas Carlini, who's a security researcher
at Anthropic, at a cybersecurity conference recently.
It is like the most terrifying conference speech ever given because what he's basically saying
is these AI tools have gotten better than almost any human hacker, any human security
expert at finding vulnerabilities in tools, even tools that have been around for decades
like the Linux kernel.
These language models are now finding bugs in them.
And basically every piece of code that exists
is going to need to be rewritten and substantially hardened
because we are facing like an onslaught of these very sophisticated AI tools
that can find every little bug and problem in them.
Well, I am going to watch that talk
just as soon as I'm finished watching Fruit Love Island.
But, you know, the thing that this brought to mind for me, Kevin,
was that last week while we were away,
there was this Anthropic leak where someone found
a draft of a blog post
that said
that Anthropic was delaying
the release of its next model
so that it could share it
with cyber defenders basically.
To my knowledge,
we have not seen something like this
happen since GPT-2 in 2019.
One of the big labs saying
like essentially
we're afraid to release this thing
because of what it might rot.
What is the present tense?
What am I reek?
Yes.
Because of what it might wreak.
That's wreak, with a W.
Yes.
Speaking of reeking, take a shower next week.
Hey, I was in a hurry.
All right, stop generating.
Okay.
You're up.
Okay, so this is actually a two-parter, Kevin.
Two stories about OpenAI recently that caught our attention.
One, Sora has shut down, which was a prediction that I made at our year-end episode.
Yes, you called this one.
This was my low confidence prediction for the year, and it's already come true by March.
And then a second story, which I think actually, crazily enough, is related.
OpenAI has apparently shelved its plans to release the erotic chatbot, or sort of the adult mode, that it said it was going to be bringing soon to ChatGPT in an effort to boost engagement.
So, Kevin, dying to know what you made of those two changes.
So I think you were smart to predict the end of Sora.
I think the story with Sora never quite made sense to me.
Like, it was obviously a very cool piece of technology.
It was devastatingly expensive to run, is my understanding.
Like, generating all those short videos was, like, computationally quite pricey.
And so I think they are making the decision to sort of spread their bets a little less
and consolidate around, like, a few projects, one being enterprise AI.
one being coding and sort of automating AI research.
But I think they maybe made a few too many side bets
in the past couple of years
that they are now seeing were expensive
and diverted resources away from the core.
I have to say,
I was personally really glad to see both of these changes,
like the release of this infinite slop feed app last year
and the company saying that they were going to release
this adult mode while they were still having
all of these issues
with, like, psychological problems
that some of their users
were experiencing
as a result of getting
a little too close to their chat bots.
I just thought both of those
seemed like really irresponsible moves
and just like contrary
to what they said their mission was.
So I was actually just really happy
to see them say,
you know what?
We're not doing any of these things anymore.
Like I think that was the right move.
Now, did they do that out of the goodness
of their heart and some sort of like,
you know, moral awakening that they had?
No, they saw Anthropic,
which had started to print money
because Claude was taking off
and they said, we want to get a piece of that.
But hey, whatever it took, I'm just glad it's happening.
Yep.
Stop generating.
Last up in the hat, Kalshi announces itself as the safe regulated prediction market in a new ad campaign.
Kalshi has recently been putting up green ads around D.C., and I've actually seen them in San Francisco.
The first one says, rule number one, Kalshi bans insider trading.
The second one says, rule number two, we don't do death markets.
Casey, your take.
Rule number three,
we'll always shoot you in the front,
never in the back.
Who are these people?
What?
Like,
these ads are raising a lot of questions
already answered by the ads.
Truly,
truly.
It's just so funny to me.
Like,
you know,
I went to this prediction markets conference,
like several years ago now.
I predicted you were going to bring this up,
but go ahead.
And,
like,
people from Kalshi were there,
people from polymarket,
were there,
people from all these,
like, you know,
obscure,
like prediction markets.
And it was like 50 people who were interested in this stuff.
And it wasn't legal at the time.
And so they were all using like sort of play money and like workarounds.
And it just seemed, like, no part of me was like, in three years, this will be the dominant industry in America.
And they will be taking out bus ads to tell people that they don't do death markets.
I know.
But at the same time, I keep reading all of these like stories and blog posts that are like, you know, why is this generation turning to prediction markets?
Is this like really the only future they see for themselves?
It's like, no, they used to be illegal, and now they're legal.
People love to gamble if you let them.
You are now letting them gamble.
So that's why they've hooked this younger generation.
Yeah, you don't think it's because of the information harnessing potential and the wisdom of the crowds.
I really do.
I'm still waiting for the wisdom of the crowds on a Kalshi market to improve my life.
Yeah, well, you're not going to find it when it comes to death or insider trading.
Kalshi rule number four, gambling is bad.
That's the ad I dare them to put up.
Oh.
Let's close the hat, Casey.
Close up the old hat.
That was HatGPT.
A lot going on.
A lot going on.
Busy week.
Busy week.
Never a dull day here in Silicon Valley.
No, sir.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Viren Pavich.
We're fact-checked by Caitlin Love.
Today's show is engineered by Chris Wood.
Our executive producer is Jen Poyant.
Original music by
Elisheba Ittoop,
Marion Lozano, and Dan Powell.
Video production by Sawyer Roque,
Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at
YouTube.com slash hardfork.
Special thanks to Paula Szuchman,
Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork at NYTimes.com
with who you're rooting for to win Fruit Love Island.
I've got my eyes on the Kiwi.
