Hard Fork - Bad Apple + The Rise of the AI Empire + Italian Brain Rot
Episode Date: May 9, 2025
This week, iPhone users started to feel the impact of a stern court order against Apple that requires the company to stop collecting a commission on some app sales. We break down what this means for apps like Kindle and Spotify, and why the judge suggested that Apple and a top executive should be investigated for criminal contempt. Then, Karen Hao joins us to discuss her new book about OpenAI and explain why she believes the benefits of using the company's tools do not outweigh the moral costs. And finally, Casey introduces Kevin to a strange new universe of A.I. slop that's racking up millions of likes on TikTok.
Guest: Karen Hao, author of "Empire of AI"
Additional Reading: "Judge Rebukes Apple and Orders It to Loosen Grip on App Store"; "Brain Rot Comes for Italy"
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Well, Casey, have you heard the exciting news this week?
Which news, Kevin?
The Golden Globes are adding a podcast category.
I did not hear that.
Yeah, that just came out.
So yet another award. We're not going to win.
Well, I don't know about that, because if I know one thing about the Golden Globes,
it's that until very recently, it seems like you could just bribe them directly to win.
I don't know if that's still true, but we should look into it.
Yeah, what does it cost to win a Golden Globe these days?
I don't know, a few hundred dollars.
Checks in the mail.
Wait, unless there are tariffs, topical.
Okay, now we're definitely not winning.
Now we're not winning?
Cause I accused them of corruption?
Oh, listen, we speak the truth on this podcast, okay?
We do.
I don't care what it costs me.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the scathing court ruling that forced Apple to give up some control over
its app store and could send an executive to jail.
Then, author Karen Hao joins us to discuss her new book on the history of OpenAI
and the hidden costs of reaching massive scale.
And finalini, it's time for me to teach Kevin
about the joys of Italian brain rot.
Mamma mia. Casey, have you noticed a smell in the air over San Francisco this week?
Many smells in the air, Kevin.
Well, the smell I'm talking about, Casey, is the smell of freedom.
Because in the last week, Apple has lost its iron grip on the iOS app store thanks to a
ruling by a judge.
Commerce is legal in America again, Kevin.
Yes.
So we're going to talk about this today.
Apple has been forced to make some big changes to its app store by a lawsuit that was brought
by Epic Games, the maker of Fortnite.
A judge ruled last week that Apple had not complied
with an earlier injunction,
and we will get into all of that.
But first, I just want you to make the case
that this matters to normal people.
Why should the average person with an iPhone
care about what Apple's rules for its app store are?
Well, to me, it actually starts with the Kindle app, Kevin.
Lots of people love to read on their phones and tablets.
And I think most people I know in my life have had the experience
of opening up the Kindle app or the Amazon app, thinking,
I want to buy that book.
And then there's just kind of a big blank spot where you're
expecting to see the buy button.
And Apple is the reason for that blank spot.
They charge such a high commission on e-books that Amazon
and other companies cannot profitably sell them
And so, since the dawn of the App Store, in order to buy a book on your phone, something that should be very easy has required you to open up a browser, log into an Amazon account, navigate that whole system. And Amazon is not alone; many, many developers have had to go through similar contortions just to be able to sell their products and still make any kind of profit.
Yeah, this is the so-called Apple tax of up to 30% that developers have to pay when they
want to charge for apps or purchases within their apps. And for many years, Apple has
not only levied this tax, but they have also made it impossible for those developers to direct users off of Apple's platforms to say, hey, if you want a better
deal on this Spotify subscription or this Netflix subscription or this purchase of an
iPhone game, you can actually go on the web and get a better deal there because there
we don't have to pay Apple's 30% fee.
That has not been allowed. And so Epic Games, which makes
Fortnite, brought a lawsuit years ago to try to get those policies changed. And in 2021,
a judge in California named Yvonne Gonzalez Rogers ruled that Apple had violated the law
in California against unfair competition. She ordered Apple to allow apps to provide users with links
to pay developers directly for their services.
And that way they could avoid paying Apple's 30% commission.
And after that ruling, Apple did go and make some changes,
but apparently they didn't do a good enough job.
No, and I would say this has been apparent
to most people who've been following this.
I think we've talked about this on the show.
Apple did what is often called malicious compliance,
doing the absolute least while dragging and kicking
and screaming the whole time.
Yeah, so we're gonna talk about
some of that malicious compliance,
but let's just say straight up, this was a scathing opinion.
I have rarely read a judge who is so obviously angry
at a tech company for doing what they did.
No, this was the kind of speech that you typically only see
on a Bravo reality show.
Yes.
So Judge Gonzalez Rogers not only accused Apple
of doing this kind of malicious compliance,
but she also accused them of outright lying to the court
under oath.
She referred both Apple and its vice president of finance, Alex Roman, to federal prosecutors for a potential criminal contempt investigation, including Roman's apparent perjury.
We should just read the last paragraph of the order from Judge Gonzalez Rogers, which is truly
the mic drop moment. She writes, quote, Apple willfully chose not to comply with this court's
injunction. It did so with the express
intent to create new anti-competitive barriers, which would, by design and effect, maintain a
valued revenue stream, a revenue stream previously found to be anti-competitive. That it thought this
court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made
it worse. For this court, there is no second bite at the apple.
Period.
But you know what?
It kind of was a second bite at the apple, because she bit them the first time, and then they didn't comply, so she had to bite them again.
Yes.
So let's just talk for a second about some of the details
that were revealed in this judge's opinion
that have come out about how Apple tried
to skirt compliance with this earlier 2021 injunction.
Yeah, well, and this was well known
to all of the developers,
but if you wanted to use an external sales system
in the App Store, you still had to pay Apple a commission.
And that commission was 27%, just three percentage points less than the 30% they would otherwise be paying Apple.
And of course, these companies
have to pay the payment provider.
So basically Apple created a system
where you were actively disadvantaged in multiple ways
from trying to operate outside of the App Store.
Yes. So I knew that Apple was charging a commission
for apps that would send people, like if you're Spotify,
and you want people to be able to subscribe to your app on the internet,
pay a lower price, pay you directly rather than going through Apple.
You could do that under Apple's sort of revised rules,
but Apple would actually charge you a 27% commission,
which by the time you added credit card fees
on top of that would probably be more than the 30%
that they would charge you.
So this was clearly a case of Apple trying to say,
well, go ahead and use this other system,
but it's not actually gonna save you any money.
No.
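To make the arithmetic concrete, here is a minimal sketch of the math being described, assuming a typical payment-processor fee of about 3% (our illustrative numbers, not figures from the episode); under those assumptions, the external route saves the developer essentially nothing:

```python
# Hypothetical worked example: a developer's take-home on a $10 purchase
# under Apple's in-app system vs. the external-link system described in
# the ruling. The ~3% processor fee is an assumption for illustration.
price = 10.00

in_app = price * (1 - 0.30)            # Apple's standard 30% commission
external = price * (1 - 0.27 - 0.03)   # 27% Apple fee + ~3% payment processor

print(f"In-app purchase: developer keeps ${in_app:.2f}")    # $7.00
print(f"External link:   developer keeps ${external:.2f}")  # $7.00
```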
And what I did not realize until I read
Judge Gonzalez Rogers's opinion here
was that Apple would not just collect those commissions
if you went directly from an iOS app
to the web to buy a subscription or a service,
but if you went a week later,
they would be able to track that you had gone to the web
from the iOS app and they would still charge the developer
that commission.
Yeah, it was absolutely outrageous.
It was insane.
And it was also not the only thing that Apple did
to try to dissuade iOS users from going to external links
to buy goods and services outside of their payment system.
Casey, what is a scare screen and how did Apple use this?
The scare screen was a pop-up that you would see when a user did actually try to click
out of the app store to make a purchase using an external system.
And while these were not the exact words, Kevin, here was the vibe.
Hey, loser, looks like you're trying to do something stupid.
You're probably gonna die.
Do you wanna try it anyway?
And believe it or not, Kevin,
when people saw a message that had that vibe,
most of them just chose not to click it.
Yeah, and what was so amazing about this
was that Apple, I guess, had tried to protect
some of its private company communications
from being seen by the judge in this case
by claiming some sort of attorney-client privilege.
But the judge said, no, no, no, out with it.
Let's see those emails.
And so we have, in this opinion, lots of emails between Apple executives,
including Tim Cook, the CEO, talking about the very specific language
to put on this scare screen and how to make it even scarier
so that users would
be less inclined to go outside of Apple's ecosystem and make a purchase.
Yes, and these internal documents showed that the company would lose minimal revenue or no revenue at all from this, right? They built a system that was maximally designed to protect their revenue, which ran contrary to the judge's order, which she wrote in the spirit of increasing competition and other companies' revenue.
Yeah, so to put it mildly, Judge Gonzalez Rogers did not find any of this charming in the least, and she also directly accused at least one Apple executive of lying outright under oath about what the company had done.
Casey, explain the perjury charge here.
Yeah, so this perjury charge was leveled
against Alex Roman, the vice president of finance at Apple.
And among other things, she focuses on this moment
where he testifies that until January 16th, 2024,
which is when Apple's revised system went into effect,
Apple had no idea what fee it would impose on purchases that linked out of the App Store.
He testified that the decision to impose a 27% fee was made that day, which is just like
so obviously untrue.
And of course, during the legal proceedings, business documents revealed that the main
components of the plan
were determined in July of 2023.
So basically this guy got caught red handed
and the judge is gonna punish him for it.
Yeah, and so effective immediately,
according to Judge Gonzalez Rogers's order,
Apple has to drop these commissions,
these 27% fees on these external links.
And Apple, as of last week, had officially updated its App Store guidelines to allow
those links out of the app in the US.
But Casey, what are the implications of this and how are other developers that put stuff
on iPhones reacting?
So developers are reacting by implementing the links that they've always wanted to have. So in the Kindle app, for example, now you will see a "Get book" button; you'll tap it, and it'll kick you out immediately into a browser where you can complete a purchase. Spotify and Patreon are also doing something like this. This is not a perfect solution; you can't actually just buy a book in the Kindle app yet, for reasons that actually aren't entirely clear to me. Maybe we'll get there.
But on the whole, we are essentially removing
the restrictions that prevented outside businesses
from communicating with their customers,
telling them about deals, telling them about their websites.
Just these sort of like very onerous restrictions
on the speech of these other companies have been wiped out.
Yes, and I think that gets to why these arcane
and somewhat small seeming changes
to the rules governing Apple's app store
really are important.
Apple has been for many years,
this sort of godlike gatekeeper on any company
that wants to make things
for the billion plus iPhones out there.
They have made extremely strict and specific rules about how
developers can and can't build their apps and sell products and services to customers.
They have effectively been a landlord over the entire digital services economy. And I think,
judging from this opinion, they have really abused that power and now they are getting slapped on the wrist for it
Yeah, and I think it has been to their own detriment, Kevin. You know, Apple's view is that these developers should feel lucky that they get to sell in the App Store at all, when in reality a big reason that we buy iPhones is because of the apps that are there. If you took the Amazon app and the Spotify app and the Patreon app and, you know, all these other apps off of the iPhone, people would start considering alternatives, right? And so I think that the balance between the developers and Apple had just gotten completely skewed, and Apple has not been recognizing the value of what those developers are bringing to iOS.
Yeah. So you think this ruling is a good thing?
I think it is absolutely a good thing.
I think it has been long overdue
and I hope it is upheld after Apple appeals,
which it is going to do.
But what do you think?
Yeah, I mean, I think it's an open question.
So Apple's defense of these app store rules
has always been some version of like,
we're protecting our customers, right?
If we let people, you know, side load apps onto the iPhone
in a way other than through the app store,
people will put all kinds of dangerous malware
and stuff on the iPhone and you'll be sorry.
If we let people pay for things on external websites,
then people will run all kinds of scams and people will be taken
advantage of.
And so by implementing these rules, we're really protecting our customers.
It's for your own benefit, essentially.
And I think it'll be really interesting to see if when these restrictions are gone, people
actually do say, we wish that Apple were taking a more active role here.
We want some of these restrictions back.
Or if the net result is just gonna be
that people have more choice
and they pay a little less for stuff
because the developers making that stuff
are not having to pay 30% of the revenue to Apple.
Well, I think that's gonna be the case.
This whole argument that Apple maintains
this pristine, vigilant control over the App Store,
I think has always been mostly a fantasy.
Think about the early days of ChatGPT, before there was an app. You know, you would go onto the App Store and you would search for ChatGPT, and you would see a dozen-plus apps that were all just clearly misrepresenting themselves as OpenAI, and that were some of the most revenue-generating apps in the entire App Store.
Apple could have stepped in to prevent that.
They didn't.
I'll give you a more recent example.
One of the best video games of the year is called Blue Prince. P-R-I-N-C-E.
All of the gaming bloggers love it.
I've been playing it. I've been loving it myself.
The day it came out, somebody just ripped it off
and just uploaded it onto the App Store
and was selling it for, I don't know, 10 bucks or something.
Why didn't Apple know that?
They are not paying the attention to the App Store
that they are telling you that they are paying.
Yeah, I mean, to me,
the most interesting part of this,
as with a lot of these antitrust trials
that are going on right now,
was just seeing the internal communications
at these companies.
And in this ruling,
there are all these fascinating excerpts
from these emails and messages between Apple executives
sort of talking about the various plans
that they had to sort of circumvent this injunction
and charge this 27% fee.
They had all these code names like Project Michigan
or Project Wisconsin,
so that they could talk about this stuff
in a way that would not be obvious
that they were doing some sort of price fixing.
And it just makes you realize
like these giant tech monopolies
did not end up that way by accident, right?
They have had to work very hard for a very long time
to prevent competition,
to keep their market power and their dominance.
And I don't know, man,
there's just something really depressing about that.
Like these are companies that used to succeed
by making good things that
people loved. And in some respects they still do that, but they also spend just a ton of time.
Their top executives are in these meetings talking about whether the fee should be 27%
or some other number. And it just makes you realize like they have really lost the plot here.
Absolutely. Well, let me try to cheer you up a little bit then, Kevin, because I think there actually is a negative consequence for these folks
of just growing their profits so big on the basis of this extremely easy money
where, you know, they just make every developer pay this very high rent to
them, and that is Apple has been missing the boat on next-generation technologies.
We know that they invested billions of dollars
into a car project that they could never figure out
and had to abandon, right?
We know that they are struggling to figure out
how to do anything with AI,
and have had to walk back a bunch of claims recently
in a really embarrassing way.
We know that the Vision Pro,
their most recent hardware initiative, is not taking off,
in part because
developers do not want to make apps for it because they have not been able to get rich
making apps for it, right?
So all of this stuff is just adding up in a way where Apple's decisions really are coming
back to haunt it.
And while it remains a giant, and I'm sure will for a very long time, we are starting
to see some little cracks in its armor.
Yes, and yet, Apple just reported its earnings
for the last quarter.
It made $95.4 billion in revenue, up 5% year over year.
So despite the fact that they are missing
all of these new innovations and trends,
that they're late on generative AI,
that they haven't succeeded with the Vision Pro in the way that they had hoped,
they are still doing quite well as a company.
So I don't know that this is actually coming back to
bite them in the way that we might hope it would.
Well, let's see what happens.
The idea behind these rules was never to make Apple a tiny company
that was struggling to get by.
It was just to get them to share a very small portion of the wealth
with a large number of developers.
Like, you know, Apple has done a ton of incredible innovative things.
They deserve to be rewarded for that.
They deserve to take some sort of commission from the apps in the app store, right? But this has been about trying to create a more level playing field for other
developers out there. And you know, if the end result of this is that Apple is still
pretty rich and profitable, I think that will actually make the point that the judge is making,
which is that there is no need for Apple to, you know, engage in the sort of shenanigans it's been up to.
Yeah, I think the best outcome possible here
is that all the big developers that can afford
to sort of develop their own payment systems
for their apps or send people to external websites
to buy things, that they do that.
And they start charging way, way less than 27% for that.
And that Apple is ultimately forced
to improve its own payment system,
to maybe reduce its fees, to, in other words, compete.
That is what all of this is about, is forcing Apple,
a company that has not had to compete
for the affections of iOS developers in a long time,
to finally step up and do something different.
Keep in mind, even Microsoft,
which was sued for anti-competitive behavior, you know, back in the early 2000s,
they never said we want to take a 30% cut of every software
program sold on Windows. They actually left a lot of money on
the table. And it helped that ecosystem to thrive, right? I
would like to believe something similar could happen here.
When we come back, we'll talk to author Karen Hao about her new book on OpenAI and the drama that's been making the rounds this week.
Yeah, although I don't know if this is so much drama as the company is trying to retreat
from drama, Kevin.
Yes, so OpenAI announced on Monday of this week
that it was no longer trying to get out
from under the control of its nonprofit board.
That was something that a lot of people,
including Elon Musk, had objected to.
A lot of former OpenAI employees
and others in the AI field had said,
hey, wait a minute, you can't do that.
You've still got to have this nonprofit board controlling you. And OpenAI, after hearing from some attorneys general that they were not happy about this plan, has retreated.
So what is the new plan, Casey?
And how is it different than the old plan?
So the old plan was basically: the nonprofit is going to no longer have any control over the for-profit enterprise. It's going to go be a separate thing. It's going to invest in, you know, various AI-related causes and philanthropies. Under the new plan, the nonprofit is going to retain control over the for-profit.
So basically the status quo is going to be in effect, Kevin,
except for a couple of key changes. One is what is now a limited liability corporation
is gonna become what they call
a public benefit corporation.
And a PBC, as they are called, has responsibility
not just to think about shareholders like Microsoft
and SoftBank and everybody else who owns a chunk of OpenAI,
but also to think about the general public, right?
So that's sort of one important idea that's there.
The other big idea is that the nonprofit is currently set
to get some unlimited amount of profits.
If, you know, OpenAI does eventually become
a trillion dollar company,
that's not gonna be the case anymore.
Under this new model,
the for-profit is gonna give some stake to the nonprofit,
but after that, it's gonna be a very normal tech company.
Everybody who owns shares, all of the employees,
they can get unlimited upside.
And the more money that OpenAI makes,
the more money that they can make, too.
Right.
So these profit caps that OpenAI had previously
had in place where investors like Microsoft
were sort of limited to earning some multiple of the amount
that they put in and no more, those caps are now going away.
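For a sense of what removing those caps changes, here is a minimal sketch with made-up numbers (the actual stakes and multiples are ours for illustration, not OpenAI's terms); under the old structure, anything above an investor's cap would have flowed to the nonprofit:

```python
# Hypothetical illustration of a capped-profit structure. All numbers are
# invented for the example; they are not OpenAI's actual terms.
investment = 1_000_000_000      # a $1B stake
cap_multiple = 100              # say returns were capped at 100x the investment

def investor_payout(gross_return: float, capped: bool) -> float:
    """Investor's payout; under the cap, any excess flows to the nonprofit."""
    if capped:
        return min(gross_return, investment * cap_multiple)
    return gross_return         # uncapped: unlimited upside

huge_outcome = 500 * investment                       # imagine a 500x outcome
print(investor_payout(huge_outcome, capped=True))     # capped at 100x: 1e11
print(investor_payout(huge_outcome, capped=False))    # full 500x: 5e11
```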
Yeah, they put on their thinking caps
and they said, we're getting rid of the profit caps.
Well, it just goes to your point
that you've been making on this show for years now,
which is that OpenAI is a very weird company.
Yes, and I have to say,
when Sam Altman wrote a letter to employees this week,
the first sentence of the letter was, quote,
OpenAI is not a normal company and never will be.
And I felt so seen.
Somebody's been listening to Hard Fork.
And in other OpenAI corporate news,
the company announced late Wednesday
that its board member, Fiji Simo,
would leave her job as CEO of Instacart
to come be the company's new CEO of applications overseeing
its business and product divisions.
So we are not going to do a whole segment about the OpenAI corporate conversion story this week.
Because we love you too much.
We love our listeners too much.
We would not subject you to that.
But we are going to talk about it and many other things
related to OpenAI with Karen Hao.
Karen Hao is a reporter who has been covering
OpenAI and the AI industry for years now.
And she has a book that's coming out later this month
called Empire of AI, where she writes about Sam Altman
and OpenAI and what she calls the dreams and nightmares
of this very strange company.
Yeah, and you know, but by the way,
I think she should already start working on a sequel
and call it The Empire Strikes Back.
Something to think about.
Yes, and this is a very buzzy book.
People in Silicon Valley and at the AI companies
have been sort of nervously waiting for it.
Karen is very unsparing in her descriptions
of AI companies and the AI industry.
I would not say it is a book that the AI industry
will think is flattering,
but it's an important conversation to have
because I think it's got a lot of people talking.
Absolutely, and before we do that, Kevin,
do we have anything we want to disclose?
Well, let me make mine first. My boyfriend works at Anthropic.
Kevin, you're coming out. I'm so happy for you.
No, I work at the New York Times Company, which is suing OpenAI and Microsoft for alleged copyright
violations. Interesting. And my boyfriend works at Anthropic.
Yours too? Yes.
Anyways, let's bring in Karen.
Karen Hao, welcome to Hard Fork.
Thanks so much for having me.
So I imagine your book is sitting there behind you
on the shelf, it's all printed up, it's ready to go.
And then this very week, OpenAI puts out a story, saying, hey, maybe we're gonna change our structure around again, why the heck not? So what's it like trying to write a self-contained book about a company that just never stops making news?
Tiring.
Yeah, but you know, like, honestly, people have been asking me this question a lot, like, how do you even write a book at a book scale? Because usually it's like months on end before it goes to publish.
And I think sometimes the news is actually a little bit distracting in that, yes, there
are a lot of changes happening, yes, things are evolving really fast, but there are some
fundamentals that are kind of ever present.
And so I try to keep the book focused on the things that
don't change so much.
Yeah, well, and among other things, this book is a history
of OpenAI.
So maybe let's go back all the way to the beginning.
What was this company like when you started writing about it?
So I started writing about OpenAI in 2019.
And I went to the office to embed with them for three days, as the first journalist to profile what had just become a newly minted company.
So right before I started covering it, it was still a nonprofit, and it had
this explicit goal that it should be a counterbalance to for-profit companies.
And it sort of became clear to me during my time
at the company then that the idea that this was a bastion
of idealism and transparency and was going to be totally open
and share all of its technologies to the world
and not at all beholden to any kind of commercialization
was already going away.
And there were a lot of kind of early signs of that
that I picked up on while I was there.
There was a lot of secrecy for a company
that purported to be incredibly transparent.
And there was a lot of competitiveness,
which to me suggested that like,
if you're going to be competitive
and you want to specifically reach AGI first, you are going to have some really hard trade-offs with this transparency mission and this
open up everything to the public mission. So I've talked to some people at OpenAI who have
said that they felt quite burned by some of your early coverage of them. They were expecting
something different
than they got.
And you write in the book that after you published
your story on them, they stopped talking to you
for three years.
I'm just curious like what you think surprised them
about your coverage or if they should have been surprised
given some of the questions you were asking.
I think they were surprised because
they gave me a lot of access and they thought that I would sort of adopt a lot of the narrative that they were giving me.
And to be honest, I kind of came in without really a lot of expectations.
It was actually my first ever company profile and I was going in kind of just with an open mind of, okay, like this company presents
itself as this like ethical lighthouse.
Let's try to understand a little bit, like how do they organize themselves and how do
they try to achieve the goals that they've set out to do?
And I just found that they couldn't quite articulate what their vision was, what their
plan was, what AGI was.
And I think the prioritization of the problems that they were saying that they were focusing
on just didn't quite feel right to me.
Like I pointed out to them that there were environmental issues that were starting to
become more and more of a concern as AI models were scaling larger and larger.
And you know, Ilya said to me, he was like, yes, of course, that's a concern, but when
we get to AGI, climate change will be solved.
And that was just like, okay, like that's kind of a, you know, it's like a cop-out
card to just be like, well, when we get to the thing that we don't know how to define,
all the problems that we might have created along the way will just magically disappear.
And so that's when I started being like, I think we need to scrutinize this company more and just
be more cautious about taking all of the things that they say at face value.
Right. I mean, it sort of sounds like a microcosm of the arguments that have taken place
for the last few years among the AI safety crowd
and the AI ethics crowd that, you know,
the AI safety people, they're worried about existential risk
and bio weapons and, you know, malicious use
of these systems and the AI ethics crowd
are much more worried about like issues like bias
and the environmental concerns and things like that.
So I want to make sure I'm characterizing it fairly.
You yourself are coming from more of a perspective of the AI ethics crowd and that you think
we should be paying more attention to immediate harms of these models rather than trying to
avert some future harms.
Yeah.
So I would call it the AI accountability crowd.
And the reason why I use the term accountability
instead of ethics is because I think accountability
acknowledges that there's a huge power dynamic happening here,
where the developers of these technologies
have an extraordinary amount of power
that they have accrued and amassed
and are continuing to accrue and amass based on this narrative
that they need all of these resources
to build so-called AGI, right? So,
I definitely come from that perspective. And I think that if we take seriously the present-day
harms of what is happening now, that will help us not get to future harms, because we will be
more thoughtful about how we develop AI systems today so that they don't end up having wild
detrimental effects in the future.
I think this idea that we don't really know how bad AGI might be or what the catastrophic
scenarios are is not quite right in that we have already so much evidence right now of how AI is affecting people in society.
And also AI is harming people literally right now.
So we need to address that.
We need to document that.
We need to change that.
One of the central arguments of your book is that open AI
and the sort of AI industry in general has become an empire.
It's the title of your book, Empire of AI,
and that is done so by exploiting people
and resources around the world for their own benefit.
Sketch that argument for us.
Yeah, so if we think about empires of old,
the long, centuries long history of European colonialism,
they effectively went around the world,
laid claim to resources that were not their own,
but they designed rules that suggested that they suddenly were.
They exploited a lot of labor, as in they didn't pay the labor,
or they paid extremely little amounts to the labor
that ultimately helped to fortify the empire.
And all of that resource extraction and labor exploitation went and accrued benefits to the empire.
They did this all under a justification of a civilizing mission.
They're ultimately doing this to bring progress and modernity to the rest of the world.
We're literally seeing empires of AI
effectively do the same thing.
And what I say in the book is like,
they are not as overtly violent as empires of old.
We've had 150 years of like social mores and progress.
So there isn't that kind of overt violence today,
but they are doing the same thing of laying claim to resources
that are not their own, that includes like the labor of a lot of artists and a
lot of writers, that includes all the data that people have put online that
they've just scraped into these internet-scale data sets, that includes
exploiting labor of the people who they contract to help clean their models and annotate the
data that goes into their models.
That also includes labor exploitation in the sense that they are building technologies
that are ultimately, like OpenAI literally says, their definition of AGI is to create
AI systems that will be able to outperform most humans at economically valuable work.
That is a labor automation machine.
So they're also exploiting labor in the sense
that they're creating these AI systems
that will dramatically make it more difficult
for workers to kind of demand rights.
And they're doing it under this civilizing mission
where they're saying like, ultimately,
this is for the benefit of all of humanity.
But what we're seeing is that's, you know, that's not true when you go far away from Silicon Valley, when you go to places like the global south, when you go to rural communities, impoverished communities, marginalized communities. They really feel the brunt of this AI development, this extraction and this exploitation, and they're not at all receiving any of the supposed benefits of this accelerating AI, quote unquote, progress.
Let's talk about some of that extraction of natural resources.
This is one of the things that your book gets into that I think doesn't get discussed a lot
in the context of AI. Tell us about some of your reporting and what you saw.
Yeah, so I ended up spending a lot of time in Latin America and also in Arizona to kind of
understand the just sheer amount of computational infrastructure that is now being built to support
the generative AI paradigm and the quest to AGI. And these are, you know, massive data centers and supercomputers that are being
plopped kind of in communities that initially accept this kind of infrastructure, either because
they don't know about it, because companies enter these communities through, like, shell companies and
aren't transparent about actually putting this infrastructure there, or they're sort of persuaded into it
because there seems to be a really positive economic case
where a company comes in, says, we're
going to give you hundreds of millions of dollars
to build this data center here, and it's
going to create a bunch of jobs.
And what they don't say is that the jobs are not permanent. They're talking about construction jobs. And once the construction jobs are over, there's actually not that many jobs for running the data center.
And these data centers,
they consume an enormous amount of power,
and they consume an enormous amount of water
because they need to be cooled
when they're training these models 24-7.
And this infrastructure is permanent.
So once it gets put there, even if a city
doesn't have that kind of energy anymore or the water to provide to these data centers,
they can't really roll it back. And in Chile, I was like with activists who had been fighting
tooth and nail to try and keep these data centers from literally taking all of their drinking water.
And they were entering also communities in Uruguay, where I was spending time as well,
during a drought, where people literally were drinking bottled water if they could afford
it or they were drinking contaminated water if they could not, because there was not enough fresh drinking water
to go around.
And that was when Google decided to build a data center there.
So that's kind of when I say that there is like,
the current AI development paradigm
is creating a lot of harms at a mass scale.
That's the kind of stuff that I'm referring to.
Yeah.
I mean, part of empire building is about exerting political power, right?
I'm curious why the governments in Chile and Uruguay are okay with this.
What is the mechanism through which they're deciding to grant all of this power to these
AI companies?
A lot of governments learn that they have to serve the global north if they want to
get more investment and more jobs and more opportunity into their country. And in the AI case, it ends
up not being a good bargain, but a lot of them don't know that upfront. And so they
think that if they can open up their land, their water, their energy to these
companies, that somehow they will get more investment, more high-quality like
white-collar
jobs in the future.
I was talking with politicians who said that they hoped that if they allowed a data center,
then eventually Microsoft would bring in an office with software engineering jobs nearby
their data center.
That's the reason why they end up doing this.
Chile has a really interesting history in particular in that they have dealt with just like centuries of extraction. Most recently, they've become
like a huge provider of lithium for the lithium boom. And so they sort of have developed this
mentality over time that like, this is what they do. Like they open up their natural resources
to these multinationals
and that somehow this will convert into economic growth,
broad-based economic growth for people.
But unfortunately, it doesn't really.
Well, I wanna push back on that a little bit
because I think if I'm being like sort of
trying to be sympathetic to the people,
the politicians, the communities that are accepting this stuff.
I think there's a case to be made that it is actually helping them, maybe not in terms
of direct GDP or economic growth.
But like the World Bank recently did a randomized control trial with students in Nigeria who
were given access to GPT-4 for AI assisted tutoring and found that it boosted their test
scores significantly and
that the gains were especially big among girls who were behind in their classes.
So like as I'm hearing you talk about the exploitation taking place, I'm thinking, well,
maybe there is something that they're getting in return.
Maybe there is something worth it to them.
Maybe this technology can, in some instances, help level the playing field between poorer countries in the global south and places like America.
And maybe there's a deal to be had where it's like, OK,
you want to extract our lithium.
You want to build a data center in our country.
Sure, but you have to give all of our students free access
to ChatGPT Pro or something like that.
Is there any sort of fair exchange
that you can imagine that would help these people?
So I think this question is kind of premised on the idea that we have to make these trade-offs
in order to get that kind of gain.
Like, we have to give you our lithium
in order to have some kind of educational boost
from ChatGPT.
And that's kind of a premise that I just don't agree with.
I think that there are ways to develop AI
that gives you the gains without this kind of extraction.
So the reason why I call it empire of AI in the book
is in part to point out that this is not the only pathway
to AI development.
These companies have chosen a very particular pathway
of AI development that is predicated
on absolutely massive amounts of scale,
massive amounts of resources, massive amounts of data.
Well, that's how you get the models to be general and good and to be able to work in
all kinds of different languages.
Is there another path that you're suggesting there's another path?
What is the path other than through scale?
So we don't necessarily know what it is yet, but it isn't being explored at all.
And there are already signs that there can be other ways to get to these more general capabilities
without that scale.
DeepSeek is a really interesting example of this. I think there are also a lot of problems with DeepSeek. But DeepSeek demonstrated that, even in a resource-constrained environment, you can actually develop models that have more generality.
And so, I mean, this is what science is. Like, you
have to discover kind of the frontiers of what we don't know
yet. And the industry has fallen into this very specific scaling
paradigm that they know works, but it has so many externalities
with it, that it's ultimately not actually achieving what OpenAI says its mission is: benefit all of humanity.
And so, like, if we constrained the problem to think, like, how can we get more positives
out of this technology without having all of that negative harm?
I think there would actually be more innovation that would come out, like true innovation
that would come out that would be more beneficial.
Karen, one thing that is very clear in your book is that you are not a fan of the big general purpose AI models.
You call them monstrosities built from consuming
previously unfathomable amounts of data, labor, computing power,
and natural resources.
Is there any way for people to engage ethically
with these models in your view,
or is it all fruit from a poison tree?
I think the way that they're being developed right now,
me personally, I do think that it's fruit from a poisoned tree.
Do you use ChatGPT at all?
Not really, no.
Have you ever?
Yes, I have.
I'm just curious because writing a book is,
I'm doing it now and I'm finding a lot of uses for AI.
I'm just curious, this is a very thoroughly researched book. Was it helpful? Were any AI tools used in the creation of this book?
So no generative AI tools, but I did use predictive AI tools.
So I used Google Reverse Image Search
to try and figure out the price of OpenAI's furniture,
because they had some really nice chairs.
And I was trying to explain the level of upgrade that happened when they went from a nonprofit
in one office to this new Microsoft-backed capped profit entity in this other office.
And when I ran the reverse image search through,
it came up, it was like Brazilian designer chairs
that were like $10,000 each.
Yeah, so I mean, I do use predictive AI,
but I did not use generative AI for this book
other than to just understand how the tool works
and test its new features,
but I never used it for like getting research
or organizing thoughts or anything like that.
Because at the end of the day,
I'm writing a book about OpenAI
and like I'm not gonna like willingly hand a bunch
of my data about like what I'm thinking about
and what I'm researching to OpenAI in the process.
And that's where you and Kevin are different.
So I want you guys to interact about this a little bit
because, Karen, let me tell you,
if Kevin can use generative AI to do something,
he's doing it, okay?
There's gonna be a lot of generative AI
that's going into the making of this book.
You're writing, right?
Well, in the research phase,
because I found that it's not that good at composing.
Right.
But it is super, super useful for doing,
give me a history of the term AGI and where it originated and who were the first people to use it and how it evolved over the years and how has every lab defined it in all of their various publications.
Like that kind of thing would have taken me weeks before and now it's like minutes.
Right. So Karen, make your case that Kevin should stop doing that.
So I'm not going to make that case,
but what I'm going to say is this is like the perfect
use case for these tools, because these companies are constantly testing their tools on AI topics. Like, that is the thing that they stress test their tools on. And so if there were any topic in the world that these chatbots would be particularly good at talking about, it would be AI and AGI.
And so, Kevin, like, move forward.
Fire away.
No, but, so, here's another thing that I wanted to ask you, Karen, because I think this is
another place where we sort of disagree.
Yeah.
You are very skeptical about the claims
that the AI labs are making about AI safety
or the concept of AGI.
And I guess I'm trying to understand that argument.
My view on these folks is that they are sincere.
That they are sincere when they worry about AI
posing risks to humanity.
I think that's why they're investing tons of money
into AI safety and trying to work on things
like interpretability, figuring out how these language
models work.
Is your view that they are sincere but just wrong
about AI being an existential threat, possibly,
or that they don't believe it at all
and that they're just kind of using AI safety as a smoke
screen or an excuse for sort of raising money
and continuing to build their models.
I think it totally depends on who you're talking about.
So in general, I think there are a lot of people
that are incredibly sincere about believing
in these problems.
I don't have any doubt about that.
I talked with a lot of them for my book. And you know, like I talked to people who
were like, their voice was quivering while they were telling me about being really, really
scared about the demise of humanity. Like, you know, like that's a sincere belief and
a sincere reaction. I think there are other people who pretend that they believe in this as, you know, the smoke screen.
But I think by and large, a lot of these people truly believe this in their heart of hearts, and they are trying to do good by the world. My critique is that this particular
worldview is just really narrow.
It's just really, really narrow
and like a product of like being in Silicon Valley,
which is like one of the wealthiest epicenters
of one of the wealthiest countries in the world.
Like, of course you are going to have the luxury
to think about these like really far off problems
that don't have to do with things that are literally harming and affecting people all around the world today.
And it's not that I don't think we should devote any research to these problems, like, that's not what I'm saying. But I think the sheer amount of resources that are going to prioritizing these problems over present-day problems is just not at all proportional to what the problem landscape literally is in reality.
Yeah, so when people like Sam Altman or Dario Amodei or Demis Hassabis say that we are, you know, a couple years away from something like AGI or even superintelligence, your view is that that just has
no reflection on reality or that we
should cross that bridge when we come to it and pay attention
to the stuff that we can actually observe in the world now.
So, I think it also depends on how they define AGI.
Like when OpenAI says that they are two years away from potentially automating away most labor, I could believe that they're on a path to systems that would appear
to do so in two years and then lead to, you know, a lot of company executives deciding to hire the
AI instead of hiring workers. If we're talking about AGI in another definition, then I mean,
it would have to be like on a case by case, like, how are they defining AGI and what is their time scale?
But do I think that OpenAI has high conviction to try and create a labor-automating machine and that they have the resources to start making a dent in labor opportunities for people?
Like, yes, I do. Well, maybe let's have the kind of
how do you define AGI conversation.
It's come up a few times during this conversation
and I know there are a lot of folks who regularly remark
that the definition of AGI seems really sort of amorphous
and slippery to them.
I have to say, it doesn't feel that amorphous to me.
I work with an assistant,
my assistant does customer service stuff,
scheduling stuff, a little bit of sales.
If there was a tool that I could use
and pay a subscription to that did those things
on my behalf, I think I would say,
yeah, I think that feels like AGI.
So that's kind of how I conceive of it in my mind,
but I know there are so many folks out there who say,
no, no, no, no, no, the definition is always changing
and slippery and this is a really big problem.
So Karen, how do you feel about it?
I mean, what you are describing,
like, yeah, like if you wanted to define that as AGI,
that's totally fine, but I don't think that's how
the companies are necessarily defining it, right?
They are not defining it well.
But when they need to raise capital, when they need to rally public support, when they
need to get in front of Congress and try and ward off regulation, the things that they
say are one day AGI will solve climate change,
one day it will cure cancer.
Like I think the AGI system that you're describing
is not exactly the AGI system that they are sketching out
in that kind of broad sweeping vision
that they're trying to use as justification
to continue doing what they're doing.
Right, there's a lot of hand waving that goes on
when somebody says that some future AI technology
is going to cure cancer.
It's leaving out many, many steps.
Well, but in partial defense of the labs here,
I think like we have seen things like AlphaFold,
which was Google DeepMind's system
that solved the protein folding problem essentially.
And that was not something that they thought
was going to be the end of their progress
toward scientific cures for disease.
That was sort of the beginning stages.
And actually, if you talk to biomedical researchers,
they say that was a huge deal and really
did make it possible to do all kinds of new drug discoveries.
And I guess that part feels a little separate to me
than the AGI discussion.
But it does feel like the quest for AGI,
the scaling up of these models, the attempt
to make them more general, there have just
been good things that fall out of that process,
and also some externalities that you mentioned, Karen.
But I'm just curious if you see any positive applications
of the scaling hypothesis and the sort of dominant paradigm?
I don't think I've come across a positive application that I think justifies the amount of cost going into it.
And I think to return back to also DeepMind AlphaFold,
that was not a general intelligence system.
That was a task-specific system, right?
Which I advocate for.
Like I think we need more task-specific AI systems where we give them a well-scoped problem,
we curate the data, we then train the model, and then it does remarkable things.
I totally agree that AlphaFold was a remarkable achievement.
I don't think that that has much correlation with what AGI labs are now doing with the scaling paradigm. Those are like two perpendicular tracks to me.
Yeah, the, I mean, I think it's clear that the hype is far ahead of the results right now.
We have heard a lot more about AGI curing cancer than we've actually seen
progress toward curing cancer in the moment of this recording.
Now, some people believe that's going to change very soon,
but I can understand why if you read a lot of headlines and you don't see cancer being cured yet,
that you'd have some questions.
Yeah, and I think the other thing here is,
I mean, these companies are continuing to say that they're AGI labs, that they're pursuing AGI, but they've dramatically shifted, and now they're really just focused on building products and services that they can charge lots of money for. And all of the maneuvering that they've tried to do to make it seem like that is on exactly the same path as what they're saying is AGI.
Like come on, like that's probably not what's happening here. And like ultimately these
companies are building these, I mean, in the last episode you guys were talking about AI flattery and the debacle around that, and how they're turning to maximizing for engagement, because this is the thing that they've realized gets them a lot of users, gets them more cash flow.
And that is ultimately what they're now building.
So I think what they're saying they're building
and what they're building is also starting to diverge
in the kind of new era, I guess,
where they need to be able to justify
like a $40 billion raise.
Yeah, well, let's sort of bring it home here
by talking about one thing
that I think all three of us agree on.
You write that the most urgent question of our generation is how do we govern artificial intelligence?
I agree with you on that front, Karen. And so let me ask, how do we govern artificial intelligence?
Please help us.
Democratically.
Yes. So what does a more democratic way of governing AI look like?
So to me, it's like you consider the supply chain of AI development.
You have data, you have compute, you have models, you have applications.
I think at every single stage of that supply chain, there should be input from people, not just the companies.
Like when companies decide that they're going to train,
to curate a data set, there should be people that can opt in and opt out of that data set.
There should be people that, not just for their own data, but maybe there's consortiums that are
debating what kind of data, like publicly accessible data should or should not go into these
tools. There should be debates about, like, content moderation of the data, because
as I write in the book there were a lot of moments in OpenAI's history where they kind of just
debated internally like should we keep in pornographic images in the data set or not?
And then they just decided it on the fly. Like that to me is not democratic governance. Like we
should be having open public discourse about those types of decisions. When it comes to compute, like, there should be an ability for communities to even know
that data centers are coming in to their communities.
And they should then be able to go to a city council meeting and actually talk with their
city council, talk with the companies about whether or not they want the data center,
and have, like, good solid information about what the long-term trajectory of hosting a data center would actually look like. And when it comes to the labor, the contract workers that are working for AI, there should be,
you know, they should follow international human rights norms, because a lot of the conditions in which these workers are working do not follow international human rights norms. So I think that's the way that I think about it:
all of these different stages all need to be democratic. And
when OpenAI says like, we're going to develop democratic AI
simply because we're an American company, like that's not how it
works. Everyone actually has to participate, have agency, have a say to shape and change what is and isn't developed and how.
Well, Karen, this has been a fascinating conversation.
Really appreciate your time and thanks.
Thank you so much for having me.
When we come back, turn your brain off.
It's time to talk about Italian brain rot.
Ooh, sounds fancy. Kevin, if I were to start referring to you as Kevinini Roosellini, what would that mean to you?
I would think it was some sort of mockery of my Italian heritage.
I would never.
What about Tralalero Tralala? You know him?
No, I think you're having a stroke.
What about Bombardiro Crocodilo?
Okay, now this is just getting ridiculous.
Ballerina Cappuccina?
Nope.
All right, listen, if you or someone you love
recognizes any of these terms, Kevin,
you may be suffering from a case of Italian brain rot.
I'm almost afraid to ask. I have not been following
this story, although I know you were very excited to tell me about it today. What is going on with
Italian brain rot? Do not be afraid of Italian brain rot, Kevin. If you have been on TikTok or
Instagram or YouTube over the past many weeks, you may have encountered this unique form of AI-enabled insanity.
Now, typically I know that brain rot refers
to this kind of feeling of, I don't know,
cognitive decline related to excessive use
of social media or something like that.
People on TikTok are always complaining
about their brain rot, but what is Italian brain rot?
Well, if you wanna catch up on this,
I highly recommend a story in the Times
by Alisha Haridasani Gupta, who kind of catches you up.
This stuff started to emerge in January,
and it really is an AI phenomenon.
You know, recently, Kevin, we've seen advances
in some of these text-to-video generators.
So you might be able to, for example,
create a short clip of a little coffee cup
that is also a ballerina.
Well, congratulations, you just invented Ballerina Cappuccina.
I mean, to me, like, this is sort of the difference
between this age of viral content
and previous generations of viral content.
Like, I spend a lot of time on TikTok,
but I have never, literally never,
seen anything about Italian brain rot.
And it's such a contrast to, like,
everyone knew that the Ice Bucket Challenge was happening, right?
Because you could see it everywhere,
but things have become so, like, siloed and atomized
that, like, you could tell me
literally anything was happening on TikTok,
and then millions of people were into it.
It was the trend sweeping the youth
and I would have no idea.
So either that means I'm old
or something has changed about social media.
Well, this is why you have to have your younger colleagues
like myself come in and tell you
what's happening in middle school.
You are not younger than me.
Well, spiritually, I think there's a case for it.
So listen, there's no way to talk about Italian brain rot
that improves on the experience of actually watching it.
So let's watch a couple of clips of brain rot.
And I believe we have one queued up.
I hope I get hazard pay for this.
Tung, tung, tung, tung, tung, tung, tung, tung, tung,
sahur.
Brr, brr, patapim.
Il mio cappello è pieno di sli.
A pu cappuccino, assassino!
Ballerina cappuccina, mi mi mi mi,
chimpanzee, bananini, wah wah wah,
troppi, troppa trippa.
Glorbo, frutto drillo.
So if you are not watching these,
let me just describe what I just saw.
This is sort of a compilation
of these Italian brain rot memes,
which were all kind of like AI generated weird characters.
Like one of them was like,
looked like a sort of hamster
poking out from a half of a coconut.
That's right.
And they're just saying these like Italian phrases.
So this is Italian brain rot?
This is Italian brain rot.
You know, you're probably grasping the Italian part
because they're sort of being voiced
in this over the top Italian accent.
And all of these sort of strange phrases that you're hearing
are the names of the characters.
So I know you're probably wondering
who is Trippi Troppi?
And that's a shrimp with a cat head.
So I love this one because, you know,
a lot of meme explainers,
there's like a lot of excavating to do
of where did this come from and what this is about.
Here, it really is just what it says on the tin.
It is an Italian accent over a series of images
that make you feel like you're going insane.
Yes, and was this made by an Italian?
No.
In fact, in the Times, one of the main creators, the person who created Ballerina Cappuccina, is Susanu Sava-Tudor, a 24-year-old from Romania, who told the Times that this is just a form of absurd humor that really has very little to do with Italy. This creator just sort of created the name Ballerina Cappuccina, and they've gotten more than 45 million views on TikTok and 3.8 million likes.
Oh my God.
Now, like at the risk of explaining a joke
and thereby killing it,
like is there any point to Italian brain rot?
Is it making some sort of social commentary?
Is it trying to
say like Italians are big users of social media and therefore are getting brain rot?
Well, so I actually do have a theory about this. Like, I think here is what makes this feel new: whatever this is actually does feel fresh. And we live in a time where
everything that Hollywood is giving us feels like a recycled version
of something else.
We are on phase six of the Marvel cinematic universe.
And in that world where it's like,
oh, and here's Ant-Man's cousin.
People are saying, F that, give me Ballerina Cappuccina.
It does just feel like there is some organic hunger
out there for just really stupid shit.
Just really random stuff. I was thinking about this recently:
you know the Minecraft movie is a big hit, right?
It's one of the biggest movies of the year.
And there's this moment in the movie,
apparently, I've not seen it,
where someone, Jack Black, I think,
says the words chicken jockey.
And at that moment, like teens and other young people
have decided that this is the moment in the movie
to like stand up and cause a ruckus.
They started throwing popcorn.
Someone actually, I saw, brought a live chicken
to the theater and, like, held it up.
This feels of a piece with Chicken Jockey
from the Minecraft movie, in the sense
that it is just absurdist; trying to explain it
actually makes you dumber in some way.
And so there's a kind of appealing randomness to it.
Yeah, and by the way, I think that is actually part
of being a young person: building a language
that is inaccessible to people older than you, right?
Like, that is sort of how
the identity formation process works. There are older
people, and older people have no idea who Trippi Troppi is.
And that is something you can talk about with your friends that belongs to you.
What are some of the other ones?
OK, well, so I'm glad you asked, because we haven't actually watched enough of these
videos yet. So, Kevin, I would now like to direct your attention to one Salamino Pinguino.
Salamino Pinguino, mezzo salame, mezzo pinguino,
tutto problema, non scivola, si affetta,
no parla, si bilan, mamma mia.
Like wearing almost like a sort of headdress
made out of salami.
Sputa peperoni piccanti. Salamino Pinguino,
la leggenda della Salumeria.
Now, let's take a look at Glorbo.
Glorbo Fruttodrillo.
Okay, this is a crocodile or alligator
with a watermelon for a body.
Glorbo Fruttodrillo regnava.
This is a still image with 578,000 likes.
Ma la testa e la coda?
Tutto, alligatore.
Everybody loves Glorbo.
Is this even real Italian? Are we sure it's real Italian?
I'm pretty sure it's not real Italian.
Let's stop that one there.
And then let's sort of...
Now I know what you're saying.
Casey, these characters are just standing around.
That seems like super boring.
What if I were to tell you that other creators are now incorporating them into dramas, Kevin?
Oh boy.
Let's take a look at one of those.
And this one stars Tralalero Tralala,
who is a shark wearing sneakers.
And is that Ballerina Cappuccina I see?
That is Ballerina Cappuccina,
and she's with Tung Tung Tung Sahur, enjoying their honey.
So he leaves for the day, and oh,
there comes Tralalero Tralala the shark,
and now they're kissing in bed,
and that's the bambino, and then he sends in an airstrike. So that was, let's just review.
That was, I don't know, that was 10 or 15 seconds.
In that, you see two of these characters.
One of them gets into an affair, has a love child.
Her partner finds out and then sends in an airstrike
to attack the sort of cheater.
So they're doing a lot in 15 seconds.
Yeah.
Wow.
That was not a Pixar film.
That was really something.
I feel like I'm on a very powerful psychedelic right now.
Well, you know, you mentioned earlier that, you know,
in the old days we would do things
like the ice bucket challenge.
Kevin, what if I told you
that some of these Italian Brain Rot characters are actually doing
the ice bucket challenge? No!
Yeah, let's watch that one.
My name is Chimpanzini Bananini, and I've been nominated for the...
This is a chimpanzee who is also a banana.
Brain Rot ice bucket challenge. I nominate Bombombini Gusini,
Trippi Troppi, and Boneca Ambalabu.
He's nominating the other characters to do the ice bucket challenge. Oh! Oh! Oh!
Oh!
Oh!
Oh!
This is so dumb!
Yeah.
It's so dumb.
It's very funny though.
I am like genuinely laughing at this,
but it is like, I could not explain to you
why this is funny if you paid me.
Well, here, listen, I have done a little bit of comedy
in my life, and one thing that I learned in improv
was that everyone goes nuts
for an over the top Italian accent.
It's extremely funny.
All I have to do is say, make a bowl of spaghetti.
You're already laughing.
See, I didn't even do anything.
Italian brain rot functions much in the same way,
but they are taking advantage of this AI thing.
And you know, look, we've talked earlier on this show
about how these systems are being trained on other people's art
without their consent.
There are some people who feel like you can never make anything truly creative or truly
artistic with AI. And yet here you have this bona fide viral phenomenon that is people
making extremely silly stuff using AI. And it is resonating with us. And I think this
has been one of the more counterintuitive lessons of AI slop: a year or so ago, we
were looking at images of Shrimp Jesus all over Facebook, and we were saying, that seems silly. I'm
sure the company is going to get rid of this. No, no, no, my friend. They're going to lean
into it, because there are riches that lie down this path. And Italian brain rot is the
first example, I think, of that happening.
God. I mean, so I have a couple of reactions. One of them is, yes, I absolutely think that, like,
AI has a utility and that there are good things
that have come out of it,
but seeing Italian brain rot makes me want
to nuke the data center.
So I'm like, shut it all down.
We've gone too far.
But seriously, I do think there is something here,
not just in the sort of like absurdist humor of this thing,
but I do think there are going to be new kinds
of entertainment that are birthed out of these tools
because if you wanted to make something like a ballerina
with a cappuccino for a head 10 years ago,
you needed to be an animator to do that,
or at least have some facility with animating software.
Now you just go into an AI tool and you type,
give me a ballerina cappuccino, and out comes this pretty perfect animation.
Yeah, which has always been the case for this sort of tool, by the way:
it takes people who do not have those kinds of artistic skills
and lets them express themselves creatively. If they can think it,
they can visualize it, they can make it available to other people.
Here is my case why this is actually a good thing, Kevin.
You know, I was thinking this morning
about a few years back during the height of the crypto boom,
when people started talking about how crypto could be used
to fund these alternative worlds of entertainment, right?
Like the Bored Apes Yacht Club
was gonna become this mega franchise,
but what made it cool was that anybody could buy in.
Anyone could get a Slurp Juice.
Anyone could get a Slurp Juice, put it on a mutant ape, transform your mutant ape, etc.
And people didn't really get into this because, I think, nobody wanted to be involved in what was essentially a homeowners association for creating entertainment.
But I look at Italian brain rot and I see something similar happening
where, as far as I can tell, no one has a trademark on
Ballerina Cappuccina or Chimpanzini Bananini.
You could just sort of make your own version of it and put it up there
and nobody's going to issue a copyright strike.
You can have these characters do whatever you want to.
So it feels like there is actually a freedom
in making this that people are really responding to.
And so maybe we do actually get the next version
of, like, crowdsourced entertainment,
and it all comes out of these bizarre text-to-video makers.
I gotta say, I believe you
when you say that that is a possible outcome,
but my brain just goes immediately to like some office at like Disney headquarters,
where they're like watching these Italian brain rot memes
and like furiously trying to license the IP
to make like a series of seven movies about Chimpanzee-ni-Banana-ni-ni.
Yeah.
And I do think that there's a possibility that this becomes
just like any other entertainment franchise.
It could go that way, but, you know, maybe that sort of robs it of the fun that makes it go viral today to begin with.
And they're making movies out of Minecraft. They can make movies on anything.
They're really running out of things to make movies out of, as far as I can tell.
So do I lean optimistic about this? Yes. At the same time, do I think that if China
had just sort of come up with this idea independently
as a way of bringing down American civilization,
it would have been a great idea?
If they were like, what if we just sort of did
weird characters and Italian accents?
Could that distract all of American middle schoolers
for a year?
Probably worth doing.
How hard could it be?
This is all a CCP plot to undermine American sovereignty.
That's kind of always been the thing with TikTok.
It's like, I don't think it's a Chinese plot
to destroy America, but it is working.
Well, if Ballerina Cappuccina starts
singing the praises of Xi Jinping,
we'll know that something grave has gone wrong.
Yeah, we'll keep our eyes on that one. Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited this week by Matt Collette.
We're fact checked by Ina Alvarado.
Today's show is engineered by Chris Wood.
Original music by Elisheba Ittoop, Diane Wong, and Dan Powell.
Our executive producer is Jen Poyant. Video production by Sawyer Roque,
Pat Gunther, and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com, or should I say, hardforkini@nytimes.com.
Don't actually send a message to that email address.
It will bounce back.
Yeah, that address is not active.