Hard Fork - Australia Kicks Kids Off Social Media + Is the A.I. Water Issue Fake? + Hard Fork Wrapped
Episode Date: December 12, 2025
This week, Australia implemented the most aggressive social media ban in a democracy to date, kicking children under 16 off 10 of the most popular social platforms. We discuss how the platforms lost the argument around child safety and whether others will follow Australia's lead. Then, the blogger Andy Masley joins us to separate fact from fiction on the topic of A.I. water use. Is it a distraction from other more pressing environmental concerns? And finally, our first-ever "Hard Fork" Wrapped: We break down our favorite "Hard Fork" stats from 2025 and bring you up to date on three of our biggest stories of the year.
Guests: Andy Masley, blogger, The Weird Turn Pro
Additional Reading:
A Grand Social Media Experiment Begins in Australia
The A.I. Water Issue Is Fake
Why Is Everyone So Wrong About A.I. Water Use?
Trump Signs Executive Order to Neuter State A.I. Laws
Trump Clears Sale of More Powerful Nvidia A.I. Chips to China
We Asked Roblox's C.E.O. About Child Safety. It Got Tense.
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
The other day, my friend Kyle is out in the suburbs here in the Bay Area, and he's like in a parking lot, and he sees a car, which happened to be a Tesla, and he notices the license plate. And do you know what the license plate said? What did it say?
The license plate was hard fork. No.
So Kyle listens to the show, and for whatever reason, assumed that this must have been your car. Wow.
Wow. And so he walked up to the sort of, you know, passenger's side window, which was down, and he says to the man behind the wheel, oh, hey, do you know Casey? Because he assumes this is you. And the man looks at him and is like, no. And Kyle, you know, of course, becomes embarrassed and explains, oh, well, you know, like there's a podcast and, you know, my friend hosts it. And the man looks at him and says this, oh, this is the
second time this has happened.
Wait, that's amazing.
Yeah.
So whoever is out there driving the Tesla with the hard fork license plate, we're sorry
for the fact that you're just going to be accosted now pretty much every time you,
you know, park your vehicle.
You know how people put those bumper stickers on their Teslas that say, like, I bought
this before Elon went crazy?
That guy should buy a bumper sticker that says, I bought this before there was a Hard Fork
podcast. That would solve all his problems.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork. This week:
The Drama Down Under as Australia bans children under 16 from social networks.
Will the rest of the world follow?
Then blogger Andy Masley joins us to separate fact from fiction on AI's environmental impact.
And finally, it's our first-ever Hard Fork Wrapped.
Wrap it up, Kevin.
Well, Casey, I miss you.
I know you're in San Francisco this week.
I am back in my hometown in Ohio,
visiting some family, and it's cold here.
Yeah, and let me just explain this.
You know, sometimes podcasters get in a huge fight,
and they just don't really want to be in the same room for a little bit, and one of them will go home to their family and just kind of cool off. So, Kevin, thank you for taking a step back, and, you know, hopefully, uh, cooler heads will prevail here.
Yeah, you're welcome. And I hate you.
No, actually, Kevin went home for the holiday, and it's very sweet, you know, going there to be with your family.
It is weird to be, like, on someone else's, like, Zoom setup, though.
Okay, say more.
It feels very personal.
Yeah, it feels like you're sleeping in their bed or something. You know, it's like things aren't where they
were, you know, my windows are everywhere. I've got one monitor over here, one monitor over
here. Well, with that in mind, maybe we should zoom in to what's happening in Australia.
Great transition. I love that segue. Thank you. But yes, this is the big tech news this week.
On Wednesday of this week, the children of Australia lost access to their social media accounts,
many of them as a new nationwide law went into effect that bars anyone under 16 from having
an account on the most popular social media sites.
That's right, Kevin.
And I think this is really big news because for more than a decade now, countries have
been reckoning over what to do about social media.
It seems to have a lot of secondary effects on their people and their countries that they
don't like.
And yet what to do about it has remained an open question.
and people have tried to pass, you know, little regulations here and there.
This is the biggest step we have seen a democracy yet take to simply say, hey, if you aren't 16 years old, you just actually can't use this at all.
And I think that this is the first domino to fall in a move that could reshape how we all use the Internet.
Yeah, it's a big deal.
And if I were to summarize the sort of reaction to this law going into effect in Australia in a way that
Australians might appreciate, I might say that some people are mad as a cut snake.
They think the government's made a dog's breakfast of this policy, which they think has Buckley's chance of working.
But others reckon it's fair dinkum, and that while improving teenage mental health may be hard yakka, you can't just spit the dummy and do nothing.
Wow.
Am I right?
Wow, I didn't really get one word of that, but I'm going to trust that chat GPT has not led you astray there and that those are all
actually Australian phrases.
It was Gemini. Thank you.
Gemini. Did you at least try to slip in one fake Australian phrase just to irritate our
Australian listeners?
We have Australian listeners. They're going to be so mad about this segment.
No, they're going to love it. We're shining a spotlight on what is happening in their country,
Kevin. That's true. That's true. So, yeah, let's talk about this because I know you have been writing
about this. I have been watching it play out a little bit, and I've been reading a little bit about it,
but you know much more about this than I do.
So why don't you, like, give me the sort of capsule summary history
of this new Australian law that went into effect this week?
Yeah, so the law says that if you are under 16,
you cannot use the following 10 apps.
Facebook, Instagram, Threads, TikTok, Twitch, Snapchat, YouTube, X, Reddit, or Kick.
Okay?
So, you know, you add all of those up,
and that is where Australian teens, just like teens all over the world,
are spending the bulk of their time online.
If you want to try to have an account there,
you're going to need to try to verify your age in some way.
The most popular way tends to be this age estimation video selfie that you see people do.
Of course, this can famously be tricked by video game characters sometimes,
but it also works a lot of the time.
And so that's the move.
And it all came about, Kevin, after the wife of the premier of Australia's second
smallest state read The Anxious Generation, a book by past Hard Fork guest Jonathan Haidt, and said,
well, this seems really bad. We ought to do something about it. And that got some legislation going
and less than a year later, so this was last November, Australia passed the law. And the platforms
have now had a year to kind of consider this and figure out how they're going to implement it.
And this week, it finally went into effect. Interesting. The power of books in this day and age,
You know, we can write them, and sometimes people read them and pass laws based on them.
Absolutely.
A couple of questions about this.
One is, does every platform have to do some kind of proactive age estimation, or are they allowed to just say, like, here's a drop-down menu, put in your birthday, and we'll tell you whether you can use the site or not?
You cannot do the drop-down menu.
So these sort of fake age verification measures that have existed since the dawn of the Internet are no longer going to cut it.
And so that does mean that, yes, if you are an adult and you're just signing up for your first Snapchat
account, you are going to have to verify your age via one of these means.
Got it.
And you listed off a bunch of apps.
Two that I didn't hear mentioned, one was Roblox, which we have recently talked about on the show.
The other was Discord.
So are there other apps that are popular with teens in Australia that are not included under this provision?
Sure.
So, you know, it only applies to the 10 that I mentioned.
Some folks that I talk to at the platforms are irritated that Minecraft isn't on this list.
Steam, which is a popular way for kids to play PC games.
But it's really Roblox and Discord that are generating the most eye rolls at the platforms that were targeted,
because folks there believe that this is an example of, as I've been told, the government
picking winners and losers and effectively saying, like, hey, we're going to make it really hard for you,
Meta, TikTok, YouTube, but Roblox, for whatever reason, is going to get a pass.
And Julie Inman Grant, who is the eSafety Commissioner of Australia and who is sort of overseeing this effort, when asked about the Roblox omission, basically said, yeah, we were concerned about them and we went to them. And then they rolled out a bunch of new features, including age estimation technology. And so we feel like they're okay for now. But it is possible that in the future they could get added to the list as well.
Got it. And what about YouTube? How are they handling that? Because there are lots of ways to watch YouTube videos that don't involve, like, signing up for a YouTube account.
Yeah, so this is a really interesting one. So as you just mentioned, Kevin, you can watch YouTube without logging in, although of course YouTube will constantly prod you to log in because they can sort of make more money off of you that way. But it is possible to watch in what they call the logged out state. And interestingly, you can actually still personalize a feed even in that logged out state, right? So just by seeing what kind of videos you're watching, taking other cues, you know, what browser are you using, where are you located? They can
show you things that they think that you might like. And so this is another criticism that has been
leveled at the law, which is that, you know, if you're Meta and you make Instagram, Instagram doesn't
have that logged out experience, right? And so folks at Meta may feel like this is a giveaway
to YouTube because they're going to have a way to sort of continue to get engagement from their
younger viewers. Of course, what I think is likely to happen is Meta is simply going to introduce a logged out
experience for Instagram and Threads and Facebook. Yeah. So speaking of how these companies may
try to get around this or push back on it.
Like, what are the companies doing?
I imagine they've been lobbying against this law,
but what have they been doing in anticipation
of this thing going into effect?
Basically just frowning and stamping their feet.
You know, YouTube has said that they feel like it's very rushed,
you know, because, of course, we've only had 20 years
of seeing what the internet does to young people's brains.
Interesting.
What about kids in Australia?
Do we know how they're reacting to this?
Yes, so kids are doing a number of things.
Some smarties, and I have a feeling that you would be one of these kids, Kevin, they've had a year to prepare for this.
So they created backup accounts where they set their ages to, you know, like 19 or 26.
And so they're planning on using those.
They're also downloading apps that aren't covered by the ban, right?
Another pretty predictable response.
That includes a bunch of apps that I'm not super familiar with, including Coverstar, Yope, and Lemonate.
Do you have a huge following on any of those platforms?
Yeah, I met my wife on Cover Star.
Okay, interesting, interesting.
Yeah, that's a Casey Newton joke for those keeping score at home.
So they're doing that, but Kevin, I was most interested by this thing that these kids in Australia are doing.
They've gone back to using this ancient technology that I didn't know if we'd ever see again in my lifetime.
But I'm told that Australian teens in preparation for this ban have been exchanging phone numbers with each other.
Come on. That's unrealistic.
Yeah. Really?
Yeah. And so what happens is so each of these kids, they have a seven digit or, you know, maybe it's 10,
I don't know how many digits it is in Australia,
but you have sort of a unique code that belongs to your phone,
and you can give that to another person.
And then if they put that in their phone,
you can then send them messages and maybe even do a video call.
So this is apparently seeing a huge surge of interest in Australia right now.
That's incredible. I love that.
I'm so curious what the sort of second and third order effects of this will be.
Like, I think right now it seems like the primary people who would be benefiting from this
would be parents and also maybe the teenagers.
But I can imagine kind of some fears about maybe long-term competitiveness if everywhere else the teenagers are using social media and the Internet and using that to not only like amuse themselves, but to learn things like, do you end up in a situation where your kids and teens are behind in some ways?
Or do they race ahead because they're the only country where the teens aren't glued to their Instagram feeds all day?
Well, and you're getting at what I think is maybe one of the most interesting aspects of this whole thing, Kevin, which is that we are now setting up a really interesting natural experiment in what happens when you take social media away from children. For the past decade now, I have been covering this endless debate about to what degree can we hold social networks responsible for some of the mental health challenges that the younger generation has. And those challenges, by the way, are enormous. We've talked about it on the show many times.
but we have not had an island where everyone under 16 is being prevented from using these
technologies. And I think it's worth asking. And I'm actually kind of curious to get a prediction
from you on how big of an effect this will have. You know, I was talking with somebody at one of the
platforms the other day that was basically saying, look, if you think that this ban is going to go
into effect and all the kids are going to put their phones down and go play in the park,
you've got another thing coming, right? These kids have already grown up in a world where they
are used to being online all the time, and we're not going to turn back the clock to the
1980s in one fell swoop. On the other hand, if these kids don't have a million push notifications
bothering them at all hours of the day and night while they're sitting in class, making them
anxious about their physical appearance, sending them down eating disorder rabbit holes,
I can't imagine these kids not coming out on the other side of that better off. So it's a
really interesting natural experiment, and I should add that Australia is actually going to study
this. They're putting together a panel of academic researchers that is going to essentially
track what is happening to these kids once they're banned from these social networks,
and they're going to publish that research. So we should actually, within a couple of years,
have some really interesting data here. But what do your instincts tell you, like, what do you
think is going to happen to the children of Australia? Which, by the way, I'd love for somebody
to clip that and just take that quote out of context and put it online somewhere. What do you think
is going to happen to the children of Australia? I mean, I don't know. I think the sort of first
couple of years, we may not see, like, huge short-term effects. Like, I think that part of
the dream is probably unrealistic. Um, but I can very well imagine that, like, five or ten years from now,
when sort of a whole generation or a micro-generation has had time to, like, go through childhood
with this rule in effect, I can imagine there being some actually, like, pretty steep differences
between them and the generation just before them who had this stuff all along. I think I've been,
like, radicalized a little bit by the debates around school phone bans here in the US. This is a topic
that we've talked about a lot. And we heard these same sorts of arguments on both sides in the
months and years leading up to those phone bans going into effect. Oh, it's, you know, how are kids
going to communicate with each other? They're going to feel so left out. Like, you know, the parents
aren't going to be able to get in touch with them if there's an emergency. And then these bans went
into effect. And from what I can tell, they have been universally successful. Like, I'm sure
there are some districts where people are unhappy about it. But, like, everything that I've seen,
every teacher or parent that I've talked to has said, like, this has been a great thing for our
schools. And so I am just very skeptical of these sort of, you know, worry warts who say, oh, if we
take social media away from children, it's going to, like, have all these bad knock-on effects.
Because I think what we've seen so far is, like, sometimes this stuff
just works. Sometimes it just produces a positive net result. But I don't know, how are you feeling
about this Australian law? Well, so, you know, I have talked on the show before about some of my
concerns around implementing these sorts of bans, right? If you're a 13 or 14 or 15-year-old
creator who has been building a business and you're now going to lose that business, that sucks for you.
If you're 14 and you're LGBT and you have an unsupportive family and you're about to be cut off from some
online resources that made you feel less alone in the world, that sucks. And I do think that there is
some number of kids who can just sort of safely use Instagram and TikTok and their lives are
basically fine and they use it for entertainment and they are still able to like, you know,
become self-actualized humans and everything else. But to use the public health analogy, Kevin,
it's also true that a lot of people can smoke and drink without it ruining their lives. And we still
made the choice that we wanted to restrict access to alcohol and tobacco until those kids were
a little more grown up. And over time, that resulted in fewer people drinking and smoking.
And I think most countries are better off for that case. So while I'm concerned about what some
of these kids are going to be losing at the margins, I just keep coming back to the argument
that I never hear anyone make. You know what that argument is? What's that? Looking at Instagram is
really good for your kid.
Do you know what I mean?
Nobody is ever like, hey, you've got to get your 14-year-old looking at TikTok.
If he's not looking at TikTok, he's really going to be missing out.
You never hear people say that.
At most, this is like a tolerated distraction.
And at worst, it does cause immense harm.
And so I think particularly for the under 13s, it's just a no-brainer to get kids off
of this stuff.
They weren't supposed to be there in the first place, but we know that they were there
and the platforms took very few steps to get them off.
I think it's slightly trickier for the 13 through 15-year-olds,
but ultimately not tricky enough
that I am unwilling to let Australia run this experiment
and see what we learn.
Totally. So, Casey, let's wrap this up by making a prediction.
Do you think this law will succeed?
And what would success look like to you for this?
Well, I think what Australia is hoping to see
is decreased rates of mental health issues
with teenagers in Australia.
maybe higher test scores, right?
Some sense that the mental health of teenagers in Australia is improving.
And I have to say, if I had to guess today, my guess is that this would have some positive marginal effect on the teens of Australia.
Again, I don't know that it's going to get them off of their phones.
I'm sure that some of them are going to find their way into very dark corners online that are unregulated and they're going to have, you know, issues similar to the ones that they're now having on Instagram and the other apps.
But on balance, I do think that we should expect to see some, at least some positive benefits from this.
What do you think?
Yeah, I think so.
I think the relevant time horizon is probably longer than people want.
Like here, I don't think you can do a study 12 months in and say, have the mental health outcomes improved and have that be sort of an indicator of whether the law is working or not.
So I think this is going to be more like a sort of, you know, five, 10 year project.
But I think it'll be really interesting.
and I actually don't think that other states, other nations are going to wait five or ten years before they decide if this is something that they want to do.
In fact, I think probably a year or two in, if there is like no indication that Australia's sort of teenage cohort is falling apart as a result of this law, I think a lot of other democracies are going to do something very similar.
Absolutely. You're already starting to see it. Denmark is pursuing a ban for under 15s. Norway also now pursuing a ban for under 15s.
Malaysia has said it is essentially intending to copy the Australian law. So this is really striking for me, Kevin, because, you know, you and I both covered social media for a long time. And I think particularly in the United States, after 2016, we got this big backlash against those social networks. And yet it felt, in America in particular, as if there were no consequences for these companies, right? Their brand value plummeted, right? Because these companies are widely disliked. But they continue to grow. They're making more money than ever. This is the moment where I
was like, oh, all the bad choices that they made over the past decade have actually caught up
with them. They have lost this argument. A growing number of countries now simply are saying,
we don't believe you anymore. We ran your experiment to see what would happen when we let
anyone use your platforms. We didn't like the result. And so now no one under 16 can use your
platform. That is what it looks like to lose an argument. Yeah, I agree. So while some people in
Australia may be carrying on like a pork chop, I think this one's a ripper.
You know, Kevin, sometimes I worry that your Australia jokes are going to boomerang back against you.
You'll be in real trouble.
When we come back, is the AI water use narrative fake?
We'll talk with press critic Andy Masley about what we know and don't know about AI's water footprint.
Well, Kevin, you can lead a horse to water, but will there be enough in the trough to run one chat GPT query?
That is what some people have asked over the past year as questions about the
water usage of AI systems have come to the forefront of many online discussions.
Yeah, this is something that I wanted to talk about on the show for a long time now.
In particular, because I've been observing that, like, after anything happens with AI,
like any big, you know, scandal or drama, or some piece of slop goes viral.
Like, inevitably, the responses to that will talk about water, right?
It'll say, like, why are we drying out the lakes and the oceans to create this technology?
And so I just think that it's been a very fraught, emotionally charged issue for a lot of people.
There are a lot of people out there who see AI and all the infrastructure that we're building to support it
and just think like, this cannot possibly be good for the environment.
And so today we're going to have a conversation about that.
Yeah.
And I'm glad we are because this is just something that I see all the time in memes on social media.
Like the other day, I saw one that was like, hey, can you get out of the shower?
I need the water for chat GPT.
And it got many, many likes and reposts.
And that is sort of, I think, the default understanding on social media of the relationship between water and AI, which is basically you can only have one or the other.
And this has found its way into a lot of mainstream coverage.
And there is basically one blogger out there who got a bee in his bonnet about it and launched a one-person crusade to try to change that conversation.
Yes. So today our guest is Andy Masley.
Andy is not an environmental scientist.
He is not an AI scientist.
He's by his own admission, just a guy on the internet.
He is the director of the Effective Altruism DC organization.
And before that, he was a high school physics teacher.
But where I really took note of his work on his substack, which is called The Weird Turn Pro,
is that he has just been incessantly responding to what he says is this sort of made-up narrative about AI
water use. He has been unusually persistent in kind of debunking and explaining these specific
claims and statistics. And his blog is just full of like these long explainer posts about
how he believes the AI water issue is essentially fake, that we are sort of working ourselves
into a lather about something that is not a problem on the individual level. He has
written about putting the AI water use numbers in the context of things we do every day,
like watching YouTube videos or eating meat or relying on a car to get us around.
And his work has been really helpful for me in trying to figure out where to place my own concern
within the many things that you could be concerned about with AI.
Yeah. And as you'll hear in the conversation, Andy is not here to give a permanent free pass
on environmental issues to AI companies. In fact, he says that as AI becomes a larger and larger
part of the economy, there are areas where we do want to pay more attention. But in his view,
if we continue to focus on the water issue, we may miss the bigger picture and the real issues to focus
on. So I thought that was a perspective that is worth airing out because it does kind of fly in the
face of the way that AI is often talked about online. So let's get into it with our guest, Andy Masley.
Andy Masley, welcome to Hard Fork.
Long-time listener, first-time caller, great to be here.
Yeah, thank you.
So you have been spending the better part of 2025 fighting with people on the internet about this issue of AI and its environmental impact and specifically within that the water use issue.
And I wonder if we could start by just sort of getting into why you decided to pick this particular fight.
What made you interested in writing and talking and thinking about this topic?
The main way I started writing about this, honestly, was I was just bumping into a lot of people, like, at parties and stuff.
I think we've all had maybe similar experiences where we talk about, like, using chatbots or chat GPT, and someone will be like, oh, like, how can you do that if you claim to care about the environment or whatever?
People would bring up the, like, now kind of defunct statistic that, like, oh, a chatbot uses, like, 10 times as much energy as a Google search.
And my first reaction to that at the time was, like, that's not very much energy, you know?
like if I said I had done like a thousand Google searches in a day, that on its own like wouldn't
really stand out, you know, like you wouldn't think, oh, Andy doesn't care about the environment
so much. So basically, I had started to dig into how this was being covered, finding I was
disagreeing with a lot of it, going a little bit crazy. I started to go down more and more rabbit
holes. And like now I've ended up here as like one of the main commentators on the issue,
despite being basically just some guy who's doing this as a hobby. I'd love to like situate you
in the landscape a little bit for our listeners who may not be readers of your Substack.
You are not employed by any of the AI companies. You consider yourself an environmentalist. Am I correct in that?
Yeah. Receiving no financial incentive to do this outside of, like, my loyal Substack subscribers.
Basically, I'm motivated to do this mostly by just bumping into a lot of what I think are pretty simple misconceptions.
And honestly, as, like, a long-time environmentalist and physics teacher
of seven years, I'm pretty worried that a lot of people just have these really wildly off
misconceptions that are distracting them from what I think of as more serious issues. Like one
story that happened recently, a friend was in a pizza place, and he'd overheard this
conversation where someone had said, like, oh, this person uses chat GPT. And so she can't
consider herself to be an environmentalist. And then immediately after went and ordered like a meat
lover's pizza. And like, you know, even just there, I'm like, oh, this person just really doesn't
understand the different tradeoffs involved in different things. Like, one is just so much worse
than the other. And so a part of this is also motivated by just worrying that a lot of the
environmental conversation about this stuff is a little bit off base and people are losing
track of the big picture and like where the actual bad guys are.
Got it. Well, maybe we should just dive into one of the claims that you made, Andy, that you
say you're very, very confident about, which is that an individual prompt of a chatbot like
ChatGPT just does not have much impact on somebody's overall water usage. Make that case
for us. Yeah, a lot to say. So most importantly, there's this common wisdom floating around
that I think is pretty conclusively wrong, which is the idea that an individual chatbot prompt
uses an entire bottle of water. This is based on a misreading of a Washington Post article, which
itself I thought was pretty strange where the article implied that writing a single email with
chat GPT uses a whole bottle of water. And for that to be true, the article seems to assume,
among other things, that you're sending like 10 to 20 chat GPT prompts to write this 100-word email.
Like, I'm a power user of chatbots myself, but I don't use them that much.
It's also assuming a lot, like a lot of the water that it considers is actually in like off-site
lakes dammed by hydroelectric plants.
And it's considering the water evaporated on those lakes by the sun.
And then even there, after that, it's also assuming that these models are not any more
efficient than they were in like the year 2020 specifically.
So I would really like that talking point.
to go away. Basically, like, that's just a horrible misreading of a kind of, like,
inconclusive article. Our best current data on this, I think, comes from Google,
which seems to imply that the on-site, like, in-a-data-center water cost of a chatbot prompt
is about 0.26 milliliters. And if you add the off-site costs, like the water cost of generating
the electricity, that might rise to, like, two milliliters or something. And obviously, like,
there's a range of uncertainty in different prompts, use different amounts. But I want to be
able to at least start by saying, like, well, let's look at the median prompt. And so, like,
putting that number in context, if you look at like two milliliters and ask how much does this
contribute to my daily water footprint, my best understanding is that the average American's
total daily water footprint that they consume is something like 1,600 liters altogether.
So what this means is that this is about 800,000 times as much water as a prompt uses.
And so to raise your total daily water footprint by even 1%, you would have to send something
like 8,000 median prompts.
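(For readers following along at home, here is a quick back-of-the-envelope check of that arithmetic in Python. The inputs are the figures Andy cites above, roughly 2 milliliters per median prompt including off-site electricity water, and a roughly 1,600-liter average daily footprint; these are his quoted estimates, not independent measurements.)

```python
# Sanity check of the per-prompt water arithmetic quoted above.
# Inputs are the estimates cited in the conversation, not measurements of our own.

PROMPT_WATER_ML = 2.0         # ~2 mL per median prompt, including off-site electricity water
DAILY_FOOTPRINT_L = 1_600.0   # rough average American daily water footprint, in liters

daily_footprint_ml = DAILY_FOOTPRINT_L * 1_000                      # 1,600,000 mL per day
prompts_per_daily_footprint = daily_footprint_ml / PROMPT_WATER_ML  # ~800,000 prompts' worth of water
prompts_for_one_percent = 0.01 * prompts_per_daily_footprint        # ~8,000 prompts to add 1%

print(f"Daily footprint is roughly {prompts_per_daily_footprint:,.0f}x one median prompt")
print(f"Prompts needed to add 1% to a day's footprint: ~{prompts_for_one_percent:,.0f}")
```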
And if you're spending your entire day doing that, just like constantly
poking and sending these prompts. Like, it's very likely that you're not doing a lot of other
things that also use a lot of water. And so it's actually very likely that using AI on net
might actually significantly reduce your total water footprint like over the course of the day.
Like the numbers are just so small that it's very comical. It's something like if you don't
buy like one additional pair of jeans, that's like two million prompts, which is probably
more than I'll do in my lifetime. You know, so, like, I think there, especially, people just have a lot
of wild misconceptions about how this is impacting their water footprint.
Yeah, I mean, got it. And Kevin actually hasn't bought a new pair of pants in years. So that actually is probably
going to give him a lot more chat GPT credits than the average person. So that's the kind
of micro argument here, but more often these days, and I still do see these, like, statistics
about, you know, each chat GPT prompt uses a bottle of water. I still see those floating around,
but more what I see is sort of the industrial, the sort of macro view of this, which is, look,
we're building all these AI data centers. They're using a lot of water. Now, using is sort of a broad
term. It can include, as I have learned from your writing and other writing, some of this water
just gets recycled back into the water source after it's used to cool the data centers, but some of
it is evaporated or does sort of get taken out of the water sort of ecosystem. So there are lots of
different kinds of water usage going on. But what do you think about the argument that the sort of
scale of these data centers, the scale of the water needs of these data centers, is something
that people should be paying attention to relative to other ways that we use water?
Yeah, I mean, first I just want to flag that I'm very happy that the conversation has shifted in this direction.
I think that whenever we're talking about climate, it's almost always a distraction to focus too much on your personal footprint.
And we should be talking about the big systematic things in society and what impacts they can have.
So, yeah, there's a lot to say about the total industrial use of water by AI and data centers more broadly.
It's definitely a lot by the standards of an individual.
You know, like, you can read about like millions of gallons used every day and think like, oh, you know, I don't use that much.
That's crazy.
you know, and I think the first thing I want to get across to people is that literally all industrial use of water in society sounds crazy from the perspective of an individual. Like if you report something as using millions of gallons per day, like this could either apply to like AI. It could apply to like berries that we grow. It could apply to the U.S. steel industry and things like that. There are just so many different ways that these huge numbers can pop up when we're talking about this stuff. So again, like instead of talking about big general numbers or numbers of gallons,
of water that AI uses. I try to contextualize it just using either the total amount that like
America uses altogether or just comparing it to other things. Another complicating factor here is that
the last year we have really good data on this is 2023. And obviously AI has grown a lot since then.
So I can kind of start with like a 2023 background and say my best guess for like data center
water use, like how much water data centers were using in America specifically in 2023, was
something like eight times the water that my hometown used. I'm from a town of about 15,000
people. And so, like, AI altogether was effectively like building eight new towns like that
across the country. And like obviously that's not nothing. That's like a lot from the perspective
of an individual. But it's kind of something that I would expect the water system to be able to
absorb and to manage pretty well. Andy, tell us where that 2023 number comes from and maybe why
we don't have more updated numbers. Yeah. I mean, the reason we don't have more updated numbers is
that AI is just very fast moving and it's hard to understand like exactly how much water
companies are using because they're not always totally transparent. This number comes from the Lawrence
Berkeley National Laboratory. They have a really great general report on data center energy and
water usage. I think that's probably the best single document if you want to start doing deep dives
on this yourself. They're just like broadly respected. This is the most comprehensive general
report that we have. So a lot of my writing has been based on that. But yeah, because 2023 is the last
year we have good data and there's a lot that's changed since then. I'm starting with that,
but flagging that obviously like a lot's grown since then, but we also need to make some reasonable
guesses about how much the growth has happened.
Got it. And that 2023 data that you have, I wonder if you could put it in the context
of like another industry. Like how does the AI industry compare to like other industries that
we have in America when it comes to their water use? Yeah, it's a great question. By 2030,
it could be as much as about 250 square miles of irrigated corn, which is about 1% of America's
corn. So again, I'm trying to frame this as like, you know, we should be about as worried about this
as America's irrigated corn growing by about 1% or something.
And again, like, this really isn't nothing.
Like, that's a huge amount of water.
But the reason I'm comparing this to other industries, like agriculture,
other things too, like, even just household leaks, use a lot more.
The reason why I'm making these comparisons is more to say, like,
should we expect the current water system to be able to absorb this
and where should we expect to actually see problems?
And I think just based on how much water we use in most other contexts,
we shouldn't actually expect this to be a huge, disruptive
disaster for water use. Like, there could be problems in the way that there are problems with other
large industries, but I kind of expect this to be like one of many just normal industrial uses
of water going forward. So I've come away thinking this really isn't as much of an issue as
people think. Yeah, but obviously there are issues with the AI infrastructure build out. Yeah.
We've talked about some of them on the show. One of them is that there's just like not enough
power to power these data centers. And so because it's very hard and time consuming to
build new renewable sources for these data centers like solar and wind farms and things like that
and nuclear, companies have been doing things like fracking or relying on older or sort of less
energy efficient, less environmentally friendly forms of power generation to power these data
centers. So are you worried about those things? Are you saying it's all overblown and we shouldn't
be worried about any of the environmental impacts of AI? Definitely not saying that is overblown,
especially. I basically think the water case seems kind of open and shut to me right now,
whereas the power case is a huge question mark. One way to think about this that I think is
useful is that as a proportion of our total system, the increase in electricity going to
AI will be an order of magnitude more. Like from what I can tell, it's going to rise from like
0.5% to like 5% by 2030 of our national electric grid. Whereas for water, it might only rise to about
like 0.5%. So like there's an order of magnitude difference there.
And there's just a lot to think about.
And there have already been, I think, like, pretty bad effects on the environment from data centers.
Like, there's been a lot of instances of, like, coal plants staying open.
And so this is the main thing that I am really quite worried about.
And I think that, like, the answer to this is going to involve a lot of different things.
Like, it might involve, in some cases, data centers not getting built.
Like, I'm definitely not coming in saying, like, oh, data centers should always be built everywhere or whatever.
It's just that I do want the full set of tradeoffs to be considered.
And there are also potentially a lot of positives about the buildout for the
energy system, like a lot of AI companies are, you know, committing to buy more, like, renewable
energy as a proportion of their total energy than a lot of other industries and stuff like that.
So that could at least lead to like economies of scale in renewable energy. And that might not
cancel out the negative effects either. I kind of just want people to be aware of the full
picture here. But yeah, I have to admit, I'm really agnostic about the future of AI and energy
stuff. And like this is a place where there are just so many unknown unknowns that it feels
really hard to say anything concretely. So my ask isn't that, like, we assume
it's fine. I just ask that like we weigh the pros and cons and consider the tradeoffs rather than
just saying like, oh, any new energy usage is bad or like if AI causes any new emissions,
it must be bad because ultimately any new industry is going to come with some additional
emissions. That is a very intellectually honest and principled crusade. It has also been my
observation that your arguments are beloved by the AI accelerationists. David Sacks,
the White House AI czar, just reposted you
on X.
Yeah.
And I think, frankly, a lot of people, when they read your stuff, they think, gosh, this is
great.
I can finally relax about the environmental impact of AI.
And I can maybe ignore some of the other stories that I was seeing.
Now, I know that your intention is just to give people the facts about water in particular.
And now you're also sharing some information with us about data centers.
But I wonder how you feel about the ways in which your work is being used, maybe by people
who actually don't care about the environment at all.
I mean, not great, to be clear.
Like, I think the general weird thing is I'm diving into this huge contentious debate about, like,
one of the largest new industrial buildouts ever, period, definitely in my lifetime, personally.
And so, like, some weird allies are going to pop up.
And, yeah, I'm not especially excited when people use this as a cudgel against environmentalists.
Like, I definitely identify very strongly as an environmentalist.
I do really worry about anybody who's concerned about climate and the environment,
worrying at all about these really tiny levels of emissions in their day-to-day lives.
like it makes me really sad. Honestly, I feel like back when I was in college, there was just a much stronger consensus that like personal emissions are like okay to worry about. But the main game, like the main thing to worry about, is the green energy build out. And like I feel like if I could take myself forward in time to the present and see like, oh, now everyone is kind of litigating like equivalently like whether they watch YouTube for a few extra seconds a day or something. Like that would just kind of depress me. So I do think that like from an environmentalist standpoint, I do kind of want to snap
people out of this incredibly tiny concern in parts of their lives. And if I make some allies
with people who hate environmentalists, that's unfortunate. But like, I do want to get the message
out there. I think more broadly, though, I'm becoming more and more convinced that, like, bad
arguments against AI are mostly just drowning out the good ones. Like,
I think there are just a ton of reasons to be worried, even for like non-weird sci-fi scenarios
that like AI as it currently exists can be harming people a lot. And I think that to get a full
sense of what's going on, like the average person should probably at least poke around
a chatbot a little bit and just have some general idea of, like, what can these models do?
Like, you don't have to use them in your life ever. Like, I'm not saying, like, you are
obligated to use AI. But I would at least like people to understand that these are very different
from, like, a roulette wheel that just lands on random words and can't possibly produce any value.
Because a lot of people still seem to be motivated by this. And, like, they seem to be motivated
in part to not use them by these environmental arguments. I do actually, I have worried about
this a lot where I'm like, oh, I'm worried that I'm coming off on, like, one strong side of this and
just helping the pro-AI side, which I don't really want to do. But I have to admit, for the most
part, just like the general understanding of this stuff is so bad. And I would like my side to at least
be, like, armed with a correct understanding of the world. And this just feels like, honestly,
shooting fish in a barrel, like so many misunderstandings are still floating around everywhere,
that this all leads me to the direction of thinking, like, okay, I'd like people to actually
understand that this one thing is incorrect. Let's focus on the more serious stuff instead,
including, like, things that I think a lot of AI critics would otherwise, like, agree with me on.
Yeah, I mean, I think if I try to, like, steelman the argument that you may face or put myself in the shoes of someone who's very worried about AI water use, I would just say, like, look, whatever the actual statistics are, however much water AI does or doesn't use, the issue is, like, we didn't sign up for this as a society.
We didn't ask these giant Silicon Valley tech companies to start putting up data centers all over the country or straining the electrical grid or just diverting a huge percentage of the economy into building out these
powerful AI systems. This all happened very suddenly and without any real like democratic
participation. And it just feels bad to see this transformation of our society into something
that seems designed to support this technology that I don't like that doesn't seem that
useful to me and that, you know, I was never consulted about. So like what's your response to that
version of the argument? Yeah, a lot of things to say. I guess I want to respond to it with a lot of
respect, first of all, because I'm trying to write for an audience where it doesn't matter whether
you love or hate AI. Like, you know, I'm very worried about AI overall, but also a very big fan of
AI tools as they currently exist. Like, I use chatbots pretty regularly. And this does bias my writing
to an extent where I do think, you know, like, I have a lot of positive impressions of AI as it
currently exists. And I want to try to reach people who don't, basically. If you think AI is bad,
then obviously you should also, like, generally oppose it using water for the same reason that
you should oppose anything else bad that's using water.
But that opposition should also come with a sense of proportion of where the actual water
issues are in the country.
And like there are a lot of other things I think are bad that use a lot of water, specifically
animal agriculture.
And like there, I do actually feel a little bit more justified in focusing on that a lot just
because it's using such a huge amount of the nation's water, something like a quarter
of your total dietary footprint for water comes from like the food that is grown to
eventually feed animals and stuff like that.
And so in thinking about this, I do want people to not be upset if they don't like AI, but I do want them to at least prioritize like, okay, if I care about water, will it actually make much of a dent at all if we shut down all AI data centers like tomorrow?
Like how much will this actually help America's water problems?
And my basic claim is that it's not really going to make much of a dent compared to a lot of other issues.
The very final thing is that I do worry that in these debates, a lot of the potential positive aspects of data centers for local communities are getting drowned out.
And obviously, there are a lot of tradeoffs, there are a lot of negatives, too.
But data centers also just often add a ton of tax revenue for sometimes very poor communities.
They can provide a lot of utility revenue for utilities to upgrade aging infrastructure.
And in a lot of places in America, the main water access issue isn't necessarily the raw amount of fresh water.
It's like aging infrastructure that might have dangerous amounts of lead or just make it more expensive to deliver the water in other ways.
And so, like, a new very large buyer can actually help water access here.
And again, I don't want to say that makes data centers good all the time, but I do want to say that we need to at least consider these tradeoffs and not just potentially steamroll something that might, in very specific instances, help communities rather than hurt them.
Hank Green, a past Hard Fork guest, made a YouTube video the other day about this AI water use issue.
And one of the points he made that I thought was interesting was that there's just bad data and misdirection happening on both sides of this argument.
Basically, you know, there are some AI critics out there who are using bad data to make unsupported claims about water use.
But the AI companies themselves are also guilty of sort of making their environmental footprints look smaller than they are by doing sort of fancy footwork with the data, doing things like telling us how much a single prompt uses when most of the environmental cost of these AI models happens during training, not inference, and ignoring the fact that a lot of these AI models now have these sort of long reasoning chains that output like a bunch more tokens than a standard
chatbot query, and they're not giving us like those numbers for the most part.
So what were your thoughts on his argument and are there any places you disagreed with it?
I mean, I love the video personally. I think I've been going a little bit crazy by
reading a lot of what I thought of as very bad coverage of the issue that left people less
informed. And like, Hank's video was just phenomenal. I thought like throughout it, I was like
nodding along thinking like, man, this is actually just going to give people such a better
understanding of water in America, like where AI fits into this. It's weird for me to like
defend AI companies here. I will say that for a while, there was a lot of demand from a lot of
different places for AI companies to, like, release data on the per prompt cost of chatbots. And,
like, we're not being told the full picture here. And now that they do that, people will be like,
oh, well, this distracts from the industrial use and the total use. And I'm like, well, you know,
like, I would rather have this information about the individual prompts. And we can look separately
at the total amounts. But like, we kind of have to hold them to account for one thing at a time.
And here I think they've done an okay job, basically. And then, yeah, there are a lot of like
invisible costs like the cost of training, which is a little bit nebulous and then the offsite
cost and things like that. I guess when like Hank says things like that or when other people say
things like we're not being given the full picture here, I usually want to clarify that if you
actually dig into that full picture and try to draw error bars around like what is the absolute
most water this thing could be using, it still just never comes up to a point where I would
personally worry about this adding to my personal water footprint at all. And so like I did find
one place I disagreed with Hank was more in his tone.
He was just, like, hedging a ton and being like, oh, there are all these different complexities
and I usually want to push people when they're doing that to say like, yeah, the complexities
are real, but I still want to acknowledge that we have this like reasonable range of uncertainty
and within that range, like I'm just not finding a place where this is a serious concern on the
individual level.
Now, Andy, I know you said that in general we shouldn't focus on our, you know, individual
impact on the environment, but I wonder how you think we should think about the fact that
Kevin flies a private plane from Oakland to San Francisco every week for our tapings.
That's unchill.
Yeah.
Yeah, yeah, yeah, yeah, a little bit unchill.
Yeah, I mean, at the extremes, I think this is actually, like, a good lead-in to, like, if you are going to focus on your individual impact, there are just some things that just use hundreds of thousands of times as much energy and water as other stuff where, like, they're just such crazy, easy wins everywhere.
So definitely, like, consider that.
But just, like, hold in your heart that, like, the main challenge here, the main fight that matters is, like, helping with the green energy transition.
So, Andy, as you think about the year to come, how are you expecting to blog?
Are you going to continue shooting fish in a barrel, as you say, and sort of find more instances
in the mainstream media, people sort of, you know, making these arguments about water?
Or do you feel like you've sort of concluded that debate and you're moving on?
I don't know that there's too much else to say.
Like, it would be exciting to write about that elsewhere.
But I think for my blog itself, like, my main pieces on this are just pretty comprehensive.
So I would definitely recommend if people want to see my full argument.
You can read them there.
Yeah, I'm honestly just very, like, personally tired of the topic.
Like, every time I type data centers and water now, I just feel myself getting tired, honestly.
So there's a lot of other stuff to say.
We'd definitely like to cover the electricity stuff going forward.
Like, I think that's just the most interesting, complicated question here with a lot of unknown unknowns in both directions.
And it's really cool.
It, like, incorporates a lot of, like, climate ethics and questions about, like, you know, grid stuff,
which I've been really excited about since I was a
teenager, honestly, just like learning about the grid is really cool. And then eventually would also
like to write more about like other AI related stuff. Like I do worry that I've kind of pigeonholed
myself a little bit in like writing so much about how like, oh, the environment stuff is really
overblown that I think people are sometimes missing that like, oh, I'm quite worried about this
for all these other reasons. So I would definitely like to write more about how like, oh, hey,
like what will be the effects on labor? Like how will this enable like government surveillance and
stuff like that. Like, what is the X-risk case like, and things like that? So I'd definitely like
to pivot to that more over the next year. Andy, thank you so much. Really appreciate it.
Yeah, it was a blast. Thank you guys.
Well, Casey, that segment is H2-Over. When we come back, we'll send off the year that was
with a new segment called Hard Fork Wrapped.
Well, Kevin, the end of the year is upon us, and that means it is time for Hard Fork Wrapped.
Yes, and what is Hard Fork Wrapped?
Well, as you may know, since 2015, Spotify has been giving users a recap of their music and listening habits for the year.
And so this year we thought, why don't we just steal that idea, and we will tell listeners a little bit about what they've been listening to all year.
And beyond just giving you a few stats, we can also give you a few updates on some of our most talked about topics and segments of the year.
Yes, we can do that?
But first, can I just say what my favorite piece of the Spotify Wrapped discourse has been this year?
Please say it.
So this year, people started noticing that some of their Spotify Wrapped was made by AI, and people started posting about it online,
unhappy that, like, Spotify was using AI to do their Wrapped. And, like, the backlash was, like, you thought
they were just doing this by hand? You thought there was, like, a guy in Spotify's headquarters just
like scribbling down your own personal thing? Anyway, I derive pleasure from that.
Yeah, I thought it was that AI DJ. I thought he was the one who was taking all the notes on my listening.
Yeah, he's the author.
With that, Kevin, I think we should start by sharing a few stats with listeners about the year that
we've had here on the Hard Fork show.
What a year it's been.
So, Casey, so far in 2025, we have done, drum roll please, a collective 3,524 minutes of Hard Fork. That's about 59 hours, or about two and a half days.
Two and a half days we spent talking to each other this year.
Actually, that's an undercount, because we only run about a third of what we record.
So, yes. And according to our data on Spotify, which we get as creators, we saw that we had collectively 1.4 million hours of people
listening to Hard Fork on Spotify, which I'll say it, that's too much. That's 160 years of continuous
listening that you did this year. No, you're thinking about this the wrong way, Kevin. That's 1.4 million
hours of going on a run, of doing those dishes, of folding that laundry. Every single minute,
you spent listening to Hard Fork and doing something else
was a minute that you also spent doing something else
and I salute you.
Yes.
Yes, I would love the breakdown in the Spotify Wrapped of actual time that people were listening, and, you know, if there's some way to, like, use AI
to determine whether you were just tuned out.
You just left it on in the kitchen.
Forgot about it.
I want to know that too.
Just a few more fun facts.
We've done 52 episodes this year.
I think we're planning on 56 by the time the year is over.
You will probably guess our listeners'
top artist, a sort of
pretty famous pop star by the name of
Taylor Swift. Oh, yeah? Yeah.
And you may also have guessed
our listeners' top audiobook,
Abundance by Ezra Klein and Derek Thompson.
Yeah, yeah. Wow.
One final set of
stats about the year that was, Kevin,
our producer asked
Notebook LM,
which we have been training on a
transcript of every episode of the show, what it thinks our audience's listening age was. This was a new Spotify Wrapped feature this year. It told you how old it thinks you are based on the music you listen to. It told me that my listening age was 86. That is not a joke. Um, here's the good news for...
Wait, really? That's incredible. Wait, I have to ask: what in your listening history made it think that you were 86 years old?
I listened to the Beatles, and I'm not even joking. But that's the problem: I like the Beatles, and they were like, are you a hundred? I was like, no, I like pop music.
Unbelievable. Wait, that's amazing. Mine was only 18.
Wow. Well, good news. You can still listen to Spotify in Australia, then.
I think it's because my kids', like, Blippi music got mixed up with mine. So it sort of took the average.
I see. Well, here's the good news for Hard Fork listeners: your listening age is estimated to be 35 to 45. And we actually got some demographic data back recently that says that actually that is exactly right. And statistically, you are between 35 and 45 if you listen to the show. But we also have a lot of younger listeners as well, and even some older ones. So, you know, Hard Fork: a show for the whole family.
Whole family.
But what we really wanted to do, Kevin, with this Wrapped segment, is to put a bow, if you will,
on some of the most discussed topics and stories of the year so far.
And so we came up with a couple of cute categories, not unlike Spotify.
And first up, in a year that has been dominated by our discussion of artificial intelligence,
we have an update that we're calling the AI regulation regulation of the year.
Okay, I'm listening.
Now, as you know, states, including California and Colorado, have started to pass laws regulating AI.
California, for example, passed a law requiring that AI labs publish transparency reports about
new risks that their models create and how they plan to handle catastrophes.
And the accelerationists in the Trump administration hate this because they say it's going to make
life too difficult for the AI labs.
It's going to make us less competitive against China.
And so this week, President Trump said in a post on Truth Social that he will sign an executive order to try to keep these state laws from being passed. At the time of this recording, we don't have the final language, but in a draft executive order that circulated last month, Trump directs the U.S. attorney general to sue states to overturn
AI laws, particularly if they think they can prove that they infringe on interstate commerce,
and they also plan to tell federal regulators to withhold broadband funds and other funding to
states that pass these AI laws. So, Kevin, what do you make of our AI regulation regulation of the year?
So the most enraging part of the AI regulation debate
is that there is this whole group of people,
mostly in the White House and around the White House,
who have just sort of bet the House on what they call federal preemption, right?
This is the notion that instead of having a bunch of different state laws governing AI,
the federal government should step in and make its own plan,
which on its face, I think, would be a good idea.
The problem is when people ask,
okay, so what's your federal plan
and like how are you going to get it through Congress?
They basically just start
like hemming and hawing and try to change the topic.
Yeah, I think this one
is going to face a lot of trouble in the courts.
You have to remember, there was an earlier effort
to pass a moratorium on state AI regulations.
The Senate voted against it 99 to 1.
So this is an idea that both Democrats
and Republicans really, really don't like.
That was only one of the
attempts. They also tried to attach it to the defense spending authorization bill, and that failed as
well. So, like, after trying to sneak this thing in through several different means, they have now
decided, well, we're just going to do it by executive order. Yeah. And based on the reading I've done,
that does not seem particularly legal, although, of course, you know, what will the Supreme Court decide
if it gets there? Who knows? But this remains one to watch, because as you say, Kevin, if states are
truly forbidden from passing any AI regulations over the next year, it just completely changes
the landscape of what these labs are and aren't allowed to do in ways that I think might be
bad.
Totally. All right. What's next on our Hard Fork Wrapped?
Next up. Well, you know, Kevin, we talk about many different nations on Hard Fork over the course of the year. But when it comes time to do Wrapped, there's really only one country that qualifies in the category of Other Country of the Year.
Is it Australia?
You know what?
It came very close.
It was a very close race this year.
But for 2025,
according to Hard Fork Wrapped,
our other country of the year is China.
China.
You know, China is our other country of the year
for many reasons.
I think it starts way back
when DeepSeek's R1 model came out,
and we saw this temporary freakout
over whether China
was on the verge of overtaking the United
States in AI development. That idea has since loomed over all efforts to regulate AI here in the
United States. And while DeepSeek, I think, hasn't lasted in the public imagination in quite the
way that we thought it was going to, it has continued to release some new updates. And maybe those are
actually worth talking about for a minute. Yeah. So DeepSeek, first of all, I can't believe that was this year. If you'd asked me what year the DeepSeek thing happened, I would have guessed 2023.
But yes, that was just earlier this year that the stock market reeled from the news that this Chinese AI company appeared to have trained a model that was competitive with our state-of-the-art American models for much less than American companies were spending.
I think that was always a little bit questionable as a narrative.
I never really bought the figures that were thrown around.
But I think this was a big moment in terms of people realizing that China was not actually,
you know, that far behind the state of the art. And I think they've continued to show that.
They released some new models recently that show that maybe they're not right at the frontier,
but they're, you know, a couple steps behind. Yeah, it is a good model. DeepSeek makes good models.
There has been reporting that the number of free open source models used by American companies
that are Chinese models has been on the rise all year, and that now a significant percentage of U.S. companies that are building with these models are using Chinese options. DeepSeek is a big part of that story, even if they are not as good as, I would say, the leading
three frontier labs.
What else went on in China recently, Kevin, that gives it the designation of our other country
of the year?
So the big story this week has been about whether NVIDIA can sell these chips, these H-200
chips to China.
These are some of the most advanced chips on the market.
And for obvious reasons,
NVIDIA wants to be able to sell them to companies in China.
Jensen Huang, the CEO, has been lobbying hard for this.
Currently, they are limited in which chips they can sell to Chinese tech companies.
And a lot of people in the U.S. national security establishment
and at the frontier AI companies have been very worried about this.
They've basically been saying, look, China is our biggest strategic adversary.
We know that AI is a huge priority for them.
why would we allow them to have these very powerful chips
and sort of erode the lead that we have over them
when it comes to creating these frontier models?
So a lot of folks have concerns about this,
but in the Trump administration,
they have basically said,
we don't think this is going to be a strategic advantage for China.
And so Trump this week said that Nvidia
can now sell these more powerful H-200 chips to China,
and the government will take a 25% cut of these sales.
Now, one interesting wrinkle here is that it's not at all clear yet
whether the Chinese government actually is going to let Chinese companies use these chips.
They have been building their own chips domestically.
Companies like Huawei have been developing their own chips.
But I think it's fair to say that they will find some use for these advanced chips
if they are allowed to buy them.
Well, here's my case that this gives China a competitive advantage, Kevin.
There was a story in The Information this week that said that DeepSeek is developing its new AI model using several thousand NVIDIA Blackwell chips, which are NVIDIA's top-of-the-line chips,
which remain banned for export to China.
And apparently, DeepSeek was able to smuggle those chips into China via third-party countries,
essentially having some other country like Malaysia, you know, make the purchase happen there,
get it into China.
So, you know, if DeepSeek thinks it's worth running a smuggling operation to get these
top-of-the-line chips, my guess is that China is actually going to find it useful to get these
H-200s into China.
Yeah. I mean, this to me just feels like a complete victory for the NVIDIA lobbying machine, which has been pushing hard for, you know, the relaxing of these regulations for some time. I don't understand why anyone in the Trump administration thinks this is a good idea. I've heard their arguments about how the entire world's AI stack needs to be sort of American. But I think that allowing your biggest geopolitical adversary to have access
to the key ingredient in building powerful AI systems
at a time when American companies are racing
to stay at the frontier, I just can't see any good reason
for that other than we got worked by NVIDIA's lobbyists.
Well, Kevin, that brings us to the final item
in this year's Hard Fork Wrapped,
and that is the most podcasted about podcast.
And this year, our most podcasted about podcast
was our interview with Roblox CEO David Baszucki.
Yes, this was a wild
one. We haven't actually gotten the chance to catch up about this because it aired on the show a few weeks ago, and then we immediately went into another episode that we had been planning in the holiday break. But Casey, this was by far the most response that we got to an episode all year. If you have not listened to that episode, you should definitely go back and listen to it. What feedback did you get, Casey? I mean, I think we honestly just got a lot of really nice notes from listeners.
I think that listeners, like us, get very frustrated with the world of tech.
And as journalists, we get the rare opportunity to actually sit across the table from people
who run these companies and try to ask them the questions that are on the minds of a lot of
our listeners.
Like, why did you let strangers contact children on your platform for 20 years before
you thought to do something about it?
And I think the response just really shows that people are craving some kind of
accountability for the decisions that these platform executives are making. What kind of
feedback did you get? Yeah, I mean, people were really upset about how Dave Baszucki answered our
questions. There were some other podcasts, as you mentioned, that sort of either followed up or
covered this in some way. Some YouTube creators took it upon themselves to make videos about
the situation. There were lots of articles in, like, gaming publications. This became, like, a news
story. And I was surprised not by the sort of reaction. I think in the room with Dave, it felt
very tense and confrontational and like it had some explosive potential. But I think I was sort of
just surprised because I don't think of us as a podcast that brings on CEOs and kind of ambushes
them with like hard-hitting gotcha questions. Like obviously we're journalists and we feel a responsibility
to ask tough questions of powerful and important people
and leaders at these companies
certainly are powerful and important.
But I was not ready for a fight
when we went into that interview.
And I think he was just so defensive
and so taken aback by our line of questioning
in a way that was frankly kind of wild to me
because his team had been fully aware
that we planned to talk about child safety
and yet he seemed totally unprepared for those questions.
So I was just, I was sort of surprised.
And I guess I was encouraged by the feedback.
We had lots of parents writing in saying, you know, this sort of changed the way I think about
Roblox or I'm not going to let my kid use this now.
And to me, that just suggests that like doing journalism has consequences and I feel good
about what we did and how we showed up that day.
Yeah, we also just got a lot of emails from people whose kids or whose family members
had had terrible experiences on Roblox.
And that was actually quite eye-opening for me because I think as somebody who
doesn't cover Roblox that much, I just had not been aware of just how many kids were having
trouble on this platform. I'll say, Kevin, like, setting aside the kind of, you know, tougher questions
that we had planned to ask him just because they had been very much in the news and they seemed
like they deserved answers. At the end of the day, I think the answer that he gave that surprised me
the most was when he said essentially that, yes, he did think that we should consider putting
prediction markets into these games for children, like essentially onboarding them into the, you know,
vibrant and dynamic world of gambling. Like, that was the moment, like, sitting here in the studio where I was just sort of like, I can't believe we're talking about this. So that just really surprised me, and I think will maybe stay with me as long as, you know, anything else
that happened during that interview. Yeah. So one theory that I heard in the days after that
episode that I wanted to ask your opinion of is, like, there's this kind of universe now of
podcasts, of shows where CEOs can kind of go to get kind of softball questions from friendly
interviewers. And someone I know was saying, like, this has basically left these CEOs sort of
unprepared for encounters with real journalists, like taking your like, you know, your dog and like
expecting it to live in the wild after it's been domesticated. It's like these CEOs just,
they've spent so much time going on these friendly, you know, non-confrontational podcasts and doing these sort of interviews with these creators that they have sort of forgotten what it is like to be interviewed by a real journalist. Do you buy that at all? I mean, I do think
you can just observe that, over the past five years, CEOs in general have been spending less time
going on podcasts hosted by journalists and more time going on podcasts hosted by creators. And I think
there's a lot of stuff that happens in that world that is, like, undisclosed and frankly kind of
shady. Like, I'm aware of podcasters who are accepting, like, $25,000 to agree to interview somebody,
right? Like, there's a lot of pay for play that's going on out there. And that is, I think,
kind of maybe dulling some of the senses of these CEOs. But, you know, at the end of the day,
I think these are people, these are literal billionaires, right? Dave Baszucki is a billionaire.
These people have profited massively from the success of their platforms. And they should just be
able to answer basic questions about safety and not feel like that is like a vicious attack on
their character. So yeah, maybe that is part of the dynamic here. Or maybe, Kevin, it's just that
ultimately the CEOs of these social platforms in particular have just never wanted to talk about
this stuff. I mean, I feel like that's how it's been with a lot of the CEOs that I cover.
They've always hated talking about the safety, the policy questions, the stuff that involves
difficult tradeoffs, like cases where there were harms on their platforms.
They have really always shied away from talking about it. And, you know, the more that you shy away from talking about something, probably the worse you're going to be at talking about it.
So I think there's a lesson there. Totally. All right, Casey, that is Hard Fork Wrapped for
2025. What have we learned about our show and the topics that we cover? I think based on what
we just covered, what I've learned is that Hard Fork is one of the most interesting shows in the
entire world. And I want to give credit to you and the entire team for a great 52 episodes.
We will continue to be making fresh Hard Fork episodes for you through the very end of 2025, and that's the Hard Fork promise.
We never rest, just like Santa.
Actually, he does rest.
We're 996, baby.
Yeah.
At least Santa gets, like, the summer off, you know?
Santa is asleep at the wheel, and we're grinding.
Casey, before we go, let's make our AI disclosures.
I work at the New York Times Company, which is suing OpenAI and Microsoft over alleged copyright violations.
And my boyfriend works at Anthropic.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
Today's show was fact-checked by Will Peischel and engineered by Katie McMurran. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Soya Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube at YouTube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. You can
email us at Hardfork at
NYTimes.com with your water
usage for the year.
You know,
