Hard Fork - Charlie Kirk and Online Rage + Inside Trump’s Chip Flip + This Week in A.I.
Episode Date: September 19, 2025
This week, we discuss the assassination of Charlie Kirk, freedom of speech and the terrifying new reality of extremely online violence. Then The Times's David Yaffe-Bellany brings us inside the blockbuster New York Times investigation into a $2 billion investment in Trump's crypto company World Liberty Financial and a controversial deal to send the most powerful A.I. chips to the United Arab Emirates. And finally, it's time to round up This Week in A.I.
Guests: David Yaffe-Bellany, New York Times technology reporter covering the crypto industry
Additional Reading:
Trump Administration Wields Its Full Toolbox to Bring Media to Heel
Social Platforms Duck Blame for Inflaming Divisions Before Charlie Kirk's Death
In Giant Deals, U.A.E. Got Chips, and Trump Team Got Crypto Riches
OpenAI's Risky Step to Protect Teens
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Are you watching this Alien Earth show?
No.
Okay, first of all, it's so good.
It's my favorite thing I've seen this year.
Second of all, one of the subplots is that there is a character who is a kind of like,
you know, synthetic, a humanoid person, and she has mastered the alien's language.
So she can communicate with the language, presumably using some sort of machine learning.
Yeah.
And it's paying huge dividends for her because that alien is eating a lot of people.
He's leaving her alone.
And that's why communication is so important.
I do think that interspecies podcasting is going to be huge.
Yeah?
Like, I recently got a pitch.
There was a, you know, a PR person I know who was sort of telling me,
oh, you know, we have this AI company we're working with,
and they're learning how to translate whale song using AI.
And would you like to have the founder on?
I said, no, but I'd love to have the whale on.
If the whale would like to come on hard fork, we would love to interview the whale.
I have so many questions for a whale.
Well, like, what would you ask?
Uh, feelings on SeaWorld, Free Willy.
Yep.
Um, let's talk about, uh, what it's like at the bottom of the sea.
Tell us about the most interesting krill and plankton you've come across recently.
I feel like you don't actually have a lot of questions for this whale.
I like the idea, though.
In the old days, it was dog with a blog.
Remember dog with a blog?
Yes.
And now it's going to be cod with a pod.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, our thoughts on the assassination of Charlie Kirk
and the terrifying new reality of extremely online violence.
Then the Times's David Yaffe-Bellany joins us
to unpack a blockbuster investigation
into a $2 billion investment in Trump's crypto company
and a controversial deal to send high-end AI chips to the Middle East.
And finally, it's time for this week in AI.
Well, Casey, we're going to start the show on a more somber note than usual.
Obviously, many of you, like us, have been reading about the assassination of Charlie Kirk,
which happened while we were taping last week's episode, so we couldn't really get into it then.
We also just didn't know a lot.
But in the week or so since then, we have learned a lot more.
and we've had a lot more time to think and reflect and try to figure out what, if anything,
we could add to the conversation.
So, Casey, we're going to talk a little bit about what we've been talking about amongst
ourselves, what we're seeing, what we're thinking about, and some of the connections between
these internet platforms that we talk about on the show and these violent events out in the physical
world.
And what, if anything, can be done about that?
So, Casey, how are you doing?
How are you feeling? What has your reaction been to all of these events over the past week?
Well, I think like a lot of people, I was horrified by the event itself and have continued to be horrified as further events have unfolded.
I was not a close student of Charlie Kirk's work, but I know that he was a master of manipulating the platforms that I cover.
And so over the past week, I have been thinking a lot about his relationship to the platforms, what has happened on platforms since his murder, and what it tells us about how media works in this moment, and how politics and power work in this moment.
Say more about that, because I share your sense that, like, Charlie Kirk is sort of wrapped up in the Internet, and these were the
the platforms where he kind of made his reputation, Turning Point USA, the organization he started
was a major force in kind of early 2010's online activism and did a lot of viral content,
for lack of a better word, and that's sort of how he became famous, how he became who he was.
It wasn't by running for office or going on CNN. It was by making videos and posting them online.
Yeah, and specifically, I think he excelled at making what some of the platform nerds that I write
about would call borderline content. So basically saying things that come right up to the line of
breaking a platform's policy without quite going over. So for example, he speculated that
vaccines may have killed more than a million people without backing that up. He said it was a mistake
to pass the Civil Rights Act. He would criticize Jewish donors for funding, quote, radical open
border neoliberal quasi-Marxist policies, then reject claims he was being anti-Semitic.
And maybe he wasn't. But over and over again, you see him flirting with this line.
And the reason that that's important is because platforms love it when you flirt with the line.
It turns out that the most compelling thing you can do on social media is to almost break a policy.
So that is one of the key methods that Charlie Kirk used to gain notoriety, prominence,
and then eventually power and money. And he did it by mastering this particular technique.
And the thing about this technique is that as effective as it was for Charlie Kirk,
I think it's really corrosive on our politics. Because the more it succeeds, the more that
when you open up TikTok or Instagram Reels, you're seeing something that is designed to upset you. It's designed to make you uncomfortable, to challenge you.
And that kind of thing, which Charlie Kirk was so good at, is now kind of the way that a wide
swath of our politics works. Yeah. Yeah. I think that's a really good point. I remember this
graph that Mark Zuckerberg once posted on his Facebook account. This was a much earlier version of
Mark Zuckerberg who, you know, had not gone sort of, you know, full MAGA yet. But he was sort of
sharing some of his observations about the kind of content that worked on Facebook. And maybe we
could, you know, pull that graph up and put it in the show. But it basically showed that the
closer you get to the line, no matter where the line is drawn, this was the interesting thing.
It was like, no matter where a platform decides to draw the line and say this content is
unacceptable, the closer you get to that line, the more engagement you get. And he was sharing this not as sort of a celebration of this phenomenon.
He was just kind of like, look, this is what we observe happening on our platforms.
And it didn't seem like he knew what to do about it then.
And it doesn't seem like anyone knows what to do about it now.
Well, what platforms, both, you know, Facebook at the time and YouTube, said
was that they were going to downrank this kind of thing.
They said, hey, we see you edge lords out there.
We're not going to let you get away with this.
So we're going to try to restrict the spread.
of this borderline content. But as we look at the media landscape today, Kevin, it's a lot of
stuff that is just right on the border, right? It's stuff that you kind of can't believe that
people are getting away with. So people are sort of always finding these new borders and they're
trying to exploit them. And they're not always political. I'm thinking about, you remember the
milk crate challenge on TikTok? Yes. People are stacking up milk crates and then trying to like walk
across them, even though it's an incredibly rickety and dangerous structure. You know, at the time,
TikTok didn't have a policy for that, but then TikTok watched a bunch of kids falling off milk crates
and they had to say, we're not going to promote this sort of thing anymore. So this is just the
nature of platforms that are looking for the most engaging thing for you to look at. They're
always going to be drawing your attention to something that is just on the brink of maybe not being
okay. And in the wake of the Kirk assassination, Kevin, I've been struck by how it seems like
the entire cultural moment just feels like so many people are making borderline content, right?
It's people making edgy jokes online. Some of those people are being fired. Some are being
celebrated. Some are getting, you know, angry letters from politicians. And in a way,
this whole thing would have been a perfect story for Charlie Kirk to comment on, right? And to say
some really edgy things about. So you really cannot separate him from
these platform dynamics that he mastered
and are now the primary mechanism
by which his death is being discussed.
Yeah, I mean, unlike you,
I have watched a lot of Charlie Kirk videos.
A couple years ago,
I made a podcast called Rabbit Hole
about sort of the world of online radicalization
and extremism.
And as part of that,
I spent a long time watching the videos
and listening to the podcasts of the people who were sort of big parts of that story.
And Charlie Kirk was one of them.
And I should say, I don't think he was the edgiest or the most sort of radical by any means.
In fact, if I had to sort of put him into a bucket of online political influencer,
it would be in the kind of debate-me-bro genre, which was sort of what he was known for,
where he would, you know, show up at a college campus and have some debates.
And then there'd be clips of those debates that'd be posted on YouTube or other platforms.
And sometimes the video title would be like, you know, Charlie Kirk destroys liberal.
There was always this sort of very violent verb in there, you know, destroys, humiliates, owns.
This was sort of the blood sports of that era.
And I'll never forget this conversation I had with a left-wing YouTuber around this time.
And he was sort of saying, well, yeah, like, we have decided to fight fire with fire. Like, we cannot get any reach, any views, any distribution for our content on the
left if we do not sort of mimic the aesthetics of the right. So they would do their own
videos that were like, you know, liberal destroys Charlie Kirk or, you know, college student
owns Ben Shapiro. And so it was almost like they were just sort of adopting the same kind
of rage optimization framework, but just like switching the sign on it.
And that really struck me because it was like, I feel like the whole culture you're talking about now, like our whole society has kind of absorbed this optimization strategy.
Like we as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these kinds of individual cases of people getting, like, extremely mad at the person who made a joke about Charlie Kirk that was, you know,
edgy and tasteless, and going to, you know, report them to their employer and get them fired.
It's all this sort of spectacle of rage, this culture of destroying and owning and humiliating.
And I guess to me, what I've been feeling over the past week is that there just don't seem to be a lot of checks on that. There don't seem to be a lot of stabilizing forces right now, and the temperature of the internet and of our society feels really high.
Yes, and I think it is particularly true on X, and I think it might be worth drawing a distinction
between how X handled this situation and how the Twitter of old may have handled a similar
kind of murder. In the old days of Twitter, when there was a very violent video that was spreading,
you would have teams that would either try to take it down
or to ensure that it was hidden behind a screen
and they would probably downrank it,
you know, try to ensure that it was not being recommended to people.
Twitter was never perfect at that sort of thing.
But at the Twitter of old,
there was an effort to try to prevent people
who didn't want to see this content from seeing it.
Of course, Twitter also banned some of the most aggressive
and violent right-wing accounts, right?
People who had posted really racist and inflammatory stuff, they all got banned. Then Elon Musk buys
it, turns it into X, and what does he do? Well, first of all, he brings back all those right-wing
accounts, lets them post almost whatever they want. And then, crucially, he decides to use the
platform to push his own very inflammatory views, right? In the immediate aftermath of this,
before we know anything about the shooter, he tweets, the left is the party of murder.
And so today, if you look at X, it is easy to get the sense that we are on the brink of Civil War
and the right wing is gearing up for battle. In reality, I think the vast majority of Americans
are not on the brink of civil war. But if you were one of these elites who is still Twitter-brained and is still checking this site 20, 100 times a day, it is going to change your perception
and it is going to polarize you, and it is going to make you think that things are worse than they are.
So this is just a set of dynamics that I think is enormously consequential because the elites of this country are still on X,
and what they are seeing is a sort of very bloodthirsty vision of America, and that scares me quite a lot.
I don't know, man. Don't you feel like this is happening sort of on every platform?
Don't you feel like the temperature is rising on Bluesky and TikTok and Instagram?
Like, I feel like this is broader than X to me, and obviously X is a particularly potent example of a platform that has shifted its policies and its approach to sort of borderline content.
But I feel this everywhere I go online.
I do not feel like there is a place on the Internet right now that is being optimized for civil and respectful discussion and debate.
I feel like it is all rage bait.
I basically agree with you, but I do think it matters that the leader of one of the platforms
is using his platform where he has the largest audience of anyone on the platform to promote
these views as aggressively as he has been doing. But to your point, yes, the broader dynamics do
hold. If you go on Bluesky, you will see a very angry version of the left. Now, for what it's worth, what I have seen there has been far less violent than what I have seen on X. I have actually seen much more violent leftist content on X than I have on Bluesky. But I think your point
is a good one that this broader dynamic of rage bait everywhere polarizing us all the time
is not just an X phenomenon. And it does just seem to be how the app-based media ecosystem
works. Yeah. And like, to be clear, we don't fully understand all of the motives at play
behind this particular violent act.
I don't believe that anything
is ever just the internet's fault,
but it does feel like everyone has kind of absorbed
the logic of the platforms.
And there used to be people inside the platform companies
who thought about this stuff.
Like during the 2020 election cycle,
there was an effort at Facebook
to sort of take down the temperature of the news feed.
There was this whole set of what they called break glass measures. Basically, if it looks like the country is spiraling into some kind of civil war,
there are these levers we can pull and these knobs we can turn inside Facebook to like bring
down the level of discord and hostility on the site. And not all that stuff was implemented.
They very quickly sort of, you know, figured out that it was bad for engagement to do that.
But like, it is an option. And I wish there were more people at these platforms who cared about this and thought about maybe sacrificing some of their engagement.
But as we know, this stuff is very good for engagement.
And it turns out that last week, X had more first-time downloads in the United States than on any single day in its history, including before Elon Musk owned it. That's according to Nikita Bier, who works in product over at X.
So as violent and as awful and as rage-filled,
as the platform is right now, that does appear to be working for them,
at least by this one very shallow metric.
Yeah, this makes me want to talk about one potential solution
that I think has not been fully explored.
It would not be a complete solution.
It would just be something I'd like to see more platforms try.
And it's called bridging-based algorithms.
We've talked about them on the show before.
This is an idea from a guy named Aviv Ovadya, who's come on Hard Fork. It's probably been a couple years since he was here. He came up with the idea that is now at the heart of the community notes that you now see on
X and on Meta's platforms, right? You know how if you see something that's wrong on X, you can
add a note to it and say, well, no, actually, this is the truth and then link to whatever the
truth is. And the way that bridging-based algorithms work is they show them to people across
the political spectrum, right? They have various ways of figuring out sort of where you're aligned
politically. And they only show the note if people who are more on the left and more on the
right agree, right? They sort of see a bridge between the two of you. And they think, well,
if Republicans and Democrats both think this is true, this is likelier to be true. And so we're
going to show that note. And that is a way that using an algorithm, you can try to create some
consensus, right? You can try to build a bridge. I'm not aware of any platform that has tried to
do this in the sort of core feed. It may be that this is less engaging and a worse business
than the ones that we have today. But we do know that there is a way to build technological
systems that bring people together because they are already in use today on some of the
worst platforms on Earth. So I do think that there is an opportunity here for somebody who cares
to try to bring this idea to more places.
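To make the bridging idea concrete, here is a minimal sketch in Python of the kind of rule Casey is describing. It is not the actual Community Notes algorithm; the rater-lean scores, thresholds, and function names are hypothetical, and the sketch only illustrates the core idea that a note surfaces when raters estimated to sit on opposite sides of the political spectrum independently find it helpful.

from dataclasses import dataclass

@dataclass
class Rating:
    lean: float      # estimated political alignment of the rater, -1 (left) to +1 (right)
    helpful: bool    # did this rater find the note helpful?

def should_show_note(ratings, min_per_side=3, min_agreement=0.7):
    # Split raters by which side of the spectrum they are estimated to be on.
    left = [r for r in ratings if r.lean < 0]
    right = [r for r in ratings if r.lean > 0]
    # Require enough raters on each side before deciding anything.
    if len(left) < min_per_side or len(right) < min_per_side:
        return False
    left_agree = sum(r.helpful for r in left) / len(left)
    right_agree = sum(r.helpful for r in right) / len(right)
    # Surface the note only when both sides independently agree it is helpful.
    return left_agree >= min_agreement and right_agree >= min_agreement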
Yeah. Okay. So if that's the platform piece of it, what are you looking at now, looking forward?
Like, obviously, we've seen not just the kind of, you know,
culture war side of this, but we're now seeing a government crackdown on speech.
You know, we talked on this show back during the sort of early days of this Trump administration
about people like Brendan Carr, the chairman of the FCC,
who had explicitly sort of stated that one of his goals was to take the fight over free speech
to the broadcast networks, to the platforms.
We're now seeing things like Jimmy Kimmel
being put on indefinite leave
from his ABC show
after Brendan Carr went on a podcast
and condemned his comments
about how the right was characterizing
Charlie Kirk's alleged shooter.
So what are you thinking about the free speech
of it all?
I mean, this is just terrifying, full stop.
This is what the First Amendment is supposed to protect: political speech
and, in particular, offensive, tasteless political speech.
This is why we have
a First Amendment. And over the past decade, I've watched Republican after Republican complain
about policies on platforms that they felt did not enable free speech to the degree that they thought they deserved. Charlie Kirk himself was a big free speech warrior, and he was part of this culture war, arguing that conservatives were being disadvantaged on these platforms.
When the Trump administration retook power earlier this year, at first it was just essentially,
let's make things fair and balanced. So we'll approve your media merger, but we're going to put some
sort of minder at your company to make sure that you're not too mean to President Trump. That was
already a bridge way too far for me. Now for the government to come in and to threaten individual
broadcasters because of political speech that they made, we're getting pretty close to as bad as it
gets. So this is something that I think all Americans should be paying attention to because trust me,
you do not want to live in a country
where you no longer have the First Amendment.
Yeah, or to put an even finer point on it,
I've seen a lot of people pointing out
that, like, you really do not want to live in a country
where the comedians on TV cannot make fun of the president.
Yeah.
Like, that is rarely a sign of a healthy democratic society.
Yeah.
I also have been fascinated and disturbed
about this whole sort of crowdsource surveillance
and snitching culture that the Internet seems to have created.
Like, I'm sure you saw this.
But in the wake of the Charlie Kirk shooting, there were these kind of accounts that would signal boost, you know, people reporting their Facebook friends, their neighbors, the people who, you know, ran small businesses in their towns, saying, oh, this person made a tasteless joke.
And then they would, you know, get retweeted by libs of TikTok or they'd send it to some, you know, Charlie Kirk people saying mean things about Charlie Kirk database that would, like, sort of publish it.
And people would sort of write letters to their employers, urging them to be fired. And, like, that's not new behavior; that has been happening since at least,
you know, the mid-2010s with Gamergate, but this kind of incentive structure where, like,
social media and these sort of influencers are incentivizing regular people to sort of engage in
this kind of surveillance and reporting. I have seen that before, but I have not seen it
at this scale in this organized manner. So I wonder what you make of that.
I mean, to me, this is just another platform incentives thing, right?
This kind of surveillance and doxing is essentially a kind of video game that you can play
on X, and people like to play video games, right?
And because you're playing with people's real lives, it feels really edgy and cool and fun
for those who are participating in this.
And, you know, candidly, like, part of this is the flip side of free speech, right?
It's like, if you want to say that people have the right to express themselves online,
one thing that they can express is this person made a tasteless joke and I think that they can lose their job.
I'm willing to accept that as a fact of life as long as I still get to engage in political speech.
You know what I mean?
And so I think we also need to start thinking about what kind of tradeoffs we want to have in a world where these platforms are mediating so much of our reality.
Yeah, I think that's right.
I just want, I just wish I felt like there was any stabilizing force, any, anyone in any position of power right now who could sort of take down the temperature, turn down the knobs, sort of encourage people to like take a deep breath, either sort of, you know, literally or spiritually.
And like, I just, I am feeling very anxious. I've just noticed my anxiety over the past week has really gone through the roof.
I think it's because I just, I feel this sort of ambient rising of rage in our society. And I want
it to stop. I want people to take a deep breath. People should absolutely take a deep breath.
I would say if you're looking for reasons for optimism, we are now a week out from this shooting.
There have not been huge upsurges of violence across the country. I think people are very sad.
They're very anxious about the future. But so far, they're not taking to the streets with guns. And I think that speaks to the fact that the vast majority of Americans
do not want to participate in a violent culture war with people who disagree with them.
And that when we get most of our information from platforms, we are always at risk of forgetting that
because on a platform, it always looks like civil war is about to break out. So if nothing else,
I think, this is a moment for all of us to think about our media diets, think about the way that
whatever media we're consuming is manipulating our emotions, and then deciding if we want to
make a change there, right? Because perception can become reality in too many cases, and it just
might be a moment to find something a little bit calmer to help you understand what's going on
in the world. For example, a podcast hosted by two friends who mostly like to tell jokes about
technology companies. Just throwing that out there. I mean, you're joking, but I am thinking about
our role in this moment and what we do.
And I want to continue to encourage people to look around, to touch grass,
to not just see the world through this kind of refracted funhouse mirror of social media.
Because I think when you do that, it leads to some really dark places.
Wow, I couldn't help but end this one on a dark note.
Sorry, that's where my head's at.
I brought it to such a nice place, and then you were just like.
I know, and then I brought it back down into hell.
I'm sorry. I'm sorry.
Let's talk about something else.
Well, Casey, when we come back, we're going to talk with D.Y.B. about the UAE and GPUs and AI.
Well, Casey, there was another big story this week involving politics and tech.
And this one did not get a ton of attention relative to the Charlie Kirk assassination story,
but I thought it was a big, important investigation from my colleagues at the Times that deserved more attention than it got.
I completely agree with you. This one is a really extraordinary piece of journalism, and so I'm glad we're highlighting it today.
Yeah. So this story involves two seemingly intertwined deals between the Trump administration, World Liberty Financial, which is the cryptocurrency company partly owned by the Trump family, and the United Arab Emirates. And it raises just a ton of fascinating questions.
Yeah, but it's also a story about the future of AI and potentially AI safety. As you know, there are so many people all around the world who want to get their hands on state-of-the-art chips, but the United States has export controls in place that prevent
some of our adversaries or countries that we're just kind of worried about from getting them.
Yeah, and I think it would be a really big deal in any other administration. I think it would
have gotten, you know, months of airtime. It would have been all anybody would have talked about
for weeks on end. There would have been hearings and, you know, calls for impeachment and things
like that, but I think we've just gotten so inured to the sort of everyday business dealings of
the Trump administration and the Trump family and the various associates that it sort of passed
by without much notice.
Yeah, this is the sort of story that, in the old days, we would have been on Twitter saying, this is not normal, with a bunch of those clapping
hands emojis in between all of the different words.
So if you're a person who likes to know when this is not normal, we're here to tell you today,
this is not normal.
So to talk about this very abnormal story with us today is one of the reporters who wrote it, my colleague and friend of the show, David Yaffe-Bellany.
David Yaffe-Bellany, welcome back to Hard Fork.
Thanks so much for having me.
So, DYB, this story that you reported, this very interesting and complicated story, starts as all great tech stories do on a super yacht.
specifically a super yacht off the coast of Sardinia, where you wrote about a meeting that took place this summer between the owner of the yacht, a man named Sheikh Tahnoun bin Zayed Al Nahyan, a member of the ruling family of the United Arab Emirates, and Steve Witkoff, President Trump's envoy to the Middle East.
Now, before we get into the contents of this meeting, I have to ask, DYB, what is a super yacht, what makes it different from a regular yacht, and how can I get invited on one?
Doesn't it just have guacamole and sour cream?
Well, did you guys read that New Yorker story from a few years ago,
The Haves and the Hav Yachts?
That's very good.
That's my main reference point for all yacht-related questions.
And I believe that there are actually very formal distinctions
between yachts, super yachts, and mega-yachts.
Oh, wow.
And it has to do with the length of the vessel.
But clearly you guys aren't spending a lot of time on the coast of Sardinia.
I heard it wasn't about the length of the vessel, but how you use it.
It's about the motion of the ocean.
That's right.
Anyway, we're getting off track here.
We're getting off track.
Okay, so, DYB, let's talk about this meeting.
What were Sheikh Tahnoun and Steve Witkoff there on this yacht to discuss?
So the White House told us that this discussion centered on resolving international conflicts, essentially.
But what our reporting showed is that the relationship between these two men, you know, spans both diplomatic matters like that.
and also, you know, essentially a business partnership between Steve Witkoff's family company and Sheikh Tahnoun's sort of family investment fund in the United Arab Emirates.
Yeah, so your reporting, DYB, and that of our colleagues,
centers on these two big deals that took place earlier this year.
Tell us about these deals.
So one deal involved crypto, the other deal involved artificial intelligence.
The first deal, which is the crypto deal, the parties to that were World Liberty Financial, the Trump and Witkoff family crypto company, and an investment firm in the United Arab Emirates called MGX, which is essentially owned, run by the royal family of the Emirates.
And the deal was that MGX would use the World Liberty Financial stable coin to make a $2 billion investment in another party to this transaction, Binance, the big crypto exchange. And this was a huge deal for World Liberty because the company is trying to get
traction for its stable coin. And as you guys know, the way stable coins work is, you know, if somebody
is using the coin, then there are deposits backing it up, and the issuer of the coin can
invest those deposits in ways that generate a yield. And it's a very profitable business in the
crypto world. So that was the first of these two deals. You suggest in your story that just
by getting that $2 billion infusion of cash,
that World Liberty will likely be able to make
tens of millions of dollars in sort of interest?
Yes, exactly.
And companies like Circle and Tether in the crypto world
have found this business model is really profitable.
And frankly, not that complicated.
You just kind of sit on these deposits.
You can invest them in fairly safe ways
to generate a big return if the principal is large enough,
which in this case it was $2 billion.
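A quick back-of-the-envelope sketch, in Python, of why a $2 billion stablecoin float is so lucrative. The yield rates here are hypothetical, not World Liberty's actual figures; the point is only that parking reserves in safe, Treasury-like instruments at even modest rates generates tens of millions of dollars a year.

def annual_float_income(deposits_usd, yield_rate):
    # Rough annual income from investing stablecoin reserves at a given rate.
    return deposits_usd * yield_rate

principal = 2_000_000_000  # the $2 billion investment backing the stable coin
for rate in (0.02, 0.04, 0.05):
    print(f"{rate:.0%} yield -> ${annual_float_income(principal, rate):,.0f} per year")
# Prints roughly $40 million, $80 million, and $100 million per year, respectively.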
So tell us about deal two.
So deal number two is the artificial intelligence deal. And this is a tentative agreement between the United States and the Emirates for the U.S. to allow the export of American-made chips to the UAE. These are the most valuable, powerful chips that fuel AI. They're developed by Nvidia. And without them, you know, you can't really become sort of a tech powerhouse in the way that the UAE wants to be. And, you know, the Emirates has been trying to get
access to these chips for years and years. And the issue has always been that U.S. national
security officials worried that if we give these chips to the UAE, then they'll end up in the hands
of China because there are really tight ties between those countries. Yeah, tell us about some of
those ties. Joint military exercises, technology sharing agreements at the center of this
is G42, which is another kind of Emirati tech firm, also run by Sheikh Tahnoun.
And so, yeah, the fact that the Emirates is getting these chips was a big deal
and not necessarily an inevitability given, you know, the posture of the Trump administration
toward China and given the resistance in the Biden administration toward, you know,
kind of giving free access to these chips.
And there was resistance in the Trump administration as well to giving the UAE those chips.
Is that right?
Absolutely. This is something that we found in our reporting that there were disputes within the White House and people who were very concerned about the national security implications of allowing access to these chips.
You know, I should be clear, you know, this is like a preliminary agreement and there might be new security guarantees that are built in when those chips actually leave the U.S. and go to the UAE.
But at the moment, there are definitely concerns from some people in that national security community.
Right.
Right. So, David, let me just repeat back what I think I heard from you describing these two deals and maybe drawing some lines that maybe are suggested but not explicitly said in your reporting.
So correct me if I get any of this wrong. The UAE wants to be a global player in AI. They look at this technology. They say, we want to be one of the countries that's developing these sort of frontier models. But it has a problem. It can't get the chips that it needs to run these powerful models because,
Under the Biden administration, the export of the most powerful Nvidia chips was limited to the
U.S. and its allies. And the UAE, while technically, you know, not an adversary of the United States,
also does some stuff with China. And so people worried that maybe they would send some chips to the
UAE and China would get access to them. So the UAE can't get all the kinds of chips they want.
And then it plows $2 billion into a cryptocurrency controlled by the Trump family business,
and around the time of that deal, the Trump administration changed its posture about chips, rolled back some of these restrictions, at least in a tentative agreement, fired the, we'll get to this, but fired the main guy who was sort of advocating for these export controls and kind of lets the chips flow to the UAE as they had wanted. Am I missing anything important?
I mean, I would say that the main thing that we kind of say in our story and that is important to understand is that we did not find evidence of a kind of direct quid pro quo here. What the story basically shows is that these two deals involved many of the
same people. They were negotiated in roughly the same time period, and they were interconnected
in ways that people weren't aware of when these sort of back-to-back announcements happened in May.
I'll give you an example of that from our reporting. An employee at G42, the big Emirati tech
firm that wanted the chips, was simultaneously working for that firm and for World Liberty
Financial at the same time while all of this was going on. And so, you know, you see this sort of
intermingling and it raises all sorts of concerns about ethics rules, conflicts of interest,
you know, whether U.S. interests are being subordinated, you know, to the kind of commercial
priorities of the Emiratis and the families of the people in the Trump administration. All of that
is at play in our story, but we certainly can't say that there was, you know, an explicit quid pro quo.
You can't say that, but, I mean, just this character of Steve Witkoff, who is President Trump's envoy to the Middle East, this longtime golf buddy
and friend of Trump's. At the same time, he is the envoy to the Middle East, where he is supposed
to be representing America's interests. He is also, what, a co-founder of World Liberty Financial,
which is about to reap the rewards of this $2 billion investment. Is that right?
Yeah, absolutely. And, you know, still has a financial interest in World Liberty, you know,
even now. I mean, his financial disclosure form was released just a couple of days before the
story came out, and it showed that he still has an interest in World Liberty, despite statements by World Liberty that he was going to divest. We were told he's still in the process of divesting.
Yeah. How unusual is it for an envoy like this to also have an interest in a business that is
benefiting directly from a foreign government he is directly dealing with as a diplomat?
These sorts of overlaps between, you know, personal business and government responsibilities
don't have that much precedent in the U.S. This is a new thing in the Trump administration
that's sort of becoming increasingly common. I mean, these lines are blurring. I mean,
even before the Trump administration started, back in December, Steve Witkoff flew to the Middle
East, you know, was taking diplomatic meetings, he appeared at a crypto conference where he met
with a World Liberty Financial investor, you know, at that conference, he was sort of presented
as the U.S. Middle East envoy, even though, you know, Trump hadn't actually taken office yet.
This sort of blurring of lines started immediately, and, you know, it's continued throughout
this administration.
Now, let's talk about some of the other figures involved in this story. In particular, I want
to ask you about the role played by David Sacks, who's the AI and crypto czar in the Trump administration. He was also, as you report, involved in some of these deals. So what was David Sacks's role in all this? So David Sacks, he's the AI and crypto czar, and, you know, he's interesting
because, you know, similar to Witkoff, he has a foot in both worlds. He's simultaneously working
in government, and he has this tech job at Craft Ventures, his VC firm. And what we found in our reporting is that he was one of the driving forces behind the chip deal with the UAE.
He was pushing forward aggressively in public and also behind the scenes, you know, within the White House.
And that raised concerns from some of his White House colleagues about potential conflicts of interest.
The reason being, you know, Sacks is a tech investor.
He's still working as a tech investor.
His company has invested in AI ventures in the past.
And one of the original backers of Craft was another kind of Emirati investment firm that is currently chaired by Sheikh Tahnoun.
And so those concerns sort of bubbled up within the administration while this deal
was being negotiated.
I should also say that, you know, Sacks received ethics waivers from the White House that
basically said, you know, you're divesting from a lot of these stakes, you know, what's left
is sort of de minimis and isn't going to affect your work and sort of it's okay for you
to kind of play in these areas that kind of intersect with your business.
So he did get that sort of permission slip from the Trump administration, but there were
still concerns from colleagues. I have to say, very little is funnier to me in this moment than the
idea of getting an ethics waiver from the Trump administration. Just what is that process like?
Okay, I have one more character that I need to ask you about, David, who is another David, David Feith.
He was Trump's senior director for technology on the National Security Council. So a member of the Trump
administration who was known inside the government as being sort of the export controls guy for chips, the guy who thought it was a really bad idea for the U.S. and its companies to be shipping the most
powerful AI chips overseas. This was a view that he shared with some folks in the Biden
administration and its National Security Council. And he was fired earlier this year as part of sort of a clearing of the house at the NSC by President Trump. So what did you report about
that firing and what may have motivated it? I mean, that was a crucial moment
during these chip negotiations
where a kind of potential roadblock
to a deal going through
was abruptly removed
and it kind of cleared the field
in a way that allowed David Sacks
to eventually kind of take control
of these negotiations.
So it was definitely a good thing
for the Emirates
and their priorities that this happened.
The reason that he got fired
is that Laura Loomer,
the kind of right-wing agitator,
had a meeting with Trump
in which she urged the firings of Feith and several other national security colleagues. And the reason that Loomer gives for this is, you know, partly that Feith's father, Doug Feith, was sort of aligned with the kind of neocon,
sort of Bush-era Republican Party, which she sort of advocates against. And so she essentially
argued, you know, to Trump, like, you've got to get rid of all these people for that reason.
We should also say, for people who are not familiar with Laura Loomer and her work, this is a woman who has been banned from every major tech product. Including my favorite Laura Loomer fun fact: she's banned from both Uber and Lyft for being too racist. Do you know how hard
it is to get banned from Uber and Lyft for being too racist? Think about how many racists use Uber
that still have their accounts. Anyway, so Laura Loomer, for some reason, gets a bee in her bonnet about this guy, David Feith, and sort of talks to Trump, and then Trump fires him and a number of his other colleagues. This is the part of the story where something, a light bulb, went off for me, because I remember earlier this year when the Trump administration sort of seemed to change its posture on AI and chip proliferation very suddenly, right?
This was an administration that had a lot of people in it that were very hawkish on China that saw us as being in a kind of race with China to develop powerful AI systems.
And if you are in a race with your biggest adversary to develop a technology that is dependent on powerful chips from Nvidia, you do not want your adversary getting those chips from Nvidia. Like that seemed to be a pretty accepted part of the Republican tech policy
platform. And then all of a sudden, the tune changed. Then all of a sudden, David Sacks and
other people in the administration started talking about how it was actually good if these American
Nvidia chips got to China and all of our other adversaries because we wanted companies and
governments to be building AI on top of our technology. That was sort of their America
first strategy. And I could never quite understand why that change had occurred so quickly or what
was motivating it. And David, your story helped me kind of connect some dots there that had been
missing. Yeah. I mean, that sort of moment where the kind of Trump administration's tone on
this shifts is kind of really important to what's going on. And again, I mean, like this reporting is
really difficult and complicated, and I'm sure there's more about these two deals that will,
you know, come out over the coming years. You know, and we don't know exactly what went into
all of the sort of decisions behind the scenes, but, you know, I think what we were able to
show is sort of, you know, broadly the timelines of how these two deals developed and how they
intersected. Yeah. I mean, tell us, give us a little flavor of the behind the scenes
reporting process, because there are a lot of bylines on this story. It's you, Eric Lipton, Bradley Hope, Tripp Mickle and Paul Mozur. What can you tell us about how this story came together?
I mean, this is one of those kind of big projects that require a lot of collaboration
among different departments within the Times. I mean, not only is it, you know, five different
reporters on the byline and like a bunch of others who contributed as well, but, you know,
three different continents, you know, people in different parts of the world with access to different
types of sources. The reporting for the story really began for me when I was in Dubai in
late April and early May, which is when Zach Witkoff, Steve Witkoff's son, went on stage at a
crypto conference and kind of announced this $2 billion deal. And I was there in the room for that
and kind of covered it at the time. And from the moment it happened, we wondered, you know,
how did this come together? I mean, it was just so unusual for a foreign leader to be channeling
money to the family of a top White House advisor. I mean, that's just not the sort of thing that happens in the United States.
I mean, in a way, these sort of interconnections
between business and government
are more kind of familiar to the way
business is done in the Persian Gulf
and has been for years. And one interesting thing
that Zach Witkoff said on stage at that
event is that we should really
take a page out of the Emirati
royals' book, essentially.
You know, we should try to be more like them.
So that's when the reporting started for me, and then, yeah,
became this really fruitful collaboration
among people with different expertise, you know, all over the world.
Did you get to try any of that Dubai chocolate while you were over there?
You know, I saw the Dubai chocolate.
I looked at the price tag.
I wondered whether it would fit within the New York Times expense policy, and I passed.
I didn't get an ethics waiver to buy the Dubai chocolate, yes.
Got it, got it.
How has the White House responded to this reporting?
What did they say?
Did they acknowledge any of what you had reported was newsworthy or interesting?
Both the White House and World Liberty Financial told us that there was no connection between these two deals.
One was government business.
One was private business.
They weren't connected.
That was a statement that both those kind of parties made.
The White House also told us that, you know, David Sacks didn't have any financial stake in the UAE chips deal, that he didn't know the key Emirati players in that deal before it came together, and that he was really just focused on advancing the administration's priorities.
And then the White House also told us that Steve Witkoff is, you know, working with ethics lawyers to make sure that he is in compliance with all the relevant rules and that he's still in the process of divesting, eight months into the administration.
So that was basically the response.
And I should say, you know, the Emiratis also told us, you know, a G42 spokesman said that, you know, they have protocols to protect against conflicts of interest and that type of thing.
Right, so let's kind of take a step back from this and zoom out a little bit. David, you've been covering crypto for a long time. You've been on this show many times to talk about the various entanglements that the Trump administration has with the crypto industry. It seems to me like what this story really illustrates is that crypto has become the primary vehicle for influence in the Trump administration, that basically people around the world are starting to figure out that if
you want to change the Trump administration's mind about something or influence its policymaking,
one way you can do that and potentially a very direct and effective way is by buying a bunch of
stuff that the Trump family has a business interest in through this World Liberty Financial
crypto company. Is that true? It has certainly become a pattern in the crypto industry that,
you know, players that want something from the U.S. government and virtually everybody in the
crypto world wants something. It's such a kind of nascent area that, like, there are new regulations
in the works and all sorts of things that everybody's pushing for. And so many of these players
are now connecting themselves to the Trump business in some way, whether that is investing
in the World Liberty coin or like attending the meme coin dinner. And again, in none of these
cases, can you point to like an explicit sort of quid pro quo, but it's just become part of the
way that business is done in the crypto world, is that you at least consider, you know,
some sort of collaboration with the sort of Trump family apparatus.
It's sort of like, it reminds me of during the first Trump administration when, like,
there were all these stories about the Trump Hotel and people choosing to hold their big
events there and their banquets and their dinners and, you know, book big blocks of rooms
when they would go visit Washington, in part because they thought that, like, Trump or someone in his
family might notice, hey, these guys are really supporting our hotel here. This is just a much more
direct route to getting the attention of the Trump administration. If you say, I'm going to buy
$2 billion worth of your stable coin from you. I mean, it's really hard to book $2 billion worth
of anything at one hotel, but you can go online and buy $2 billion worth of a stable coin.
It's a good, the hotel is a good reference point. It is sort of like the Trump Hotel on steroids,
and it's way more global, too.
I mean, anybody anywhere in the world
can buy the Trump meme coin
or buy the various tokens associated with world liberty.
And, you know, in previous reporting,
we found that there are huge numbers
of foreign buyers of all of those things.
And remember, foreigners are restricted
from making campaign contributions.
So this is a way for people in other countries
to support the president, you know,
without going through the channels
that have traditionally existed and been kind of restricted to them.
David, I'm curious, what has the reaction been to this story?
The reaction's been really interesting,
and it's been similar in certain ways to how, you know,
previous reporting we've done on conflicts of interest
within the Trump administration has been received.
I mean, a couple of Democratic senators, you know,
put out statements, sort of condemning these sorts of business dealings.
You know, you've got various kind of good government groups,
ethics lawyers sounding the alarm over this. But, you know, ultimately in the Trump era,
those Democrats and those sorts of ethics lawyers are kind of impotent. I mean, they don't have
the ability to, you know, turn this into kind of a giant investigation. And so there's a limit to
what can happen. I mean, we did a previous story about World Liberty Financial and the conflicts
of interest associated with it. And that contributed briefly to a delay in the stable coin
legislation moving through the Senate, you know, as Democrats sounded the alarm, but ultimately that
legislation passed and with Democratic votes. So it sort of remains to be seen whether, you know,
the left will take a major stand on this issue. Yeah. Well, David, fascinating reporting,
really, really important work. And I hope you keep going. And next time, bring us some of that
Dubai chocolate. All right. Thank you so much. Thanks.
When we come back, an update on Italian brain rot and other news from the week in AI.
Well, Casey, that's enough politics for today because, as with seemingly every week, it has been a huge week in AI.
So it's time for our segment, this week in AI.
This week in AI.
Now, before we get into the big AI news of the week, let's hit our disclosures.
I work at the New York Times, which is suing OpenAI and Microsoft for copyright infringement.
Yes, and this week in Boyfriends, my boyfriend works at Anthropic.
This week in Boyfriends.
Honestly, it would be a great segment.
Boyfriends are always up to something.
Okay, well, Casey, the big AI stories of the week, let's go through them.
Number one, Business Insider tells reporters they can use AI to generate first drafts of articles.
This was first reported by Oliver Darcy in his great Status newsletter.
This is about an internal memo that went out to the staff of Business Insider last week from editor-in-chief Jamie Heller telling them that they could use AI not just for research and
various other things, but for writing the first drafts of their stories as long as reporters,
quote, make sure your final work is yours. Casey, what did you make of this?
Very curious what she means by make sure your final work is yours. If the AI did all of the
research and wrote the first draft, you, what, changed the placement of a couple of commas.
Look, I'm very curious to see what happens with this one.
The cynic in me suspects that this sort of thing is going to become a lot more common.
And as AI writing tools improve, I think more reporters are going to feel like maybe I should let it take a stab at a first draft.
I should say for myself, I do use AI systems to edit my copy, so not to do the writing, but to catch the typos and some of the factual errors.
So there is kind of a spectrum of what you and I think is, like, okay right now.
This goes further than I would, but this just seems like one of those frontiers that keeps advancing
and before long a lot of other folks are going to do it.
You know, right now I think that letting reporters draft the first versions of their articles with AI
is actually kind of risky from like a reputational standpoint because, look, the AI writing
still has sort of various stylistic flaws in it.
You can kind of tell when you're looking at something.
I can always tell. I don't think you can always tell, but I think you can sometimes tell.
And I think that's a big problem because if users, readers start to sort of find evidence of
AI generated content on your website, I think they're just going to start losing respect for you
as a publication. But I think probably at some point, the writing will get good and then we won't
be able to tell anymore. And I think this will just sort of become part of the background noise
of the internet. I don't know. If you're a reporter and you start doing this, I think you have to
ask yourself the question, am I participating in automating away my own job, right?
There is not that far of a bridge from AI is writing the first draft of my story to AI just
has the job that I used to have. So this is Business Insider playing with fire.
Well, it's really interesting that Business Insider in particular is the outlet that's sort of pushing the vanguard here, because Business Insider and several other publications
had to take down articles from their sites earlier this year when it was discovered that they
had published sort of fabricated stories that supposedly came from a freelancer who had generated
them using AI. So they want their own staff writers to use AI to create the first drafts of
stories. But if freelancers use AI to submit fake stories, that will apparently not be okay.
Oh, I just have, that's very stupid. The whole thing, this whole thing. We're just sort of in
stupid town right now. Well, Casey, to continue on the theme of AI and media, I want to bring you
the next story this week in AI, which is about Penske. Penske, the parent company behind Rolling Stone
and the Hollywood Reporter, is suing Google over its AI overviews. The company says that Google is
illegally using its reporting in the AI generated summaries, and as a result, is depressing its
online traffic. Penske claims in the suit that revenue from affiliate links, that's the kind of, you know,
e-commerce link that supports a lot of the sort of websites that recommend products online.
They say that the revenue from affiliate links was down by over a third at the end of last year compared to its peak,
and they attribute that directly to a drop in traffic from Google.
Casey, what did you make of this?
So the stat that you just pulled out to me is the most interesting part of this lawsuit.
We talk to Google all of the time, and we've been asking them for over a year about this exact possibility, right?
What happens as you shift more people to AI overviews?
What happens to all the publishers that rely on our traffic?
And we reliably get this message from Google, you know, Casey,
Kevin, we send billions of clicks every single day.
It's very important to us to have this thriving ecosystem.
But then you get down to the level of the individual publisher, and it's like, well,
a third of our revenue is gone, right?
So I'm very interested to see where this case goes.
You know, I can't say for sure that what Google is doing is illegal, but I do know that
this is bad for the overall health of the web and that Penske is not the only suffering
publisher here.
Yeah.
All right.
Well, Kevin, this one caught my eye.
Albania has appointed an AI minister.
Yeah?
This comes from the BBC.
Albania has a new AI minister named Diella, which apparently means sun in Albanian, sun like the star, not the child.
And Diella is going to be tasked with ensuring that Albania will become, quote,
a country where public tenders are 100% free of corruption.
So an anti-corruption minister that is made out of AI.
What do you make of that?
Well, I was going to say that sounds like a terrible idea, but after the story we just heard from David Yaffe-Bellany, I'm ready to replace all of our ministers with AI too.
I do think that, like, ChatGPT in general has a stronger moral compass than some of the people now running the U.S. government.
I'll say that.
No, I mean, look, this is one where I think AI can be useful for a lot of things.
I'm not sure it's quite good enough yet to put into a role monitoring public corruption.
Totally.
I mean, do you remember when we had that guy on who ran for, what was it, mayor, using AI?
Yeah.
And he was just kind of going to be the human face of the chatbot
and, like, you know, cast all of the votes that the chatbot recommended
and put in place the policies that the chatbot suggested.
I think there's probably some meaningful percentage of lawmakers today
who are using ChatGPT to help them decide what to vote on and how to vote
and, you know, how to run their piece of the government.
And so, yeah, I think it is inevitable that someone will just say, let's cut out the middleman and just put the AI in power directly.
Well, at least in this case, do you know who I would have put in charge of overseeing corruption in Albania?
Who?
A person of prominent Albanian heritage, Dua Lipa.
Really?
I didn't realize she's Albanian.
Yeah, absolutely.
I think she would have done a great job.
All right.
What else is in the news, Kevin?
Well, our next story comes to us from the world of Roblox.
That is, of course, the hit game-creation platform, where, according to several accounts that track Roblox activity on X, a new game
recently broke the all-time record for most concurrent players on the platform. Something like
23.4 million people were simultaneously playing a game based off the viral AI sensation of Italian
brain rot. Italian brain rot, I think, one of our favorite finds of the year. And you're telling me
it's a video game now. Yes. So this game, which, I guess you can build games inside Roblox, I'm not myself a big Roblox user, is called Steal a Brainrot,
and I did spend some time on YouTube trying to understand this phenomenon yesterday. I watched
a video about a guy who spent 24 hours playing Steal a Brainrot inside Roblox.
That sounds good for you.
Yes. The video, thankfully, was not 24 hours long. But basically, this is a video game that
users of Roblox, or one user in particular, has created where you kind of just go around stealing
people's characters from the Italian brain rot franchise, which, as you will remember, you
introduced me to earlier this year, and which was created on TikTok as kind of a self-referential form
of zoomer humor. So is this kind of like a play on Pokémon, except instead of stealing
Bulbasaur, you're stealing Tung Tung Tung Sahur and Ballerina Cappuccina? Yes, you can go around,
you can buy various characters with your Robux. You can buy a Nubini-Pizanini
and use it to do various things.
You can then go around stealing other people's characters that they've bought.
This has become a huge sensation.
23.4 million people is massive.
And I think this is a phenomenon that is essentially invisible to people who are over the age of 18.
But if you were a teenager, this has been a very big deal.
And I think it goes to one of the things that you've been saying on this show for a while now,
which is that there is just a generation of people who love AI-generated content,
who are not bothered at all by any of the moral or ethical or copyright concerns, who are just like, yes, I love Beninini chimpanini or whatever.
Kevin, it's Chimpanzini Bananini. Please show some respect.
And I am going to devote many hours of my life to playing a video game in Roblox inspired by these characters.
Very interesting. We'll have to give that a shot.
I have one other wrinkle to bring you. This is a late-breaking wrinkle in this story, which is that apparently the Tung Tung Tung Sahur character
has disappeared from the Roblox game due to a copyright dispute.
Really? Who owns the copyright on Tung Tung Tung Sahur?
Is he part of the Marvel Cinematic Universe now?
Well, according to Dexerto, this character was removed from the game by an agency representing the creator Knoxashd, who claimed that the Steal a Brainrot developer Sammy pulled the character while legal discussions over licensing were ongoing.
Wow. What more iconic move could you make than literally stealing Italian brain rot from the Steal a Brainrot game? I think that person won the game. I have to say, congratulations.
Kevin, here's a more serious one I'm curious to get your thoughts on. OpenAI is launching a new ChatGPT experience for teens. So over recent weeks, as we've talked about on the show, there has been more scrutiny of OpenAI over the way that younger and more vulnerable people are using the platform. Some of those people have died by suicide. There was a hearing in the Senate about these issues this
week. On Tuesday, OpenAI said it's building a system that will try to identify users who are
under 18 and will give them a more limited version of ChatGPT that will block graphic sexual
content and in rare cases potentially involve law enforcement. So what did you make of this story
when you saw it? Yeah, I thought this was largely a positive development. Like I am very
worried about the increasing frequency with which young people especially are using these chatbots
as sort of emotional-support companions and are forming these intimate relationships with them.
And as we've discussed, like for some set of them, this does really appear to be taking a toll on
their mental health. So I am all in favor of OpenAI doing what it can to determine which
of its users are under 18 and to give those users a different experience. They also have been
under some pressure. Last Thursday, the FTC launched an inquiry into these AI chatbots, including
OpenAI's, and the company is also facing a lawsuit from the family of Adam Raine, a 16-year-old boy
who died by suicide after talking with ChatGPT. So this did not come out of nowhere. They have
been under a lot of pressure to do something for their younger users, but I think this is probably
a good and overdue step. Yeah, I agree with that. At the same time, I worry
that this truly is not going to be enough.
You know, right now, I think one reason why younger people and vulnerable people are
talking to and sharing their most intimate thoughts with ChatGPT is because they're
confident that those communications are private.
Now that they know that those could potentially be shared with their parents, with law enforcement,
my guess is those teens and vulnerable people are simply going to go elsewhere.
They're going to go to other platforms that don't have those same protections.
So while I agree with you that this is the right thing for OpenAI to do, I think we need to look around the industry at platforms that
do not have these protections and ask them, well, what are you going to do? And even after all of them
do it, there's the risk that teenagers and vulnerable people are just going to start talking to
large language models that live on their laptops that don't communicate with anyone, right? So
the horses are sort of out of the barn here in a way that makes me nervous, but I am happy to see
at least one company doing something about it. Yeah. And look,
I think this is very much a secondary element to the story here, but I am also interested in how
OpenAI is going to try to determine the ages of its users. It is apparently not going to do
what lots of other internet services do, which is just put a little box on the sign-up page that
says, hey, tell us how old you are, when your birthday is, and then trust that people are not
going to sort of lie their way around that. It is instead going to try to infer a user's age
from the things that they're chatting with ChatGPT about. So if you're asking about your
eighth-grade math homework, it might flag you as being, you know, underage
and show you the kind of nerfed youth version of ChatGPT instead. I did see some speculation
that maybe teenagers are going to try to get around this by talking about like back pain and mortgage
interest rates and things like that. So the system thinks they're middle aged.
Say, ChatGPT, my wife, she's busting my chops. You got to help me. But I think we'll have to
see how that goes. All right. Well,
Here's one more, Kevin.
Both OpenAI and Anthropic put out new reports this week
on how people are actually using their chatbots.
Yeah, I thought this was really interesting.
This is a kind of data that's been relatively sparse.
We just don't know a lot about how people are actually using these chatbots.
And the companies that make the chatbots are the ones, of course,
with the access to the best data about that.
But we got reports this week from both OpenAI and Anthropic
who talked about the various ways that people are using their models.
We should also say that these are not exactly apples to apples. OpenAI's report just looked at the sort of consumer version of ChatGPT, so not counting all of the business and enterprise users who are using it through the API. Anthropic's report did include both the sort of consumer Claude version as well as the sort of more enterprise-focused coding tools. But let's talk about a few things from this. One thing that stuck out to me was that with ChatGPT, there is a
closing gender gap in usage. So a year or two ago, it was the case that many more men than
women were using ChatGPT, but now, according to OpenAI, more than half of ChatGPT's users
appear to be women, which is surprising to me. That's so interesting. Now, did they share any
speculation on what has managed to close that gap? No. The researchers who conducted this
study did not give any hypotheses for why this gender gap has narrowed. But they did say that
by July of 2025, the share of ChatGPT users with sort of typically feminine names had risen to
more than half of all ChatGPT users, 52%. They also said that this has become a fast-growing
tool in low and middle-income countries. Maybe there's some sort of relationship there. They said
that by May 2025, ChatGPT was growing much faster in the lowest-income countries than in
higher-income countries. So there seems to be kind of an expansion of use of ChatGPT
both among women and people outside the sort of richest Western countries.
It's interesting to see. I mean, and I would say that if there remained this huge gender gap
that we were seeing a year or two ago, to me, that would be like a really, like, negative
signal about AI, right? Because if this technology,
is as useful and general purpose
as executives are always telling us that it is,
there should actually be relative gender parity
in terms of who's using it.
Yeah. What else was in this report?
Another interesting thing from the OpenAI version of this report
was that something like 73% of ChatGPT messages
are not work-related.
This was surprising to me.
I thought that maybe the usage of ChatGPT for sort of personal
and work-related tasks would be split pretty evenly.
I certainly would guess that my own usage of ChatGPT is about half work and half non-work.
But apparently, the non-work-related use cases of ChatGPT are growing really quickly,
and they now make up the majority, about three-quarters of all ChatGPT usage.
I mean, to me, that just speaks actually to the challenge that Google has here, right?
Which is, I think that a lot of those queries are things that used to go into the Google search engine,
that weren't about work, but were just sort of like, hey, how do I fix my doorknob,
or, you know, tell me about Canada because I'm going there next week.
And all of a sudden, ChatGPT is serving a lot more of those queries.
Yeah, actually, the number two use case in this study is as a replacement for web search,
what they call seeking information.
What do you think the number one use case of ChatGPT is?
Cheating on homework.
No, they did not break that out as its own category.
It's what the researchers called practical guidance: basically, help me make this decision, help
me think about this conversation. It's sort of the more kind of companion-like use cases that
we just talked about being sort of worrisome. No, the things we used to rely on friends and family
for, but no longer have to. Exactly. So that's the OpenAI report. The Anthropic report
emphasized something totally different. They found that adoption of Claude was being driven
primarily by coding and what they called automation usage patterns in businesses using Claude through
the API. Anthropic also found that their highest Claude use per capita was in Singapore and
Israel. And within the U.S., the leading per capita users of Claude were not in California or New York,
as you might expect, but in Washington, D.C. and Utah. The Mormons love Claude. How about that?
What this suggests to me is that OpenAI and Anthropic are really diverging in who their
customer bases are. OpenAI wants everyone. It's like a ChatGPT in every house, on every phone.
Anthropic is increasingly going after the businesses and the enterprises. And so my assumption is
that as you sort of fast-forward in time, you're going to see that trend sort of continue,
where ChatGPT becomes sort of ubiquitous and Anthropic becomes a thing that you use if you're
building something with code. Yeah. So let's keep tabs on this data over time. This is a kind of data that I
think we need more of from all of the big companies. And I think it's important that they do this
collection and sort of analysis in a privacy-protecting way. Both Open AI and Anthropic have said,
you know, it's not like we are looking line by line at users' conversations. They've sort of built
these tools to sort of anonymize and collect, you know, insights about users' conversations without
peering into them. But I think this is very valuable. We actually do need to know what people
are using these things for. I'd really love to see it for Grok, by the way. What would come
out ahead, would it be
gooning toward anime companions
or racism?
I think it would be one of the two.
Yes, there's a strong uptick in usage
from the Mecha Hitler
adjacent region.
What's that about?
What's it about?
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact-checked this week by Will Peischel.
Today's show was engineered by Chris Wood.
Original music by Elisheba Ittoop, Marion Lozano, Sophia Lanman,
Pat McCusker, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther,
Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at YouTube.com
slash hard fork.
Special thanks to Paula Szuchman,
Pui-Wing Tam,
Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com
with which Italian brain rot you'd steal.
Thank you.