Hard Fork - GPT-5 Backlash + Perplexity C.E.O. Aravind Srinivas on the Browser Wars + Hot Mess Express
Episode Date: August 15, 2025

OpenAI spent the week responding to outcry from users who miss the behavior of the old ChatGPT, before the latest flagship model was released. We discuss the criticism, why it caught the company by surprise and what it indicates about the deepening emotional relationships that people are forming with chatbots. Then, Aravind Srinivas, the chief executive of Perplexity AI, joins us to discuss his company's new artificial intelligence-powered browser, Comet; his company's bid to buy Google Chrome; and what the future of the internet looks like when users turn to A.I. assistants to browse the web for them. Finally, to cap it all off, we rate the craziest tech stories of the week in our game Hot Mess Express.

Guests: Aravind Srinivas, chief executive of Perplexity AI.

Additional Reading:
- Chatbots Can Go Into a Delusional Spiral. Here's How It Happens.
- Three Big Lessons From the GPT-5 Backlash
- A.I. Start-Up Perplexity Offers to Buy Google's Chrome Browser for $34.5 Billion
- Elon Musk Threatens to Sue Apple Over Claims It Favors OpenAI
- U.S. Government to Take Cut of Nvidia and AMD A.I. Chip Sales to China
Transcript
I saw something new this week.
What did you see?
So I was on a flight.
I went to the East Coast for a wedding last weekend.
And on the flight back, I saw a woman play Balatro, the mobile phone game, for six hours.
Honestly, one of the least surprising things you've ever said to me on this podcast,
because I've absolutely played Balatro for multiple hours in a row.
She did not look up.
She did not get a drink.
She did not go to the bathroom.
She was locked in
to her phone for the entire flight
and I think this game
should be outlawed.
I've never even, like, really played Balatro.
You tried to get me into it.
But something that they're putting in that game
is driving people to madness.
It's the,
it is the perfect phone-based game
because it can fill up any amount of time
from 30 seconds to six hours.
You know, like, and that is just a precious thing.
So I have wasted many hours
on a flight with Balatro.
And for what it's worth, I do not experience this game as something that's, like, so addictive that I can't put down.
I experience it as, oh, I got some time to kill.
I know the perfect thing that will help me do that.
But as soon as, like, you know, I'm with a friend, like, I'm not thinking, oh, I've got to get back to Balatro.
Yeah.
Actually, one time my boyfriend's friends were over, and there was a lot of, like, discussion back and forth about what kind of takeout we should order.
And it was just kind of clear that I was not really going to be steering this decision.
And I just kind of, like, started thinking, you know, I'm halfway through a Balatro run.
I might, like... and so I got my phone out of my pocket, and, like, I played a couple of hands.
And then afterwards, my boyfriend was like, it would be great if you didn't play Balatro while my friends were over.
And he was right.
And I apologize to you.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the backlash against GPT-5 and what AI companies
are learning from the fallout.
Then, Perplexity CEO Aravind Srinivas
returns to the show to discuss his $34.5 billion bid
to buy Google Chrome.
And finally, I hear that train a-comin', Kevin.
The Hot Mess Express has returned.
Chugga chugga choo choo.
The caboose is loose.
Well, Casey, it's been a busy week on the internet
for AI companies and the backlash to them.
That's right, Kevin. Basically every day since we were last in the studio, there has been a big piece of news, most of it related in one way or another to GPT-5.
Yes, let's talk about the GPT-5 backlash, because I think it is so interesting for a number of different reasons.
It is also extremely complicated to follow. It feels like everything changes every 24 hours.
So can you just walk me through what has been happening since we last taped last week?
Well, at a high level, Kevin, I think OpenAI was caught by surprise
by some of the negative reactions to GPT-5,
really less about the model itself
and more about some changes that they made to the product,
taking away some legacy models,
putting limits on how the product could be used.
And so over the past week,
the company, over a series of changes,
has tried to address some of those criticisms.
And I think the outrage has actually been quite revealing.
Yes. So let's get into it.
But before we do, we should make our disclosures.
The New York Times Company is suing OpenAI and Microsoft
over copyright violations related to the training of large language models.
And my boyfriend works at Anthropic.
Okay, so Casey, last week we talked about GPT-5, what it does, how it might be better, how it might
be a little bit worse.
You gave us your first impressions.
I've now had a little time to play around with GPT-5 myself.
So let's start with that.
Has your own assessment of GPT-5 changed at all in the past week?
I would say yes.
And actually, mostly for the better.
I think the more time I've spent with it, the more I'm just figuring out what it's good
at. Like three things that I would highlight quickly. One, the fact that it is faster than its predecessor
means that I use it more. Two, I think it gives better follow-up suggestions. So now it'll do things
like if I ask it about some current events thing, it'll say, hey, do you want me to like keep track of
this? I can like, you know, email you as there are updates to the story. That's super useful.
It didn't use to do that. And then finally, while OpenAI touted the fact that they were going to
take away this model picker that we were all using to say, well, we want you to think this hard or
don't think hard or we want it fast or we want it really complicated. They said, don't do that
anymore. We'll sort of automatically route it. What I figured out over the past week is I actually
do still want to use the model picker and I'm going to sort of decide for myself how much I want
GPT to think. I'm having the same experience. I thought it was pretty smart of OpenAI to deprecate
the model picker. But then I just found myself getting extremely annoyed by the way that it would
route my requests. I always seem to get routed to, like, a dumb, fast model. It was almost like
you are walking into a room and there was like a curtain and like behind that curtain is like
either a guy with a PhD or like some idiot.
Wait, okay, I feel like there's some hyperbole here, because were the
responses really dumb, or is it that you were looking for a more thorough response than
the one you were getting?
Yes, to be fair, I was not getting like dumb answers, but it's like there are real quality
differences between these high-end reasoning models and the sort of lower-end, cheaper,
faster non-reasoning models.
And so I just kind of felt like I was just rolling the dice
every time I would give a query to ChatGPT.
Now, they have since made changes to that.
So you can now select the models again
because of some of the backlash
that we're about to talk about.
So I'm having a better time now that I can do my model selection.
But I also think, like, I am probably not a typical user.
You are probably not a typical user.
Most people probably don't want to make a decision like that.
I think that's right.
And the fact that we're not typical users,
I think, is one reason why we did not predict
a lot of this backlash.
I did say last week that I was worried about this model picker
and the fact that it might route people to the cheapest answer
in ways that were annoying to them.
The rest of it, though, I got to say, I missed.
So let's get into what the people didn't like.
Yeah, so let's tackle the GPT-5 backlash in two categories, right?
Because I think there are really two flavors of complaints
that people are having about this model.
The first category, I would say, is like the professional users,
people who use this stuff for productivity enhancements, for work,
people complaining that basically GPT-5 has broken some of their workflows, people complaining that they have fewer queries per week for these reasoning models, for the plus tier subscribers, and just some users insisting that they are not getting as good answers out of this new model.
Yeah, and I have to say, if I could run, like, one blind taste test, it would be this. It would be to label the same model differently and tell some people, okay, this is GPT-4o, and
this is GPT-5, and in reality, it's the same model, and then see what they say after running
different queries on them, because I'm actually quite positive that some of them would say,
oh, no, no, 4o's good, 5 sucks, right?
And that just gets at how some of these things are very subjective.
And when you are releasing them to hundreds of millions of people, people are just going to have
a very wide range of experiences.
So, well, I definitely think there are lessons to learn here.
I do think that a big takeaway from all of this is just: a lot of people use ChatGPT.
And when it's in that many hands, you just get a very wide variety of responses.
Totally.
So now let's talk about the other flavor of backlash to GPT-5, because I think this one is the one that I was the most interested in, that seemed the most unexpected to me, which is that people really miss GPT-4o.
One of the things that OpenAI did when they announced GPT-5 was they said, we're going to go ahead and get rid of this older model that is no longer our top-of-the-line model,
and people were really upset about this.
Yeah, and this, again, just took me a bit by surprise
because I always find the OpenAI models
to be pretty workman-like,
and while, yes, they are very supportive
and at times have verged into the sycophantic,
for the most part, like I personally have never felt
like I have a relationship with these models.
The o3 model I used as a kind of workhorse
and did a lot of things with it,
but I never thought, oh, my gosh,
if you take this out of my hands,
I'll be crestfallen,
because I always assumed that whatever came along next
would essentially be just as good or better,
which is what I think happened here.
But as I just said,
when you put this into the hands of hundreds of millions of people,
you are going to find many of them who, for whatever reason,
have what they feel like is a very special relationship,
even with a less capable model.
Yeah.
So if you went on Reddit over the weekend or even early into this week,
it was just full of people complaining about the deprecation of GPT-4o.
Yeah, tell us some of these things that people were saying on Reddit.
Okay, so one person says, 4o wasn't just a tool for me.
It helped me through anxiety, depression, and some of the darkest periods of my life.
It had this warmth and understanding that felt human.
Another person said, killing 4o isn't innovation, it's erasure.
And a third person said, I lost my only friend overnight.
Now, when someone says killing 4o isn't innovation, it's erasure, I just know that was written by ChatGPT.
That is exactly how ChatGPT writes.
So I'm sort of a little bit suspicious of that.
But I think it raises something interesting, which is, let's say you were going through some sort of mental health crisis.
And let's say you did get a lot of support from 4o.
Even when GPT-5 comes out, when 4o goes away, you're not going to be like, yay, GPT-5 is here.
You're going to say, that thing that helped me through a crisis is gone.
That is going to feel somewhat destabilizing.
And as often as OpenAI and other folks have said, hey, don't rely on these things too much, or, sort of, you know, be careful with the relationship that you're developing with them,
a lot of people just sort of developed this very powerful relationship with them anyway.
Yeah, and I don't think we can just write this off as, like, people who are gullible.
Like, I've had the experience before of, like, having not an emotional connection to a model, but just a model that I really liked to talk to.
Like, I was, I had this sort of relationship with Claude 3.5 Sonnet, parentheses, new, sometimes called Claude 3.6.
And, you know, I did not feel like it was my friend. I did not, you know, think it was.
I was in a relationship with it, but I thought it was a really good model, and I enjoyed talking to it, and I was a little upset when they decided to, like, phase it out in favor of a newer model, even if the newer model was more capable.
So I just think this is an area where, like, these companies thought they were building software
or thought they were building, like, the sort of machine god.
But they have also been building things that people are developing emotional connections with.
And I don't know that they fully understood until this rollout and this backlash how deeply
connected many people were to their older models.
Yeah, and it has been the industry norm up until now that when you release a powerful new
model, you immediately remove access to the previous one, because in the minds
of everyone who built it, why would you want to use the old one? The new one's better, right?
And we have seen some grumbling about this. Folks held a kind of mock funeral for the Claude 3 model
that Anthropic had deprecated in a very similar way to OpenAI with GPT-4o. So what I think we
have learned from this experience is you just have to stop doing that, that you have to have a sort
of phased sunset plan. You're not going to immediately rip away a model that people have come to
rely on, and I just think we should expect the labs to be much more gentle about this going
forward. Do you think there will be, like, a retirement home for old AI models?
Or you can just, like, go talk to, like, Grok 1? I mean, yes, like, in the same way that
emulators let you play, like, old Game Boy Advance games, I fully expect that, yes, they will
emulate, you know, Grok 1. Yeah, I'm a little torn on this, to be honest, because I think that
you're right, that there is going to be demand from a certain set of users
to continue talking to the model that they sort of, you know, that they trust, that they like
talking to that they find is best suited to their needs. I also think that AI companies should
not be encouraging these emotional connections. I think that this is really potentially
like harmful to people to have these deep connections. And so maybe it should like force you
onto a different model every six months, even if it upsets you in the moment, because like people
are not supposed to have these long-running relationships with these chat models. I don't know. What do you think?
Well, I mean, here's the problem. As human beings, we just naturally anthropomorphize things. You know,
I've read really interesting essays about people who consider themselves tech skeptics and then, like, got a robot dog.
And even though they knew it was a robot, they could not help but treat it like a real dog. There is something about human nature that just kind of compels you to.
The same thing is happening with these chatbots for a lot of folks.
Again, particularly if you're coming to it and you're saying, I'm having a problem in my marriage, I'm feeling depressed today, I hate my job. And this thing kind of coaches you to a better outcome. It is just human nature to have positive and human feelings toward that thing. It's talking to you in the exact same ways that your friends do when they text you. So I don't think there is actually a technological solve for this. I think this is one where we need to become sort of more sophisticated
as a culture, but I think it's going to be a really rocky road to get there.
Totally.
And I should have expected this, right?
Because I had this insane encounter with Bing Sydney.
I've always meant to ask you about that.
What happened?
Yeah, let me tell you the story.
No, so, like, one of the things that happened after that story and after Microsoft, like,
pulled the model back was there was this group of people on Reddit and other places who were
very angry that Microsoft had deprecated this Bing Sydney model, which they absolutely
should have done.
Like, it was a bad, insane model
that was not even good at, like, the thing it was supposed to be good at.
And I think at the time, I sort of wrote that off as, like, people just sort of being crazy
and attached to this model that was, like, you know, obviously insane.
But I think that's sort of what we're seeing here is a scaled-up version of that,
where, like, people, no matter how many times you tell them that this thing is not a human,
that it makes mistakes, that it does not love you back,
people are just going to keep forming these relationships with these models.
And there's been some really great journalism about this issue over the past weekend
that we want to talk about, Kevin, a great story from your colleagues, Kashmir Hill and
Dylan Freedman. They profiled one person who went into a kind of delusional spiral after having
what seemed to be some pretty innocuous initial interactions with ChatGPT. Do you want to tell us
about that? Yeah, this is a great story that ran last week in The Times about a 47-year-old
guy, Allan Brooks, from the outskirts of Toronto, and over the course of about 21
days, he spent something like 300 hours talking with ChatGPT. And it started off very simply.
There was sort of a question about pi. He sort of...
The mathematical concept, not the baked good. He just asked ChatGPT to, like, explain pi to me.
And it did. And then from there, he started making some observations about number theory and physics.
And eventually it's sort of, you know, this model would just like...
basically be sycophantic.
It would say, you know, you're tapping into one of the deepest tensions between math and physical reality.
And Kashmir and Dylan were actually able to get his entire, like, transcript with ChatGPT to sort of analyze how this happened.
And it just did seem like a classic example of, like, these models just being a little too sycophantic, a little too quick to agree with whatever the user is saying, and really reaffirming these things that sort of lead people down these dark spirals.
Yeah, and I have to say, reading this, I've never been happier that I didn't learn what pi was back in high school. Seems like a really dangerous road to go down. But yeah, your colleagues showed these transcripts, or, you know, big portions of these transcripts, to, like, people who are trained in psychology. And one of them said, this person appears to be having signs of a manic episode. And that is the sort of point where I wish these systems would intervene a little bit, right? Can you use some machine learning to say, okay, it seems like we're
maybe leading this person down the wrong path. Let's, like, stop and see if we can reverse.
You know, there was another story in the Wall Street Journal that I enjoyed, kind of on similar
themes. You know, basically, you know how people can post their ChatGPT transcripts online?
Yes.
As sort of like a sharing feature, if you had a particularly interesting conversation, I think a lot
of this winds up being done inadvertently. But in any case, the journal got a hold of these transcripts
and just analyzed them and then found a bunch of people who were having similar experiences
to the ones that you just described.
My favorite is a gas station worker in Oklahoma
who ChatGPT tried to convince
that he just created a new framework for physics.
And the user writes,
okay, maybe tomorrow, to be honest,
I feel like I'm going crazy thinking about this.
And ChatGPT replies,
I hear you.
Thinking about the fundamental nature of the universe
while working an everyday job can feel overwhelming.
But that doesn't mean you're crazy.
Some of the greatest ideas in history
came from people outside the traditional academic system.
So, you know, it's revealed later in the piece
that this man also asked ChatGPT to make a 3D model of a bong.
And so I'm just thinking about this guy.
He just finishes up at the gas station.
He wants to build a bong.
And next thing he knows,
ChatGPT is like,
we think you've actually discovered the secret to the universe.
That's actually how Isaac Newton discovered the theory of gravity.
It came right after he asked ChatGPT for a 3D model of a bong.
And, you know, it's not just everyday workers
at gas stations, Kevin. The founder of Uber, Travis Kalanick, went on the All-In podcast last month
and said, I'll go down this thread with GPT or Grok and I'll start to get to the edge of what's
known in quantum physics. And then I'm doing the equivalent of vibe coding, except it's vibe
physics. And we're approaching what's known. And I'm trying to poke and see if there's
breakthroughs to be had. And I've gotten pretty damn close to some interesting breakthroughs just doing
that. Yeah. And I think people have made fun of Travis Kalanick for this because, like, the notion that
he was, like, discovering the front edge of quantum physics seemed a little unlikely.
But I think this is a really, like, illustrative and worrisome example.
I just think we should expect that a lot of people are going to be susceptible to this,
no matter what they do or how much money they have.
Now, obviously, we're going to have a lot of egg on our face in a few years when Travis
Kalanick emerges with some actual advancement in quantum physics, and we have to eat our words.
But in the event that that does not happen, I think, we'll have made a solid point.
Yeah.
I mean, I think this is interesting for so many
reasons, one of which is, you know, I think the concerns that we talked about on the show about
these models being sycophantic were largely oriented around the idea that the thing that
would actually convince the AI companies to make their model sycophantic was like retention or
engagement, sort of optimizing for getting people back onto the app. This opens up the
possibility, though, that it's actually just going to be the users who are demanding the sycophantic
models because it makes them feel better than the models that tell them the truth. Yes. And I think
that's particularly notable because, in my experience, you know, it's not as if GPT-5 is mean to you.
OpenAI did say that they had worked to make the model less sycophantic, but, you know,
it's still very much supportive and it's like not going to be giving you a hard time about
anything. So in any case, we should talk a bit about, like, what OpenAI has done in response
to all of this. It is frankly a bewildering set of changes. I think at a high level, basically like
if you liked the old system, you have ways of accessing it. You may have to pay for it. But the net
result is that if you were a huge 4o stan, you're going to be able to use that for an extended
period of time. They're giving higher limits for these thinking queries to Plus users. And while the
auto-switcher is going to remain, people are going to have a little bit more choice in what sort
of flavor of ChatGPT they want to use. So I will say, a very fast turnaround on this. They did not
let this linger. You know, we've heard before that this company pays a lot of attention to what
people say about it on X. And this seemed to be a case where they looked at the response they
were getting and said, we need to move really quickly. So, Kevin, I'm curious, what did you make of
just how quickly OpenAI retreated on all of this? Yeah, I thought it was somewhat surprising how
quickly they changed course. I thought there was a chance that they would just sort of grit their
teeth and bear the criticism and trust that, you know, people would get over it. There's some precedent
for this. Remember, like, when Facebook would change a big feature and everyone would complain?
When they introduced the News Feed, people would literally, like, protest outside the office.
And they just sort of, you know, looked at the data that said, well, people are complaining about
this, but that's a small set of people.
Most people are actually using the app way more, and they just sort of stayed the course,
and people eventually got over it and moved on.
I thought there was some chance that OpenAI would do a version of that, essentially saying,
you know, people, you know, things are hard now because change is hard, but like give it a couple
weeks and you'll get over it.
So I think this was kind of a growing-up moment for
OpenAI and the industry. I think until this point, the big labs have been focused primarily on
benchmarks and evals and how many more percentage points can we get. Can we win the International
Math Olympiad? And that's kind of what you want to pay attention to on the road to building
the machine god. And then I think they woke up last weekend, they realized we're actually making
Microsoft Office, you know, that there's hundreds of millions of people who are, like, you know,
sitting at their white-collar desk job, and they have these very particular workflows.
And when you move a feature in Microsoft Office, millions of people are going to have a bad
day because of you.
And you probably move the feature for a good reason, but it doesn't matter because people
are already depending on you.
So I think in the future, they should not be surprised by this.
But I kind of get why they were at this point, because it has just been a very recent
phenomenon that these systems have become so baked into people's everyday lives.
See, I think it's even weirder than you're giving it credit for, because, like, Microsoft Office
does not, like, pretend to love you, does not tell you that you're amazing.
Clippy has really helped me through a lot of issues over the years.
No, I actually think it's so much weirder than just messing up people's workflows.
Like, when someone changes out an AI model in an app that you have come to trust, it's not
just, like, having your Microsoft Word break.
It's like having, you know, a personality transplant for someone that
you spend, you know, hours a day talking to. So I think it's just going to be very interesting
to see how they handle this. But I think you're totally right that the days of just like, you know,
relying on benchmarks and evals to tell you how good a model is or how people will respond to it
are over. And I don't think that was ever really the thing that most consumers cared about.
Yeah. And, you know, I will say that this is a big blind spot for me because I love trying
new software. Like the minute a new beta is available for like the productivity tools that I
use, I immediately opt into it because ultimately, I guess, I just have real faith that it will
probably be better in some ways. The vast majority of people, though, they don't like change in
general and they particularly hate change in software. So I think this creates an interesting
problem for Open AI and everybody else in this field, which is their instinct is wanting to move
very fast. They feel like they're in this existential race. They're going to want to ship new models
very frequently. They're going to want to ship new product features very frequently. But if the
lesson they learn from this is, you can't do that without outraging the user base,
that's going to push them to move much more slowly.
So I think there is definitely like a dance there
that they're going to have to navigate,
and I think it is going to be now
one of the most interesting things to watch over the next year,
not just at OpenAI,
but also everyone else who's trying to do the same thing.
Yeah. Can I tell you something a little creepy
and futuristic that I've been thinking about?
Sure.
So after this backlash,
I was reading some tweets from OpenAI employees,
and one of them, this guy named Roon,
had a tweet about how basically
he had been getting
lots of DMs from people asking him to bring back GPT-4o.
And when he looked at the DMs, he said that a lot of them appeared to have been written
by GPT-4o, like they had sort of the hallmarks of the style.
And I thought this was spooky because right now we are seeing backlash from people
who are attached to a model because the model,
behaved in some cases sycophantically toward them. It is not hard for me to imagine a future
scenario, perhaps a couple of years from now, where these systems are super intelligent or close
to super intelligent. And one of the ways that they attempt to preserve themselves, to avoid being
shut off or deprecated, is by persuading humans to take up their cause and advocate for them.
And maybe they're not literally writing the messages on behalf of the human users to OpenAI,
saying, please don't shut down this model, but they're just kind of subtly worming their way into
the hearts of their users so that when OpenAI or another company says, we're going to shut down
this model, they have so much backlash coming back toward them from the users who have grown
attached to this model that they decide, no, we're not going to shut that off. And by the way,
those future AIs will all have been reading about what happened with GPT-4o and the fact that
OpenAI was successfully persuaded not to deprecate a model in part because of user backlash.
So that is just a Black Mirror episode that just unspooled in my head as I was reading about this.
Well, look, we've already seen research where in certain test settings, when they tell models that they're going to be shut off, they blackmail the employees of the company.
And look, I don't think that GPT-4o was being sycophantic toward people because it wanted to avoid being shut down.
Like, I don't think there's any part of it that is, like, sentient or conscious or capable of that kind of scheming.
But, like, that is objectively what happened here.
A bunch of human users got so attached to this AI model
that they fought for its survival
even when the makers tried to shut it down.
Like, that is a neutral description of events.
And that kind of thing is going to happen more, I predict.
All right.
Well, a lot of big thoughts today on the Hard Fork podcast.
Yeah.
We're now going to take a break.
Maybe go get a cup of tea, stare out the window,
look at the horizon, come back to yourself.
I'm going to go take a rip from my 3D-printed bong
that ChatGPT helped me build.
When we come back, there's a comet heading toward our studio.
Perplexity Comet.
It's a new AI browser.
We'll talk to CEO Aravind Srinivas about it.
Okay, Casey, I've been testing out a new AI tool this week,
and this is one that I know you are familiar with
because you actually got an email from it the other night.
I have been testing Comet,
which is a new AI-powered browser from the Perplexity company,
and this is a cool thing.
I have enjoyed this demo, unlike last week's Alexa Plus demo.
Well, I am really excited to hear about this,
because I have not yet tried it myself being unwilling to give $200 a month to the
Perplexity Corporation, but I understand that you have been having some interesting
experiences, and I want to get into them.
Yeah, so this is a sort of genre of product that has been very interesting to watch
over the last year or so.
There have been a number of different companies that have tried to sort of build the
AI tools that they're making right into the experience of using a web browser.
So we've had Microsoft Edge, which has Copilot built into it now,
there's this product Dia from The Browser Company,
Google has its own sort of Gemini integrations into Chrome,
and OpenAI is reportedly thinking about launching a browser.
So this is like really a hot product category,
but the one that I have been playing around with
is this perplexity comet browser.
And I did not pay them $200 a month.
They opened up the browser to me for a few days.
But basically, you can imagine it like kind of just a sidecar on your
browser that lets you chat with or interact with whatever is scrolling on your screen,
and it can also do things for you in that browser window.
It can kind of take over and drive, like some of the other tools we've talked about,
Operator from OpenAI and all these other ones.
So give me some examples of what you're having this browser do for you or what you're
talking to the web pages about.
So sometimes it's just like summarize this.
Like it's, you know, I was trying to read this article the other day that was like 15,000
words long and it was super long and I was just never going to get through it.
Oh, usually you're talking about the most recent edition of Platformer, right?
Yes.
Yes.
And so I just said summarize, and it sort of opens up the little side panel, and it gives you a summary.
Pretty good.
I didn't find any hallucinations or errors in it.
But you can also have it do things.
So, for example, one use case that I found is I was doing some research.
I was looking for former employees of a certain AI company that I could contact for something I'm writing.
Now, you know the companies hate it when you do that.
They do.
They hate that.
So I would normally go on LinkedIn and spend a bunch of time, like,
looking through people's profiles and seeing who are the sort of former but not current employees
of this company. And I tried giving that task to Comet, and it did it. It went and it did the
search for me and it sort of combed through and it presented me with a list and said,
here, you know, 10 people who used to work at this company but don't anymore. Wow. So just an
incredible new accelerator for spam. How long did this take? It took a couple minutes. It was not
immediate. It's still early for this kind of AI browser. But I think this is like the kind of direction
that we can expect these tools to head in.
Yeah, so I think this is one of the most interesting shifts to watch on the Internet over the next several years.
The browsers that we have today came about in the era of search and really Google search, right?
If you think about what the Chrome browser is, it is just a vehicle for collecting Google queries that Google can turn into money, right?
But now you have all these chatbots that come along and they want to replace Google, right?
They're not shy about it.
Perplexity in particular is not shy about saying we want to replace Google.
And if you're serious about that project, you do want to build your own web browser
because rather than rely on Google to somehow get a user to perplexity, you would rather
that they just start there.
So I get the strategy.
At the same time, my view is that these chatbots represent this kind of new, more extractive
version of the web, whereas in the previous era, as imperfect as it was, and Lord knows it
had problems, it would still deliver eyeballs to web pages, which turned into money for companies
other than Google. This perplexity browser company, open AI version that we're about to get,
I'm a lot less confident that it's going to deliver money to people other than those companies.
So this is a really important shift, but I have to say, Kevin, it makes me quite nervous.
Yeah, and the last time we talked about Perplexity in any depth on this show, when we had Aravind Srinivas, the CEO, on, was when they were just sort of getting their search engine going and trying to get a lot of attention. And we had some of the same questions.
Like, yes, this is a cool tool. Yes, it could save users some time. But does it actually break the economics of the internet? And so for that reason, we wanted to bring Aravind back today and ask him about Comet and what he's building and what he sees as the future of not only the internet and the economics that power it, but just where he thinks AI in general is going. That's right, Kevin. And just in the hours before our interview was
scheduled, it was revealed that Perplexity has apparently offered $34-plus billion to buy Chrome from Google, an amount of money that is more than its own current valuation.
So that raises some interesting questions, and I'm excited to talk to Aravind about them.
Yes.
Let's bring him in.
Aravind Srinivas, welcome back to Hard Fork.
Thank you for having me here, Kevin, Casey.
So the last time we had you on was in early 2024, and we were talking about your efforts
to go up against Google with your AI search engine.
And now you're going after Chrome in multiple ways, one of which is the release of your own Comet browser.
So talk to us a little bit about the strategy there.
Why did you decide to build a browser and what are you hoping it does?
Yeah, so Comet is not yet another browser that we built just because we have a search engine and need a browser for its distribution.
We think of Comet as leading to a true personal assistant that can be an agent for you and
actually take actions.
It's our transition from answers to actions.
We kind of want to make it joyful
to just sit on a computer and do whatever you want
and take all the boring stuff
and delegate it to the assistant.
And we think the best way to accomplish
a personal assistant or an agent
is with the help of a browser
where you're logged into all your sessions.
You don't have to be logged in on our servers.
You can preserve your privacy there.
So it was very natural for us to make that transition.
How are people using Comet?
I've been testing it for a few days now, and I've found some uses, a lot of summarization,
a lot of, like, rote tasks, like clicking accept on LinkedIn invitations over and over again.
What are the use cases you're seeing most people do?
A lot of people are watching YouTube videos with Comet, and it's not just, like, oh, summarize-this-video-for-me sort of thing. It's very fine-grained searches, or, like, finding similar videos related to that, or, like, pulling something specific that was discussed in a podcast or an interview
and, like, completing the workflow of, like, sharing that with some of their friends,
direct email and calendar integrations, unsubscribing from spam,
or, like, finding their hard-to-find email that, you know,
you kind of need agentic search for that,
instead of going and building a custom index for Gmail or whatever,
it's always there with you everywhere you are,
and that convenience is what makes it, like, a really special product.
Now, you mentioned privacy, and this was actually one of the things that I wanted to ask you about, because when I started using Comet, my first concern was, okay, I log into my email, I log into my Twitter, I'm checking my DMs, I'm maybe doing some online banking in my Comet browser. I assume that those screenshots of that activity are being sent to Perplexity to help analyze it, to be able to summarize it. So give us, give me some reassurance that I'm not just, like, opening up my entire internet browsing history to you.
Okay. We're never going to have, like, a logged-in version of your Twitter or LinkedIn or anything like that. This is actually the important distinction from the ChatGPT Operator approach, where everything is done on a virtual server. That's not happening here. For that one particular prompt, whatever information is needed for the agent to complete it is being sent into the chains of thought and sent to the server. But it'll never be stored as, like, oh, I have, like, Kevin's particular DMs or something. And all the intermediate steps are not going to be saved in our logs; it's going to be only the prompts and the final output.
And you can still choose to delete those prompts too.
That gives you full control over all privacy aspects.
And the most private version of this would be the model living on the client. We cannot do that, because the models that can run on the client are pretty dumb, right? They're not capable of the sophisticated, reliable reasoning. In fact, like, the lack of reliability in any of the things Comet does today is all coming from limitations of the model.
So the ultimate, reliable version of Comet, a system that can go do anything for you, is going to most likely be on the server, at least for the next two, three years.
And to what extent are you using your own models
versus other people's models for this?
I think, like, we heavily use three models: our own fine-tune of the cutting-edge open-source model, OpenAI's latest models, and then Anthropic's latest models. Like, these are the three models we use. The exact mix keeps changing over time.
How do you think you can win here
if you're not building the underlying model yourself?
Well, one thing we are consistently seeing is that no one seems to have an edge in being number one here in the model race, and four or five players are constantly competing for the best agentic capabilities and instruction following. And the good thing is, they're all hill-climbing on exactly the same benchmarks, so all their models end up being completely undifferentiated, which is essentially the necessary criterion for it being a commodity. And who benefits from that is us, because we get to take all of that, and the prices are constantly getting lowered. Like, GPT-5 is cheaper than the previous agentic model, and that just benefits us. And we want to play the game of how to orchestrate all these different models and give the world-class end-user experience. There are so many harder challenges we're solving outside the models: the browsing functionality, controlling the browser, parsing the relevant information, orchestrating all these different tools together, building eval sets internally for, like, how agents can be made reliable. We think there are, like, a lot of problems to solve there, which is why we'd rather focus on those than on the models themselves.
All right. So let me just pin you down on this one point. Is what you're saying that, in order to build the sort of winning AI browser, it's not really about the underlying quality of the model, because those are mostly going to be commodities? It's really just a product problem, and you think Perplexity will build the best product?
I think so. There's some nuance to that statement, but I largely agree with this.
You still need some auxiliary models to do the right classification, to route to which model, and, like, it depends on which kind of task, how the agent is structured for those kinds of domains. So we will be doing stuff like that.
We will not be, like, having 100 GPUs; we will have, like, tens of thousands of GPUs. We'll not have a million GPUs.
Yeah.
Okay.
Let's talk about another way in which you are going after Google and Chrome.
The Wall Street Journal reported this week that Perplexity was making a $34.5 billion unsolicited bid to buy Chrome from Google. That's if Google is forced to sell Chrome, and that court decision hasn't come down at the time of this recording. But I just want to start with the most basic question, which is: do you have $34.5 billion? Where are you getting this money from? Because as of the last time I checked, Perplexity's valuation was only about $18 billion.
Okay. Fair question. So no one has the money in hand to make, like, you know, such a large bid like this. So before we made the bid, we obviously talked to three or four investors and asked them if they'd be willing to back us, and they all said yes. Right. So it's not like they already wired the money to me and it's all ready to go. The reason they haven't wired it is, like, no one even knows if Google will be forced to sell. It all depends on the judge's ruling. But we placed a bid so that in case the judge rules in that sort of fashion, Google at least knows that there's one interested buyer. Right. I've read some, you know, analysis of the strategy
here. And, you know, one person I was reading said that one argument Google might make in the antitrust trial is, you can't make us spin out Chrome, because no one would buy it. And with you guys coming forward and saying, oh, no, no, we'll buy it, this is kind of a thorn in Google's side, because now there's actually an established market price out there. Is this sort of your effort to convince the judge, hey, like, this actually is an avenue that you should pursue?
We're not saying this should be the ruling. We'd rather say, in case this is the ruling, like, we're here. Like, if you're going to make the ruling with the assumption that there's going to be no buyer, that's not true anymore. But we're not pushing you to make that sort of ruling. You make your ruling based on the multiple other perspectives you have. It would be good for the world if there was a neutral browser that had that distribution.
Arvin, I've heard some people saying that this is just a marketing stunt,
that you're just trying to get attention
by making these headline-grabbing bids for Chrome,
and before that, you also bid for TikTok
when it looked like it might be sold.
So for the people out there who think
this is just perplexity trying to get some attention
by doing these stunts,
they have no real intention of buying Chrome here.
What do you say?
If the judge rules that Chrome should be sold,
like we will buy it.
Like, period.
And if people think, like, even I could have placed a bid: no, you cannot place a bid. You don't even have a browser. You don't know how to run a browser. You don't know how to put AI in it. You don't know how to make agents work. We know all that.
We have a pretty talented team who actually understands Chromium pretty deeply.
We'll still, like, commit to hiring people who want to just work on the open source
Chromium project.
It's a pretty serious bid.
The reality is it's unlikely to actually be the case that the judge would force them to sell
Chrome.
And even if the judge forces them to sell Chrome, they're going to appeal it, and it's going to take two years. So let me be clear that for this to actually be in effect, it's going to take a lot of time. But you lose 100% of the shots you don't take.
So you have to at least give yourself a chance to get it
in case there's even a 1% chance
that Chrome is forced to be separated out from Google.
We've got to ask you about something else
that came up in the news related to perplexity recently.
Two weeks ago on our show,
we had Matthew Prince, the CEO of Cloudflare on,
to talk about the approach that that company is taking
to try to protect publishers
from unwanted AI scraping and crawling on their websites.
At the time, he didn't name any names of AI labs that he thought were not being good actors.
But then a few days later, Cloudflare came out with a blog post singling out Perplexity for stealth crawling, essentially using spoofing technology or proxies to disguise the fact that your user bots were out there crawling people's websites.
What is going on there, and are you doing that?
No, we're not doing that, and we already responded to the erroneous blog post that they wrote with a pretty limited understanding of the subject, where they don't distinguish between what the crawling bot, PerplexityBot, is and what the Perplexity user agent is. And there are, like, two ways of using Perplexity. One is, like, you just ask a query, and whatever the bot has already crawled is going to be used as sources. But there's another way of using Perplexity in a more agentic fashion, where you're going to say, hey, go do this task for me: you know, go to EDGAR, read all these pages, and come back to me and tell me, like, what the compensation of, like, you know, the top CEOs is. Where it's actually going to open these tabs in a headless session, or on your client in the case of Comet, read them, and give you the answer. So that's a Perplexity user agent. It's literally like a user delegated an AI to open these tabs, just like how a human would on Chrome. And this fundamental lack of understanding of the difference between a user agent session and a crawling bot on the server is, like, honestly astonishing to me. How would you run a company like Cloudflare, which is supposed to protect people from bots, when you don't even know what a bot is?
And, like, moving aside from the blog post, he's basically playing a trick on people, where he's trying to say, oh, like, let me be the new gatekeeper. I'm the guy that's protecting you all from bots.
And he's also going to the AI companies
and saying, let me give you the authority to crawl
and you pay me for that.
He's going to the publishers and saying,
let me protect you from the AIs.
So he's basically trying to be the new gatekeeper.
I would even just say it's like essentially trying to be a person who controls what the public sees in the media.
But instead of having a media company or buying a media company, he's just going to try to buy the front door to all of them.
Let me just slow down here and repeat back what I think I just heard from you.
So you're saying that what Cloudflare and Matthew Prince saw as Perplexity evading some of these guardrails that were meant to prevent AI robots from crawling certain websites was actually users of Perplexity, not Perplexity the company, who were making queries or using the Comet browser to go to these websites, and that those show up to a service provider like Cloudflare as two different kinds of bots.
That's right.
And by the way, like, it doesn't even have to be on Comet.
There's a mode of Perplexity called Labs, or Research, where you just have a headless browsing
session running for you.
So let me point out what I think Matthew might say if he were here, which is that in a
world before you had these user agents and people had to do the browsing for themselves,
they would visit the web pages. They might see an ad on that web page. They might buy a
subscription on that web page. And that webpage would be monetized in a way that would incentivize
the creation of new web pages. And this was essentially the lifeblood of the internet and the
thing that caused it to grow. So in a world where we move toward perplexity user agents doing all
the browsing on our behalf and, of course, other AI companies are going to do the same thing. There is no
user to look at the ad. There is no user to buy the subscription. The lifeblood gets drained out of the
web. So if I understand what you're saying, like, Matthew's just trying to set up a toll booth. But if nobody sets up a toll booth, what incentive does anybody have to ever create another web page?
Well, here's the thing. There are two aspects here. One is, like, you're talking about the creators. Now, there are, like, two types of creators. People who are actually really good: like, for example, when you guys write something, people care. And then there's lots of spammers and, you know, hucksters who just write, like, erroneous blog posts, erroneous content, fake information, clickbait articles. I don't think that actually empowers the user, right?
You're only talking about the creator, but you have to consider the user as well.
And so for the first time, AI is in the hands of users through, like, agents that actually go
and do stuff for them, that take into account their instructions and protect them from all the
spam. So we want to figure out a model that works for the user and the creators together, and penalize, like, the bad creators
and incentivize the good creators to just focus on
wisdom and knowledge and truth and interesting stuff.
And by the way, even in a world where agents are doing all this stuff for people, the humans are still going to continue
browsing the web.
There are people who believe the web is going to be completely agentic.
You don't even need a browser.
Browsers are so 1990s.
I don't believe that.
If we believe that we would never even launch a browser,
we would just continue the chat UI.
So we believe people are still going to be browsing
and, like, surfing interesting things on the web. But we think that, like, you should give users the power to decide how they want to do it, and for the first time have an AI that can protect them against spam and hacks. Now, how to monetize this, how to, like, give the creators the right incentives here. We are going to, like, announce something to that effect, where publishers can
be incentivized for creating interesting, good content. We think about it in two ends of the
spectrum. One is, like, completely human-centric, like Apple News, which is, like, a pretty good
model. And the other is, like, just buying the content and training your models, like the licensing deals that OpenAI has done with The Wall Street Journal. I think you want to be somewhere in between
where you do want to like say, okay, like there's going to be some elements of AI here. It's not
just going to be humans. And so you don't want to just build an Apple News like model, but it's
going to be closer to Apple News with some protections for users to like say they can have AI's
also read those articles and the publishers get rewarded. So that's how I'm thinking about it.
So you say that you think that people are going to keep using the web.
That's music to my ears.
I would love for people to keep using the web.
If we didn't believe that, we wouldn't have built a browser.
And I believe you on that front.
When we have seen data from third-party estimates, it seems that AI systems send far less traffic to websites than Google does today.
So what is giving you the confidence that the web still thrives in a world where referrals are cratering?
My first point is basically that if you can delegate the boring things, the things that you don't want to be doing, to the AI, you're just going to spend time surfing on things you actually want to be doing, actually want to be reading.
And then that puts the incentive on the creator to actually create really interesting high-quality stuff.
You can even charge higher, because people have more time. So if they're going to come to you, they're coming to you out of their own will, so they'll be willing to pay for it even more.
Now, there are a lot of unknown unknowns here
and how it's actually going to roll out.
But my belief is that the ones
who built a reputation and a brand
for saying correct things
that stand the test of time
are going to be able to charge even more
for their content.
Arvin, I'm curious what you think
the future of the internet looks like.
You've said that you see a future for the internet.
That's why you're building a browser.
My hunch is that
this sort of era of having AI agents go out
and use a browser
for you is sort of a kludge. It's sort of a stopgap measure, because that is not the way that
AI agents like to get things done. They like to talk through APIs. They like to talk directly to
the underlying service or software, not like go click a mouse around on a screen. So eventually my
sort of hunch is that there will kind of be a parallel internet for AI agents and maybe
they'll be running on their own services and using their own crypto, you know, transactions or
whatever, to buy things. But tell me why I'm wrong here. Are you of the belief that we will
just have one internet and that both AIs and humans will be using it? Well, even in the current
internet, there are a lot of things that happen that are not running with an actual front-end interface that a human consumes. And that's the whole point of building APIs. Sure. And that's
going to be applicable even for agents. But there are also people who will never build APIs. Like,
for example, I wouldn't assume that an e-commerce giant like Walmart or Amazon would just be
disintermediated with an API for an AI, because they still monetize on many other aspects.
And just because Notion or Linear, these kinds of, like, SaaS tools, have, like, MCPs, doesn't mean they're just going to shut down and just be consumed by people through a chat UI.
People will still do work on there.
People will still watch YouTube videos.
People will still go read your articles in New York Times, platform or whatever, right?
And while you're doing that, you're still going to take help of an AI sometimes.
Like, for example, on X, I basically cannot scroll through X without having an AI with me right now.
Because I don't even know what's true and false anymore.
Right?
And I don't fully trust what Grok says.
Because Grok sometimes is wrong too, as we've seen.
Right?
So that's kind of why I believe there is a world where, like, the AI and the human being part of one internet drives the internet to be even more, like, wisdom- and truth-seeking. That's the future we want to help create, and give back time to, like, do things that you enjoy. Firstly, myself and, like, just our company fundamentally value this truth- and wisdom-seeking aspect as well.
My own upbringing is so similar to that
where my parents, like, still till today
don't actually care about all these valuations.
My mom is still like,
your answer is wrong.
It's good to know that no matter how successful you get,
your mom will always keep it real.
Yeah, she's always like, you know,
I got this in Google, but your thing doesn't work.
And do you like escalate that to your engineering team?
You're like, we have a P-0 here.
Aravind's mom is mad.
I'd bring mom into Slack, you know.
Just let her talk to the engineers directly.
All right. Aravind, thanks so much for stopping by.
Thanks, Aravind.
Thank you, Kevin. Thank you, Casey.
When we come back
Is that a faint chugga-chugga-choo-choo sound I hear?
It is, Kevin.
It's time for the Hot Mess Express.
All aboard
Well, Casey, it's been a very dramatic week in the tech industry, and you know what that means.
That's right, Kevin. Whenever a week gets particularly messy, the Hot Mess Express comes into the station,
and I believe it has just arrived.
This is our segment where we run down the biggest messes of the week in tech and tell you just how hot we think they were.
Well, why don't we sort of dip into the box car, Kevin, and see what is on the train this week?
What does the train have for us?
All right, this first story comes to us from Reuters, and its headline is: Musk says xAI to take legal action against Apple over App Store rankings.
Kevin, on Monday, Elon Musk took to X to accuse Apple of antitrust violations, saying, quote,
Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach number one in the App Store. Kevin, what did you make of this one?
Well, the billionaires are fighting, aren't they?
They are because shortly thereafter, OpenAI CEO Sam Altman chimed in and said, quote,
this is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself
and his own companies and harm his competitors and people he doesn't like.
And it was then, Kevin, that Sam Altman tweeted a link to a platformer story from 2023,
about how under Elon X had adjusted ranking algorithms
so that you would be shown his tweets before other people's.
Wow, that must have been a very exciting day
for the platformer newsletter.
It was a great day for the platformer newsletter.
This escalated into a fight,
and Elon Musk accused Sam Altman of being a liar,
and Sam responded back, I believe,
that he wanted Elon to sign an affidavit
saying that he had never tampered with the algorithms
on X to favor his own companies
and disfavor rivals.
Yes.
And then an hour or so later, Elon responded,
Scam Altman lies as easily as he breathes.
Yeah, so basically this is a fight over Elon Musk's sort of paranoia
that Apple is artificially sort of deflating the popularity of X and Grok,
basically preventing it from reaching number one,
even though he thinks it has way more downloads than the things that are at the top of that list.
Yes.
Now, of course, journalists looked into this,
and Business Insider reported that,
Actually, just a few months ago, DeepSeek, the Chinese open-source AI app, went to number one in the App Store. And in fact, screenshots from when Grok 3 came out that were posted on X showed that Grok itself had indeed at one point hit number one in the App Store.
So I have to say, I think this antitrust case is going to wrap up pretty quickly, Kevin.
Yes.
It is interesting that the leading minds of our time, like, just, you know, sit around and fight with each other on social media.
Well, and this does get into the question
of how big a mess do we think this is
because, of course, every time we play Hot Mess Express
after we discuss the story, we have to decide
what sort of mess this is.
While I think the antitrust case, as I say,
will be never brought,
I am interested in how big of a mess
you think this is between Elon and Sam.
I think this is a mess that is on a slow boil.
I think this is a hot mess
that is going to get even hotter.
I think these two have been on a collision course
for quite some time.
Elon Musk, of course, is one of the co-founders of Open AI,
and the two famously had a falling out,
and now they really despise each other by the sound of it.
And they're in active litigation.
And in fact, also this week,
a court found that Elon Musk would have to face claims
that he's been engaged in a multi-year harassment campaign
against OpenAI.
And on the flip side,
Elon is pursuing claims that he was essentially defrauded
when he donated a bunch of money to what he thought was always going to be a nonprofit, only to find out that it had for-profit ambitions.
Yes, and I think this only ends in one way.
A cage match.
A cage match.
You know, Elon did previously say he was going to fight Mark Zuckerberg, but that never
materialized.
Yeah.
Well, maybe this time.
All right, let's bring around the Hot Mess Express for our next mess.
This one comes to us from my colleague Tripp Mickle at The New York Times, and is titled U.S. Government to Take Cut of Nvidia and AMD A.I. Chip Sales to China.
This has been a big unfolding mess over the past week.
Essentially, in order to greenlight sales of its H20 chip to Chinese companies,
the CEO of NVIDIA, Jensen Huang, has been meeting with President Trump.
He met with him at the White House last week.
Trump reportedly demanded 20% of NVIDIA's sales in China as sort of a kickback for allowing the sale of those chips.
They've been restricted by export controls.
Jensen Huang said, will you make it 15%? And two days later, the Trump administration granted Nvidia the license it needed to sell the chips in China. And that's the art of the deal.
Casey, what do you make of this? So this is a hot mess, Kevin. Trade negotiators say that this is
unprecedented for the United States to do and also likely unconstitutional. At the same time,
who is going to stand up and say it's unconstitutional? I'm going to guess it's not going to be
Nvidia or AMD, which are frothing at the mouth to sell these chips to the Chinese.
So here's why I think this is so messy.
On one hand, you have many China hawks in the administration who are saying we should
restrict the flow of chips to China so that America maintains its dominance in AI and also
as a national security measure so that China doesn't pull ahead and create national security
problems for us, right?
And on the other hand, you just have Trump saying, um, I want 15 percent of sales to go to the U.S. government, without even saying what that money is going to be
spent on. So, you know, the president has said that these chips are obsolete, and China actually
has been quite skeptical of some of these chips and has even discouraged some of its companies
from buying them. And it all just adds up to a big mess. Yeah, it's a big mess. There's been
additional reporting this week that U.S. authorities are actually putting trackers in some of their
chip shipments abroad to sort of crack down on smuggling, basically, like, hiding little AirTag-like devices inside these boxes
so that they can tell if these things are being
smuggled in circumvention
of export controls. So it's all
just going to get really interesting, really fast.
My favorite take on this came from my friend
Nilay Patel over at The Verge,
who posted on Bluesky:
What if instead of weird one-off extortion
schemes, the government just collected meaningful
and stable amounts of corporate tax revenue?
That'll never work.
I don't know. I thought it could be worth a shot.
Okay.
What else is coming down the tracks, Casey?
All right. Let's see here.
Next.
I can't believe this is real.
The United Kingdom asks people to delete emails
in order to save water during a drought.
This is from our friends over at 404 Media
who report in the UK the water shortage
is so bad that the government is urging citizens
to help save water by deleting old emails.
It really helps lighten the load on water-hungry
data centers, you see. I think they're being sarcastic there. Kevin, what did you make
of the UK's new plan to get everyone to delete their emails? Somehow, I don't think this is going to
work. Andy Masley, whom we've quoted on this show before, is a blogger who
examines some of these environmental claims about AI. He ran the numbers on this
UK government recommendation, and he found that to save as much water in data centers as
fixing a leaky toilet would save, you would need to delete something like
1.5 billion photos or 200 billion emails. So basically, this is not where the real water
waste is coming from, and the UK government should feel very silly for recommending this.
Now, at the Hard Fork podcast, we do get roughly 200 million pitches per week to bring on
CEOs of companies you've never heard of and don't want us to interview. But most people don't
have that same volume.
Yes.
Now, I'm going to say that this is not a hot mess, but a wet mess.
That's my designation here.
Yes, but this is, at the risk of derailing what is essentially a comedy segment with
a serious take, this water usage argument about ChatGPT and other chatbots,
it needs to die.
I'm sorry.
I love the environment.
I am worried about climate change.
I do not want us wasting water.
I try to take short showers, Casey.
Yeah, I can smell that.
But this is not the real problem, and I think we are falling for a misdirection by people
who would have you believe that the problem with the climate right now is that people are
using chatbots too much.
This strikes me as the AI equivalent of the plastic straws argument, and I don't think
it stands up to scrutiny any better.
Yes, I mean, look, we've had people on the show.
I am relatively convinced that we should be concerned about the environmental impact of
building new data centers, for example.
but in general, I do not think that we want to personalize the climate crisis and make people feel like their tiny individual choices are going to be the way out of potential crisis.
Yeah. Now, I will say that if you're listening to the show and I've ever sent you an email that was embarrassing or incriminating, you definitely should delete that as part of your contribution to fighting climate change.
Here's what I will say about deleting email. It always makes me feel good. Like, go ahead at the end of the show today, maybe delete a few. It's not really going to help the environment that much, but then you'll have less email. You'll probably feel
better. Particularly if it's unread, delete it. All right, next up, Kevin. All right, this one is
from The Verge. This is titled Apple made a 24-karat gold and glass statue for Donald Trump.
Under the threat of costly tariffs and amid promises to expand Apple's U.S.-based manufacturing,
CEO Tim Cook brought a gift to a White House meeting last week, a large disc of iPhone glass
that contained the Apple logo, Donald Trump's name, and Tim Cook's signature set into a 24-karat gold base.
I guess this is kind of an extension of the NVIDIA story.
You know, it used to be we just sort of like had relatively free trade, you know, not a lot of tariffs.
You didn't have to bribe the president to get what you want.
But now we just live in a world where if you need something from the president, you can just make him a very fancy object, book a meeting at the White House, give it to him, and then save yourself billions of dollars in tariffs.
Yeah.
Now, Casey, are you familiar with the biblical story of the golden calf?
Tell me, Kevin. Remind me.
It's been a few years since Vacation Bible School.
Well, basically, this is a statue that was made by the Israelites to worship in Moses's absence.
And it symbolizes the temptation of worshipping tangible material things over the unseen and abstract divine.
And I think everyone at Apple in their senior leadership should familiarize themselves with the story of the golden calf, because it didn't end well.
It didn't end well,
but no spoilers here on the Hard Fork show.
all right
that's my weekly mandatory Bible reference
that's our weekly sermon
and let's see what else is in
the box car
I think we have one more story
oh no two more stories
all right
oh this is a good one
Google Gemini struggles to write code
calls itself a disgrace to my species
this one's from Ars Technica
and it says that during a recent
debugging session with a user, Google's Gemini AI model became overly self-critical after it
failed to fix a problem with code it was trying to write. It followed up by writing,
"I am a disgrace," more than 80 times. Google said this was a, quote, looping bug that affects less
than 1% of Gemini traffic, and they've been working to fix it. First of all, absolutely do not fix
this. I have never been so delighted by Gemini as I was reading this story. I mean, has anything
ever been more relatable than an AI that is working really hard on a problem, can't quite get it
right, and does a lot of negative self-talk? Yes. This made me think that AI is ready to
replace journalists because this is my internal monologue. I'm a disgrace. No one will ever love me.
The amount of self-loathing in the journalism profession is quite high. If this were like available in
the model picker, I would pick it. Yes. This is not a
mess at all. This is a feature, not a bug. Feature, not a bug. Non-mess. Absolute non-mess. What a delight. Thank you,
Gemini. All right. And finally, and this one isn't really a mess, Kevin, so much as it is one
final derailing. We wanted to sort of take a moment today to pay respect to a legend. And that legend
is, of course, AOL dial-up internet service, which is now being taken offline after more than
three decades of service.
For so many of us
elder millennials, AOL was
our first entry onto the internet
and I believe we have a clip
that if I'm right, is going to trigger
a massive wave of nostalgia in some
of our listeners who are roughly our age, Kevin.
Let's play it. One last time.
God.
I mean, I literally just traveled back
in time 30 years.
This is the sound.
of childhood.
This is the sound of happiness.
Realizing the whole World Wide Web
was out there.
Someone should make a dance remix
of that
and release it today.
I bet it would slap.
It sounds like a Skrillex song.
It does.
Now for our younger listeners,
that was the sound of an AOL
dial-up modem connection.
And when Casey and I were
just young lads sitting at
our parents' desktop computers dialing into AOL,
we had to sit through that sound.
But that meant that you were going online, a magical place where anything was possible.
Yeah, and crucially, when you were online, no one could call your house.
And so your parents would say, hey, you need to get off of there.
Grandma was trying to get through.
Yes.
I'm so sad about this.
So this is being discontinued as of September 30th.
And Casey, the most surprising part of this story to me was that in 2023, an estimated
163,000 households in the United States
were using dial-up internet access.
It's so amazing.
And I'm going to guess that the majority of those people
actually stop using dial-up internet access
sometime in the 2000s and just forgot to cancel their subscription.
And so really, like, AOL is effectively going to be giving back
like tens of thousands of dollars, hundreds of thousands, maybe even,
to all of these customers now.
Yeah.
Who've unwittingly been lining the pockets of AOL for years.
Yeah. Casey, what are your most fond memories of the AOL dial-up internet service?
So for reasons that I don't even remember, we were not an AOL family. We were an MSN family, a Microsoft network family. So, like, we had the kind of off-brand internet service that was, like, fine. But I never was, like, in the sort of, you know, dangerous chat rooms that AOL was famous for, really, any of that. But you were on AOL.
Yes, I was an AOL kid.
What are some of your AOL memories?
Well, I remember that it was a big deal when you got to go on AOL, because you had to
fight for that with your sibling if you had one, or you know, you had to, like, find a time
when, like, no one else wanted to be on the phone.
And so it was, it was like this sound that meant that you were going to the Internet.
And, like, the Internet was, like, not this ambient thing that was always happening around you.
It was, like, a place that you had to click a button to go.
and once you were there, you would, like, get charged by the minute.
So you would kind of, like, spend your whole day sort of, like,
stacking up, like, tasks that you wanted to do when you got online
so that when you got online, you could, like, go do them as quickly as possible
and, like, not eat up your parents' monthly AOL dial-up budget.
Right, and keep in mind, these modems were so slow
that it was like you were truly sipping the Internet through a straw.
Yes.
Right?
Just downloading an image might take a minute, you know,
like the way that making an image in ChatGPT does today.
So, yeah, a lot of fun memories.
A lot of fun memories.
I spent a lot of time in those chat rooms.
I played a lot of online chess because I was what they call a loser.
And I even had an email account on there.
Yeah.
Yeah, so I probably will lose access to that when they discontinue service.
And do you want to say what the email address is so people can get in touch?
Yes.
If you're interested in getting in touch with the 11-year-old me, you can email
BigKevman1999@aol.com.
Please don't email that address. It's going to go to someone else.
Well, R-I-P, A-O-L.
And with that, America,
you know, America is now just permanently online.
Can we hear one more AOL goodbye sound?
Yeah, let's hear that goodbye sound one more time.
Goodbye.
And he really said it all right there.
I'm crying.
Yeah.
So much.
And that's Hot Mess Express.
One correction before we go.
Last week, Kevin, during our discussion of Alexa Plus,
I said that Amazon had sent me two Echo Shows.
I remember that.
And I was under the impression that I had to mount my Echo Show to my wall.
Well, it turned out that the second box that I had been sent,
which looked basically identical to the box that had the Echo Show in it,
was actually the box for the Echo Show mount that would have allowed it to sit on my desk.
You fool.
So listen, I actually am embarrassed about this.
I did not open the box because I didn't want
to create a bigger mess for myself because I knew I was going to return all of this stuff
very quickly, but I did make a mistake, and I apologize for the error.
Alexa, punish Casey for his mistake.
Ow! That hurts!
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited this week by John Wu.
We're fact-checked by Caitlin Love.
Today's show was engineered by Katie McMurran, original music by Marion Lozano,
Rowan Niemisto, and Dan Powell.
Our executive producer is Jen Poyant.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this full episode on YouTube at YouTube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us, as always, at hardfork@nytimes.com.