Hard Fork - Character.AI’s Teen Chatbot Crackdown + Elon Musk Groks Wikipedia + 48 Hours Without A.I.
Episode Date: October 31, 2025

This week, Character.AI announced that it would soon be taking its A.I. companions away from teens. We explain why this is a major development in the world of chatbots and child safety and how we expect other companies to respond. Then, Elon Musk has built a Wikipedia clone called "Grokipedia." We discuss what it means for the control of knowledge on the web. And finally, we're joined by A.J. Jacobs to talk about his 48-hour experiment in which he tried to avoid all forms of artificial intelligence. We ask why that led him to collect rainwater and forage for food in Central Park.

Guests: A.J. Jacobs, author, journalist and host of the podcast "The Puzzler"

Additional Reading:
Character.AI to Bar Children Under 18 From Using Its Chatbots
Elon Musk Challenges Wikipedia With His Own A.I. Encyclopedia
48 Hours Without A.I.

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
Well, I wonder if you saw this, you know, I keep very close tabs on celebrity news, Kevin.
I know you do.
And in particular, I'm always interested, has any Hard Fork guest sort of entered the world of celebrity?
Because that's always very exciting for me.
It is.
And this week, it was officially confirmed that Katy Perry, the pop star, is dating one-time Hard Fork guest and former Prime Minister of Canada, Justin Trudeau.
I was wondering where you were going with that.
I'm not sure I would put Hard Fork guest at the top of Justin Trudeau's resume.
To my parents, that is the main way that Justin Trudeau is known, is as a Hard Fork guest.
And now as Katy Perry's boyfriend.
Yes, he came in like a dark horse, and now they're dating.
And for all I know, she's living a teenage dream right now with the former Prime Minister of Canada.
Hey, she's not a teenager.
No, have you listened to the song?
Do you know what the song Teenage Dream is about?
It's about falling in love with someone that makes you feel like it's a teenage dream.
Oh, my God.
And let this be a lesson to other newsmakers, celebrities.
If you come on the Hard Fork podcast, 12 to 18 months later, you may find yourself dating a celebrity.
For all we know, that is how Katy Perry became aware of Justin Trudeau.
She was watching over at YouTube.com slash Hard Fork, and she saw this man talking about Canada, and she said, baby, you're a firework.
I'm going to be honest, I've run out of Katy Perry song titles.
I was impressed that you kept it going this long.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the company that said chatbots aren't safe for kids,
why Character.AI is taking AI companions away from teens.
Then, Elon Musk built a Wikipedia clone.
Let's see what it says on Kevin's page.
And finally, journalist AJ Jacobs is here to talk about the two terrifying days
he spent without any artificial intelligence at all.
My God. I hope he's okay.
Well, Casey, this week we got some really surprising news about a company that we have talked about on this show before.
This is Character AI, the company that makes these sort of realistic chatbot companions.
We talked about it about a year ago on the show.
In the context of this very tragic story of Sewell Setzer III, a 14-year-old boy who took his own life after becoming emotionally attached to a Game of Thrones chatbot on Character.AI.
We got a big update on that story this week, which is that Character AI is barring minors, people under 18, from having conversations with its chatbots.
It is basically saying, we are not going to offer this service to minors anymore.
Yeah, and there is some nuance here that we'll get into. But at a high
level, Kevin, I think this is one of the most dramatic steps we have yet seen from a major AI
company to try to address the very real harms that these technologies pose, particularly to young
people. Yes. So let's get into the details. But before we do, let's make our AI disclosures.
Casey, what is yours? My boyfriend works at Anthropic. And I work at The New York Times, which is suing OpenAI and Microsoft over alleged copyright violations.
Yeah.
So just to remind folks who may not remember the initial story,
Character AI is a company that was started several years ago
by leading AI researchers from Google who left that company.
Noam Shazeer and Daniel De Freitas were its two co-founders.
They were frustrated that Google was not sort of releasing this chatbot they had worked on.
And so they said, we're going to go off and build our own startup
where we're going to release these chatbots
based on large language models.
You can make characters.
You can talk with them.
It's sort of a role-playing app experience.
And it became enormously popular with young people.
This was one of the first generative AI chatbot-based apps
that really took off.
And many of the users were teenagers or even younger.
And if you went on, as I did, and I spent some time reporting on this, there were just a lot of chatbots that seemed really aimed at young people, chatbots that would sort of take the persona of your friend at school or a bully or your crush.
It was like a very young seeming app.
Yeah, or also characters from Game of Thrones or you name the franchise.
You know, I think the original idea animating character AI is, hey, what if you could chat with a lot of copyrighted material that did not belong to character AI?
And it turned out that that was hugely popular with a bunch of kids who wanted to talk to, you know, Pikachu or whoever.
Yeah, and the company, when I was reporting on it a year ago, wouldn't tell me how many of its users were under 18, but said that it was, like, a significant
number. And so when I hear things like that, I just assume that this is an app that is
predominantly used by young people. For that reason, it's a very big deal that they're making
these changes to basically wall themselves off from young people, at least for their central use
case. Yes. And I think that that comes after really sustained public pressure in the wake of
Sewell's death. There are other lawsuits against the company. And I have to imagine that at some
point, the lawyers at this company said that the legal risk to us is simply too great. We believe Character.AI has about 20 million monthly users; that is the figure that I have seen reported, and
they think that there's a better opportunity for them building for adults, at least in the
moment, than in making this technology available to kids. Yeah. So let's talk about the specifics
here about how this is going to work. Character AI put out a blog post this week, spelling out the
changes that they're making. They say that over the next month, they will identify users under 18 and begin giving them time limits on their ability to chat with characters.
That limit initially, they say, will be two hours a day, and it will ramp down in the coming weeks.
And by November 25th, so roughly a month from now, under 18 users will not be able to converse in open-ended conversations with any character AI chatbots.
Basically, they are going to limit the length of the conversations, maybe the topics of the conversations,
and they are going to try to give teen users other ways to, as they say, be creative, for example by creating videos, stories, and streams with characters,
but they will not allow this kind of open-ended role-playing experience anymore.
Yeah, so if you just want to create a little bit of synthetic media featuring these characters,
that's okay.
But what's not okay is essentially the thing that seemed like it was really problematic in Sewell's case,
right?
Sewell had developed this very intense relationship with a chatbot that was called Daenerys Targaryen, as in, you know, the Game of Thrones character.
And I think there are a lot of concerns about kids getting into these very emotionally
heavy relationships with these synthetic characters.
It can kind of take them into a world of delusion.
It can separate them from their friends and family.
That's the sort of thing that's not going to be allowed anymore.
Yes.
So it's unclear exactly what Character.AI is going to do now that it's giving up on what is essentially its sort of entire core use case for young people. They say that less than 10% of their current users are self-reporting as being under the age of 18.
That's according to their CEO.
But obviously, that is self-reporting.
And I think a lot of teenagers are lying about their age.
Do you know how many times I lied about my age on the Internet?
Yes.
So I have had an experience over the past couple of months where I have just started to feel like this is the most important and least understood topic in technology right now. I was recently at a high school, and I often like to sort of poll students about how they're
using AI. And so I asked at this high school, like, raise your hand if you have an AI friend,
and about a third of them put their hands up. This is something that was, I think, a year or two ago,
considered kind of fringe, kind of unusual for young people to have these intimate relationships
with the chatbots. But the chatbots have gotten better and more compelling and more
persuasive, and it is just starting to become this, like, mass social phenomenon.
There's one study, a survey done by Common Sense Media recently, that found that 52% of American teenagers are regular users of AI companions, which is a startling figure and represents just, like, how quickly this all is happening. And another stat that I found very alarming from this survey was that nearly one third of teens find AI conversations as satisfying as, or more satisfying than, human conversations.
Absolutely.
And why is that?
We've talked about it so many times on the show.
These chatbots are designed to be agreeable,
to tell you that you're correct,
and to support you.
And that's not inherently a bad thing,
but if it becomes your primary mode of socialization,
it does seem like there is some real danger here.
And Character.AI is the first company that has said,
instead of trying to introduce these sort of,
you know, mealy-mouthed incremental tweaks and guardrails,
we're actually just going to shut the whole thing down
until we can figure out what's going on.
Yeah, and I think in the minds of a lot of parents
or people who, you know, understand
and are worried about this phenomenon
of AI companions for young people,
there is a sense of relief about these changes,
a sense that maybe this one company,
at least has decided to put people's health and well-being
above their own profits.
And I was texting a little bit this morning
with Megan Garcia, who is the mother
of Sewell Setzer, just sort of seeing how she felt about this. She filed this lawsuit. She's been
becoming more of an advocate for these issues. And she gave me permission to share this. She said,
I'm relieved for the children that will actually lose access to Character.AI because those are
lives that can be saved, even if it's one child. But I can't help but feel cheated. Why did it
take Sewell dying and me taking on this tech company to get them to do this? So I think for Megan, and for, I'm sure, the rest of her family, there is some relief, but also some frustration with, like, why did it take a lawsuit, enormous public pressure, and pressure from regulators around the world to get this company to act?
Yeah, so this announcement has not been universally praised. The Tech
Justice Law Project, which is one of the organizations that brought the lawsuit, sent us a note this
morning. They pointed out that Character AI had not really said how they were going to do age
assurance to make sure that all of the adult users who will continue to get access to these
chatbots actually are adults. They also noted that the company had not addressed, quote,
the possible psychological impact of suddenly disabling access to young users, given the emotional
dependencies that have been created. So I think that is worth saying: we have seen, when other companion bots have removed access, how jarring and painful it can be to the people who use them. And so while I'm glad that Character.AI is going to be sort of ramping these users down, as opposed to simply pulling the plug, I do think it's worth saying: this may be painful for some of their users.
Yeah, I think that's a really good point. I think just because these are not
human relationships doesn't mean that they can't produce pain and grief when people lose their
connections to something that they have grown attached to. So yeah, I think we should be very
sympathetic and empathetic toward people who may be having a hard time now that their chatbots
aren't talking to them. So what kind of impact do you expect this move from Character to have
on the rest of the industry? Not much. I mean, I think Character.AI was a sort of special case. They actually had lost most of their founding executives and leadership. Noam Shazeer and Daniel De Freitas went back to Google and basically sort of left behind
the kind of shell of this company.
So I don't expect that Character AI
is going to recover from this.
I think they were probably already in a state
of losing users and just seeing the
sort of decline of their platform.
So this may be sort of a final nail in the coffin
for them. But I think what has
happened is that the rest of the industry
is now doing what Character AI
used to want to do. I think the ones that we've talked about on the show, OpenAI and Meta, are really pushing into this use case, and I don't know that they're sort of learning the right lessons from what has been happening with Character.AI.
Yeah, I mean, I have to say, I think it's going to have maybe a
bigger impact than you do for this reason. Inevitably, there are going to be congressional hearings
about this, and I think Character AI will probably be there, and they're going to say why they did
this, and then they're going to go over to whoever is there from Meta or OpenAI and say,
why do you guys think this is safer than they do, right?
Why do you, Meta, still have Nasty Nancy available on your platform?
And I think that that's just going to put a really interesting sort of pressure on them,
if for nothing else than to have a response as to how they are justifying what they are doing, right?
And to try to put some sort of case behind it other than everyone else is doing it,
which is an argument I have heard Meta make about why these chatbots are available.
Yeah, and I think one way that the bigger AI companies may respond to that is by saying, well, look, we are just so big now. We have so many users. And
some small percentage of those people are going to experience mental health crises in their
lives, maybe even while using our product. But that number is small relative to the number
of users of our products in total. This is what I would call, like, the prevalence argument,
which we heard a lot from social media companies a decade ago. They would say, oh, yeah,
there is, like, you know, hate speech and toxicity on our platforms, but, like, if you just look at, like, the overall percentage, it's, like, quite small and meaningless. So I just think we'll
start to see a lot more of that kind of argument. You know, this whole discussion, Kevin, ties into
some really interesting research that OpenAI released this week, where they began to map out the scale of the mental health crisis as it can be seen on ChatGPT itself. There are now more than
800 million people a week using the platform. That's a pretty, you know, decent subset of the
population. And while the numbers of people who are having these kind of, you know, disturbing or,
you know, potentially dangerous conversations with ChatGPT are low on a percentage basis, by the company's own estimates, you have 560,000 people a week whose messages to ChatGPT indicate psychosis or mania, 1.2 million people a week who are potentially developing an unhealthy bond to a chatbot, and 1.2 million people who are having conversations that contain, quote, indicators of potential suicidal planning or intent.
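For scale, a quick bit of arithmetic on those figures (the division is ours; the percentages are not stated on the show):

560,000 / 800,000,000 ≈ 0.07% of weekly users
1,200,000 / 800,000,000 ≈ 0.15% of weekly users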
So if you just want to be very cynical about this and think about it only from a legal
liability perspective, if you have more than a million people a week who are developing an
unhealthy bond to your chatbot, who are expressing thoughts of self-harm, think about
the lawsuits that are going to follow, right? I mean, that could be just hugely damaging. So I wonder if the other big labs will look at what Character.AI did this week and decide
maybe we actually should build some of these safeguards faster. Yeah, I don't know. I'm still not
that optimistic. I think that these companies are kind of trapped because they want the engagement
and the depth of connection that people are having with their products. Like any company that
makes technology, wants people to, if not fall in love with it, at least develop like a bond
with it and feel like very connected to it. So they want that, but they don't want the
responsibility for the emotional relationships that people are going to develop with these
systems already are developing in many cases. Yeah, I mean, the main conclusion that I have about
this whole story, Kevin, is just that this story reminds us that nothing is inevitable when it
comes to AI, right? You don't have to build it. You don't have to release it to everyone. You don't
have to make it free. You don't have to decline to build any meaningful guardrails. You can actually
just say, based on what we've seen, we don't think that this is safe, and we are going to take it
off the market. And at a time when everyone has their foot on the gas pedal, and everyone feels like
they're in this all-out existential race to a finish line called AGI, I've been so worried that
companies were going to cut corners when it comes to safety. We've seen it over and over again.
And so while you hate to give too much credit to a company, particularly one like Character.AI, which did, after all, get itself into this mess in the first place, I do think there is
something to be said for saying, we are going to stop the bleeding here. We are actually going to
admit that we don't know how this affects people, and we're going to take it off
the market. Yeah. I mean, I think it's a really overdue but responsible thing that they did. I think
they probably had their hand forced by the lawsuits and the regulators breathing down their necks.
And so I hope that Mark Zuckerberg and Sam Altman and these folks who are building these very
persuasive, compelling chatbot companions are looking at this as a cautionary tale for what can
happen if you don't think about the consequences of what you're building. Kevin, maybe one more thing to
say about this. We're living through a dramatic contraction in the access that teenagers have
to technology online. In the same week that this was announced, YouTube, Snap, TikTok, and Meta have all said that they will abide by a law passed in Australia that will ban kids
under 16 from having a social media account. So you can still look at YouTube, but you cannot
have your own account, you're going to have to use someone else's, and you can't have an
Instagram account, you can't have a TikTok account. So that's something that many American states are trying to bring about, with sort of mixed success based on legal rulings. But I think you look at what Character.AI is doing, you look at what some of the
states here are doing, you look at what Australia is doing. And I think the social media and
AI companies have just lost this argument, right? There is no longer a consensus
that unfettered access to these technologies
is good or healthy or safe for teens
and people are finally starting to do something about that.
Yeah, I think that's right.
But I think the question of how to regulate
these chatbot companions
is only kind of part of what I'm thinking about these days
because I don't think this is actually something
that we're going to be able to regulate our way out of.
I think even if countries do what Australia has done
and ban social media use for kids under 16,
kids are going to find this stuff.
They're going to get access to it.
They're going to think it's compelling.
Some number of them are going to grow emotionally attached to it.
And so I think we do actually have to also address the real possibility that there's really
nothing we can do at a regulatory level to prevent every teenager from forming an emotional
attachment to a chatbot.
I don't know.
I think that sometimes passing rules like this can be the first step toward a society just
changing its relationship with these technologies overall.
You know, when I went to high school, there was still a smoking section indoors at my school, because it was just sort of taken as a given that, yeah, well, you know, you can't stop the seniors from smoking. Of course they're going to smoke their cigarettes.
But, you know, you go back to my high school today. There's no smoking section. And banning it
for teens, I think, was part of a larger movement of telling the adults, hey, this isn't actually
very good for you either, right? And eventually that sort of shifted. So I think there may be some hope that if we can take the next generation and not have their primary relationships be with Character.AI, maybe this doesn't actually change society quite as radically as it otherwise might.
There was a smoking section at your high school. Can you
believe that? That's amazing. What year was that? This was the late 1990s. Now, it was the
1900s, but it was like the very end of them. When we come back, a look inside Elon Musk's new
Wikipedia clone and what it says about Casey's relationship.
Well, Kevin, I was reading about you recently.
Oh, yeah?
Yes.
Let me ask you if this is true, because I read someone say this about you, that when you were growing up, your upbringing fostered an outsider's perspective on religious and cultural fringes shaped by a family dynamic that prioritized open inquiry over doctrinal adherence.
How do you respond to these charges?
How did you get into my therapist's notes?
Believe it or not, Kevin, that's not your therapist.
That's from a little something called Grokipedia.
Oh, boy.
Grokipedia, of course, is the Wikipedia challenger that has been developed by Elon Musk and xAI as part of a huge culture war that has gone on over the world's most popular encyclopedia.
And I thought today we should take a look at this thing and talk about what we think.
So I need you to kind of walk me slowly through this. I am coming to this totally cold, and I want to get into Grokipedia and all the details. But first, I must know, what else does Grokipedia say about me?
Oh, my goodness. Well, first of all, this article is long. There are more than a dozen
subsections about your life, including your books, your New York Times career, and my personal
favorite, notable events and controversies. What are my notable events and controversies?
Well, you'll be happy to know that there is an extended section about your interactions with a certain Bing Sydney chatbot back in 2023.
Never heard of it.
And then there's also criticisms of your AI reporting.
Oh, boy.
Which, uh...
We can save those for...
Save those for our Patreon subscribers.
Yes, exactly.
Check the Patreon if you want to hear those.
So obviously, Grokipedia is part of Elon Musk's AI chatbot Grok.
But what is this project and why did he decide to make his own Wikipedia?
So for well over a year now,
conservatives and the right wing have been fomenting this backlash against Wikipedia,
which they say is biased against conservatives.
That is, of course, a familiar conservative talking point about basically every popular tech
platform on the internet.
In the case of Wikipedia, they're particularly concerned that Wikipedia editors have labeled
a bunch of conservative media as unreliable and therefore ineligible for inclusion as
citations on articles about controversial subjects.
It's a little too Wokepedia, if you know what I'm saying.
I do know what you're saying, and I don't like it.
Because embedded in that critique is the idea that, for example, the Heritage Foundation's blog
posts or Breitbart or Fox News's political coverage deserve to be seen with the same
credibility and fact-checking journalistic rigor as, let's say, Platformer.
Yeah, I feel like this happens kind of every few years, where, like, a group of partisan activists gets very mad at Wikipedia, like, the crown jewel of the internet. Like, every three years, people are just like, it's horrible, it's biased.
We have to destroy it.
Yes, and this particular backlash seemed to get a lot of fuel after the famous Elon Musk Nazi salute incident.
What happened there?
Well, you may remember.
I remember the incident, but what was the controversy surrounding it?
Well, Elon Musk says and has continued to argue across many, many, many posts on X that this was not a Nazi salute.
And now whenever any Democratic politician raises their hand in a vaguely Nazi salute-seeming way, he does a post about it.
But he's very mad about how this controversy was handled on Wikipedia, where there is an entire page devoted to it.
And that appears to be one of the main reasons why Elon Musk says,
we are going to build our own Wikipedia.
It is not going to have these same biases baked into it.
It is going to be maximally truth-seeking to use one of his favorite phrases.
And as of this week, it is now live.
So can I just ask some technical details about Grokipedia?
So is it all written by AI?
Is that the premise here?
Well, certainly Grok seems to have played a starring role in this thing. When you read it, it reads very much like Grok output. But as many writers have noted, including Jay Peters at The Verge, when you do side-by-side comparisons of Wikipedia and Grokipedia, there appears to be just some pure plagiarism. And Grokipedia does acknowledge that it has used large chunks of Wikipedia under Wikipedia's license.
So it seems like maybe Elon Musk and his team have sort of ingested some or all of the regular Wikipedia and just given Grok a prompt that's, like, kind of rewrite this to be more Grok-like?
Yes, and at launch, Grokipedia has more than 800,000 articles. That compares to around 7 million on English-language Wikipedia. So it is a small subset, but presumably that will grow over time.
Right. And can anyone edit the articles
on Grokipedia, as on Wikipedia?
No, you cannot.
What you can do, if you see something
on Grokipedia that you think is wrong,
you can highlight it, and then
a little button will pop up
that lets you click it and you can say this is wrong
and you can sort of make your case.
And that seems like a great way to waste a lot of time
if you have nothing else to do in your life.
Hang on, give me a second.
I have some bones to pick with my Grokipedia entry.
You really haven't looked at yours yet?
No?
Oh, well, of course I had to look at mine.
Yeah, what does yours say?
Well, first of all, I'm very proud of us
that we made the first 800,000 articles
in this encyclopedia, right?
Not easy to do.
I was not one of the first 800,000 entries on Wikipedia.
I'll tell you that much.
So appreciate that.
Wait, I have to go see this for myself.
Go ahead and pull it up.
I want to be intellectually honest here. There are parts about my Grokipedia page that I like. It goes into
way more detail than my
Wikipedia page does.
And I think overall
presents like a pretty good picture of
like who I am and what I have done.
Oh my God, it's so long. It's incredible.
I'll say it. It's too long. Like nobody actually
wants that much information about me. Wait, can I
read the family and relationship section to you?
Yes, please do. Because this is breaking some news
here on this podcast. It says
Newton is married to a
lawyer. Congratulations. I thought your boyfriend worked at Anthropic. Well, he does, and so we have found the first of many mistakes that you will find in Grokipedia. He maintains a low public profile regarding
his personal relationships. Not true. Can't stop talking about it. With no further details on
partnerships or children disclosed in available interviews or profiles. Yeah, so I guess I'll try to
say more about my boyfriend to try to help Grokipedia out this time. Goodreads profile here: Newton exhibits a keen interest in reading...
False. He hasn't read a book in years.
...evidenced by his Goodreads profile cataloging 112 books, with ongoing reads including Cahokia Jazz by Francis Spufford, The Saint of Bright Doors by Vajra Chandrasekera, and others spanning fiction and nonfiction.
Let me give a shout-out to The Saint of Bright Doors, by the way. That is the best book I've read this year. Super, super good.
So, yeah, I mean, there's kind of something a little creepy about it, right? It's like, we're going to go in and look at all of your public profiles and kind of see what we can scrape in. But I do think it, like, winds up putting together a kind of, you know, decent picture of
my life. Now, for the most part, I've been able to stay out of a lot of culture wars and political
controversies. And so, you know, I didn't see anything in there that made me, you know, really
roll my eyes and feel bad. If, though, you are an Elon Musk or a Donald Trump, you may find that you're getting a much friendlier treatment on Grokipedia than you would on Wikipedia.
Okay, so let's talk about the politics of Grokipedia.
Where does it differ meaningfully from standard issue Wikipedia?
So because it is designed as a kind of right-wing alternative, when you pull up articles that have been the subject of a lot of culture warring, you will just find material that is much closer to the conservative or Republican view.
So, for example, if you pull up the article on Donald Trump, there is what I would say is a very friendly view of the events of January 6th, 2021, that sort of goes out of its way to talk about how, you know, Democrats sort of overstated the risk to democracy. So I will say that in some
ways I expected Grokipedia to go further to the right. And, like, you know, do not get me wrong. There is a lot of really racist stuff in Grokipedia. There's a lot of anti-trans stuff in Grokipedia. But, like, as somebody who has spent more than my share of time, like, reading 4chan and, like, r/The_Donald back in the day, the stuff that I'm seeing in Grokipedia is not as bad as that.
What is the strategy here?
Like, what is Elon Musk hoping to accomplish?
Is he hoping that people will, instead of going to Wikipedia to learn about stuff, go to Grokipedia and that we will sort of educate people differently in this country?
Yeah, I think, you know, I read a quote in the New York Times article about the Grokipedia launch where they had some scholar who said, like, ever since people began to study things,
people have wanted to control knowledge and how it is distributed, right? And I think
Grokipedia is just a step in that direction. If you believe that Wikipedia has a chokehold
on the public imagination, if you're concerned that Wikipedia data is being used as a pillar
of most of the big, large language models that we're now using every day, if you wanted to
inject other views into the populace, you might want to create something like Grokipedia.
Yeah, I'm also wondering how much of it has to do with the actual training of Grok, because one thing that we know is that Elon Musk has been frustrated in the past that, despite his best efforts, it keeps sort of hoovering up all this data from the internet and that makes it, he thinks, too liberal.
And so I'm wondering if this is kind of like an effort to give Grok a new kind of substrate of knowledge that it can learn from so that it's not reliant on Wikipedia.
I mean, it's an interesting idea, but I'm not sure how much original work Grok is really doing.
Like, I think that it is almost certainly showing you a wider range of sources than you
might find on Wikipedia or at least a wider range of right-leaning sources.
But it's not as if there are a bunch of, like, you know, right-leaning Grokipedia editors
who are going out there, you know, doing original research or something.
Like, this is very much akin to, like, a deep research report that you might get ChatGPT or Gemini to do for you, except that this time it's Grok.
Right.
So how much of a big deal is this?
Like, are you seeing reactions from people who are scared that this actually will replace
Wikipedia?
Is this just kind of like one of Elon's many passion projects?
You know, so far, I don't know that Grokipedia has made much of a splash in the mainstream, aside from just being a curiosity. If you follow Elon Musk on X and you visit X a lot,
it is something that you've heard a lot about.
And I've seen conservatives and the tech right talking up certain Grokipedia pages, like, aha, this is so much better than Wikipedia.
So it is kind of having that moment right now.
How long does that last?
I don't know.
You know, Wikipedia is one of the very most popular sites on the internet.
And it's going to take a lot to displace that, right?
I think for a lot of people going to Wikipedia is just muscle memory.
So barring some sort of massive leverage that Elon Musk is able to get in distributing
Grokipedia to more people, I think it's probably going to remain more of a curiosity.
Yeah. And do you think this contributes to sort of the fears that people have about the decline
of Wikipedia? Because we've been, you know, talking for years now about how generative AI
chatbots are increasingly people's first step toward learning about a new subject, where maybe before, you know, if you wanted to learn about, I don't know, the Franco-Prussian War, you would have, like, gone to the Wikipedia page for it, but now you might pop open ChatGPT and just ask for information, and it would sort of go out and look at Wikipedia and other sources for you and synthesize it. So is this, like, coming at a time where Wikipedia is already
pretty vulnerable as a result of AI? I think to the extent that Wikipedia is vulnerable,
Grokipedia doesn't really pose much additional threat. I think by far the larger threat to
Wikipedia is what you just said. It is that more and more people are accessing the information
via chatbots, via Google search results.
And the main consequence of that for Wikipedia
is that if Wikipedia can't get you to go to the site,
it also can't get you to contribute.
It can't get you to update articles.
It can't get you to become an editor.
And so the fear is that as traffic to Wikipedia declines,
the quality of the site will decline as well.
And in fact, just this month,
Wikipedia published a blog post in which they said
they are starting to see traffic declines
due to generative AI.
So that's a very real threat to the encyclopedia. Grokipedia, I think, isn't quite that.
Yeah, I mean, whether or not Grokipedia takes off as a product, I think that this larger
effort to delegitimize Wikipedia is quite possibly going to be successful because I think that
people on the right especially have identified that, you know, even though Wikipedia looks
like this sort of canonical thing that is just sort of appearing on the internet, like it is made
by people, a relatively small percentage of people on Wikipedia are actually contributing to it.
And so I think they have recognized that this is another set of refs that they can essentially work: the Wikipedia editors and the moderators who control the rules.
And maybe that is sort of the larger victory that they see as being possible here, is maybe
they can just kind of undermine the sort of longstanding traditions and norms of the Wikipedia
community and get it to behave more like Grokipedia.
We'll see.
I think Wikipedia has been really resilient so far.
You know, there actually have been hearings in Congress about this alleged bias in Wikipedia,
which I find outrageous because Wikipedia should be able to say whatever it wants about
vaccines or January 6th or whatever else, right?
It doesn't have any legal obligation to the federal government to provide one set of
views over another.
but that does get, Kevin, at why, on balance, I'm actually glad that Grokipedia exists.
Why?
Because I think if you see something online
and you get really mad
and you think that there is a better
and smarter view out there,
I think the best thing to do
is to just put it up on the web, right?
Not in every single case.
There's some horrible things
that I wish you wouldn't post on the web.
But look, if you want to have a debate
about January 6th,
go ahead and create a web page, right?
That is my preferred, you know,
resolution to political conversations, as opposed to we are going to start having hearings to
pressure Wikipedia into having one particular political view. So I view Grokipedia, as silly and bad and offensive as it can sometimes be, as still a case of countering speech with more speech.
And I think that that is overall a better way to have a democracy. Yeah. So I understand that
take. But I'm also wondering if the fact that Grokipedia is AI-generated makes it any different. Like, is the answer to, like, speech that you don't like really having a chatbot go out there and write a bunch of slop text for you?
I think that is a good question. And maybe we should sort of
value Grokipedia less than we do Wikipedia for that reason. At the same time, humans are involved in the creating and the shaping of Grokipedia. Right. Like, it seems very unlikely to me that
what we're reading on some of these really high profile entries has not been edited or tweaked by
someone, right? So I think I still see a pretty strong human hand in here, and that's why on
balance, it still does feel like counter speech to me. Wow, that's a very pro-First Amendment
take from you. Yeah. Your lawyer-husband must be so proud. I just think like this inevitably
ends with, like, Elon Musk, like, forcing a state government to, like, teach Grokipedia in schools.
I mean, you joke, but like, that's probably only a half joke.
And I would not be surprised if it does become, like, the curriculum in Texas in 2028.
So we should absolutely keep an eye on that.
And there are places it can go bad.
But like, at the end of the day, one thing about me as an elder millennial is I love
the web.
And so I'm generally in favor of people making websites.
Because look, at the end of the day, the truth is, most websites just get ignored.
But they wind up being useful for like,
some people, right? So look, I'm not going to be visiting
Grokipedia every day, but if the uncle who you're going to have your worst
conversation with at Thanksgiving this year wants to use it, that's fine.
Yeah. I mean, I just, my bigger question is like whether this whole
category of kind of the online encyclopedia is just obsolete, right?
Like, I love Wikipedia as an idea, as an expression of collective knowledge, as what I consider, like, a true gem of the internet.
It's a miracle.
It's a miracle. And I cannot tell you the last time I went to Wikipedia.
Really?
Yes.
Oh, I'm there every day.
Really?
Yes.
So I feel like, you know, I go there now when I need to check something that a chatbot has told me, but I do not really go there as my first stop on any sort of given fact-finding mission, because my consumption has shifted almost entirely to these chatbots and to search engines. We should say, like, I still do use Google on occasion. So I just wonder if, like, there is any future in which Grokipedia, Wikipedia, any of these sites have
a realistic hope of making it, or if they just sort of end up being kind of crammed into the chatbots,
and that becomes people's primary way of finding things out. Well, I think you're onto something real here,
because of all of the products that Elon Musk has launched in the past 10 years,
Grokipedia does seem like by far the least forward-looking, right?
Now, it is possible that you could take the contents of Grokipedia
and find other ways to distribute them,
and maybe that's what he will do.
You know, notably, he is not relying on human contributors
to bring in the knowledge because he just stole most of what was on Wikipedia.
Those humans already did the work for him.
But he can just take the material that he already has and sort of find new things to do with it, update it via conservative media of whichever flavor he likes. And so maybe that's the way that this thing winds up being a little bit more forward-looking than it seems today. What's your Grokipedia take?
Well, let me spend a little more time on it. Let me see what it has to say about the Hard Fork podcast.
Oh, do we have a page?
We don't have a page.
Okay, I hate this thing.
Oh my God. We could do tuning fork, Clark Fork River, or pastry fork.
Let's see what it has to say about pastry fork.
When we come back, why author A.J. Jacobs collected rainwater and foraged food while spending two days with no AI.
Well, Casey, you and I are AI maximalists.
We use this stuff all the time.
But today we're going to talk with someone on the show who went 48 hours without using AI at all.
An unthinkable idea to me.
Suffice to say, an experiment that would not have occurred to the two of us to try.
Yes.
So today on the show, our guest is A.J. Jacobs.
A.J. is a great writer and a former mentor of mine. He was actually my first boss in journalism. I helped him with a book many years ago, about 20 years ago now, God, about the Bible. He is known for these kinds of immersive experiments where he throws himself deeply into a topic. He wrote a book about following the Constitution literally. He wrote a book about following the Bible literally. He's also the host of The Puzzler podcast. And this week, he published an article in The New York Times titled "48 Hours Without A.I.," in which he sets out to live his life with as little AI contact as possible.
Yeah, and, you know, my expectation as I started to read this piece
was that it would be pretty easy to go 48 hours without using AI,
but for reasons that AJ gets into,
it actually winds up being quite difficult.
Yeah, you essentially have to time travel back to the 1800s
to avoid contacting anything that has any form of AI in it.
And I think it's a useful point,
in addition to being a very fun article,
because it does drive home just how intertwined all this stuff is with the way we live our lives today.
And it's not inevitable that it's going to continue getting more intertwined, but I think it's a pretty good bet.
Yeah, so this is a story that begins with AJ forswearing modern electricity and ends with him foraging for food in Central Park.
And I think it's time to bring him in and talk about it.
Yes, his life sort of resembled that show Naked and Afraid, where you have to just kind of find your way out of the forest.
That's what living without AI in the year 2025 is like, according to A.J. Jacobs.
Let's bring him in.
AJ Jacobs, welcome to Hard Fork.
Delighted to be here. Thank you, Kevin. Thank you, Casey.
So you just did this experiment where you went 48 hours without using AI or machine learning.
And I want to talk to you all about that.
But can we just start with the photos at the top of this story?
You are wearing what I would describe as a sort of very loud outfit that has some like red checkered pants and like a paisley flowered print shirt and these like glasses that look kind of like, you know, Elton John or like the ones you get after you get your pupils dilated.
Right. I brought them along.
So why the fit? Explain this.
Well, the premise of the article was, as you said, try not to interact with AI or machine learning for 48 hours.
And one thing I realized quite early on was it's everywhere.
It is everywhere, especially machine learning.
So clothing designers are experimenting with it in terms of designing.
But also, anything on the supply chain is totally machine learning optimized.
They figure out how to route it, how to pack it using machine learning.
So I'm like, well, anything in my closet that's 10 years old or less is probably off limits if I'm really being strict.
But I did have, and deep in my closet, my grandfather's 1970s, Paisley shirt and red and white checkered pants.
And he went through an Austin Power's face.
He was very much a dandy.
And I was like, all right, I got to do it, even though it made me very uncomfortable.
Although my wife said that it was the coolest I had looked since we've been married, which was insulting and also flattering.
I think you got the timing right.
I think it's been so long since those clothes were in fashion that they now actually do look fashionable again.
Yeah.
Yeah.
AJ, let me ask you this.
As you headed into this experiment, what is or was your relationship with AI?
Are you the sort of person who was using generative AI tools like ChatGPT every day, was it a more occasional thing, or where were you on that spectrum?
Yeah, I would say I'm in the middle.
I'm not a Luddite, but I don't like have it controlling my life.
I did use it for research, and it was actually not bad.
I was impressed.
Got it.
So you were not trying to prove that, like, AI is bad.
I just remember some of your other experiments that you've done over the years have included
following the rules of the Bible.
And that was sort of at a time when people were talking about taking the Bible very
literally. And I knew that part of why you were doing this was like a kind of attempt to
say, well, here's what would happen if you just went all the way toward your stated belief.
There was sort of a point in there about the dangers of literalism.
Were you trying to make a similar point about the dangers of AI here?
I did not go in with an axe to grind on this one. It was more, the thesis was, where is
AI hiding? Because I don't believe AI is all good or all bad. I didn't believe that before and
I don't believe it now. I think in some cases, it's awesome. Thank God that machine learning checks
whether there's credit card fraud. But on the other hand, it has huge risks and has divided our
country. So I was not coming in saying it's all good or all bad, just where is it? And also,
what lessons can I learn from spending two days without it?
So let's get into the experiment. I feel like
the biggest choice you had to make at the outset was, how am I going to define what counts
as AI? And you decided to include machine learning. Talk to us a bit about kind of how you set
the boundaries for how you were going to run this thing. Right. All the experts I talked to said
AI is a big umbrella. And you've got generative AI, like ChatGPT, and that's getting all the
heat now. But AI has been around for decades, because the umbrella also covers machine learning. It basically covers these machines that can evolve, that can look at new data and
change. The way I explained it in a paragraph that was cut was that...
I see. This is why I love freelance writers. They harbor so many grudges, and it's great to be able to have a podcast to air them out. I bet you had a better idea for the headline, too, didn't you, AJ?
I know, what is this crap?
No, he was lovely.
It was just a matter of space.
But I said traditional programs are input A yields output B, whereas machine learning is more like a recipe that changes.
So you have a recipe, but then there's data that comes in and the recipe says, oh, people really like sugar.
I'm going to add sugar.
So it evolves.
And the reason I thought it was important to put them both in is because I feel they both have this great potential and great risk.
They both have these unintended consequences.
When you have machines that can change and you can't predict what they're going to do,
that is, as I said, sometimes wonderful.
Sometimes you end up with YouTube algorithms that turn us all into flat earthers.
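A minimal sketch, in Python, of the distinction A.J. is drawing here; the recipe, the sugar ratio, and the ratings data are hypothetical, invented for illustration rather than taken from any real system:

# Traditional program: a fixed rule. Input A always yields output B.
def fixed_recipe(cups_of_flour):
    # The ratio is hard-coded; no amount of feedback ever changes it.
    return 0.5 * cups_of_flour  # cups of sugar

# Machine learning: the "recipe" carries a parameter that is updated
# as new data (diner ratings) comes in, so its behavior evolves.
class LearnedRecipe:
    def __init__(self):
        self.sugar_ratio = 0.5   # starting guess
        self.step = 0.05         # how strongly each piece of feedback nudges the recipe

    def suggest(self, cups_of_flour):
        return self.sugar_ratio * cups_of_flour

    def learn(self, rating_sweeter, rating_current):
        # If sweeter batches rate higher, nudge the ratio up; otherwise, down.
        if rating_sweeter > rating_current:
            self.sugar_ratio += self.step
        else:
            self.sugar_ratio -= self.step

recipe = LearnedRecipe()
for sweeter, current in [(4.5, 3.0), (4.8, 3.5), (4.0, 4.2)]:  # hypothetical ratings
    recipe.learn(sweeter, current)
print(fixed_recipe(2), recipe.suggest(2))
# The fixed rule still outputs 1.0; the learned recipe now suggests 1.1,
# because its ratio drifted to 0.55 after seeing the data. That evolving,
# data-dependent behavior is what makes its outputs hard to predict in advance.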
Right.
So let's talk about some of the things you did on this experiment.
And my sort of characterization of this up top would be that you had to basically become Amish for 48 hours.
You were...
That was my line again.
Did that also get cut out of the story?
That was in there. I said Amish cosplay was one, or Laura Ingalls Wilder was another comparison.
Yes, because pretty much anything that's electric or electronic includes machine learning, including electricity itself.
Because Con Edison uses tons of machine learning to figure out where the demand is going to be.
So, yeah, I had to go Amish.
I did have a solar-powered generator,
so I could plug in a lamp for a while.
But, yeah, it started the moment I woke up.
My iPhone uses facial recognition.
That's AI.
But it goes even further than that.
You know, the iPhone camera uses AI.
Gmail uses machine learning,
even without the new AI features of Google.
And water, that was a surprise because the New York Reservoir system uses machine learning to help.
They want to make clear that humans make the final decisions because they don't want people to freak out.
But the machine learning helps them figure out where is the demand and when should we make repairs.
So how did you stay hydrated for two days?
Well, I did plan ahead, which maybe was a little bit of cheating, but I put a bowl out, or several bowls, on my windowsill in the weeks before to collect rainwater.
And I didn't get Giardia or anything, so I feel lucky.
How much rainwater were you able to collect before you started the experiment?
Well, that's why I had several bowls, and it was weeks.
So, yeah, it was not ideal.
It was not ideal.
So there's another piece of this experiment that I wanted to ask you about,
which is that you had to forage for a meal in Central Park.
Now, I was not under the impression that food itself was generated by AI,
but what am I missing here?
Well, of course, it all depends how you define it.
I mean, food is really intertwined with AI.
Industrial farms use AI and machine learning for figuring out whether to water the crops, when to plant them.
Food is, of course, shipped along the supply chain, which is AI optimized.
So if I'm being really strict, which, of course, I was,
I was like, well, maybe I can't eat anything from the grocery.
So to be super safe, I found a video by a man named Wildman Steve Brill, and he teaches you how to find edible food in Central Park.
So I took him up on that, and I went foraging, and I got some of what are called plantain weeds in Central Park,
which I ate, and not great, not great, they taste like dirt, but they didn't kill me.
They didn't kill me.
That's that park-to-table cuisine that is so popular in New York these days.
They're so ahead of the curve over there.
I ate some plantain weeds from a guy in Central Park once, and I saw the fifth dimension.
That's right. I've seen those plantain weed dispensaries that are popping up all over the city lately.
Now, Kevin, as AJ is describing his experience of depriving himself of so much AI and so much technology,
I'm wondering how that lands on you. Because I think if I were to come to you and say, hey, all, like, generative AI services are going to be down for the next 10 minutes, I think, you know, your heart would seize up. You'd start having palpitations. I'd see a cold sweat running down your forehead.
So was this a challenging story for you to read?
You know, it was, and I was thinking, how would I do at this? And then I was thinking, this would probably be good for me. I should probably do this, like, every couple months. I should do 48 hours of no technology at all.
But, I mean, we've talked on the show.
You've gotten to the point where you'll put your phone in a prison, you know, overnight if you feel like you're developing too strong of an attachment to it.
But, you know, at the same time, I have never seen you run an experiment quite like this.
Yeah, this is true.
I have taken breaks from technology.
There was a period where I was doing this, like, digital Sabbath thing where, like, one day a week, I would try not to use my phone or send any emails.
But that was too hard, so I quit that.
But, AJ, we should continue talking about this experiment, because I want to get to the takeaways here.
So you have this 48-hour period
where you're not using AI,
and then, like,
what is your emotion upon reaching the end of this period
and being able to use this stuff again?
Well, I had a lot of mixed emotions.
I mean, on the one hand,
there was the relief of being cut off.
Like, you've talked about it, the digital detox.
There was also annoyance.
I mean, it is super annoying. You know, you can't Google. You can't Google. My Encyclopedia Britannica is not up to date. On the other hand, it was terrifying. That was another feeling I had, because I realized how omnipresent AI is. And as I've said, it's not all bad, but it has these huge risks. So it was really a mixture of emotions, which maybe is the right way to react to AI. It's not a monolith. It's not black and white. It's super, super confusing.
How much easier would this experiment have been if you had limited it to generative AI?
If you'd said, okay, machine learning is fine. That's, like, pretty well just kind of integrated into all services, as you, you know, reported out here. But what if you had just sort of said, well, I'm not going to use ChatGPT and other sort of generative AI services?
I think it would have been easier, for now. In five years, I think that line will be erased. I also think it would have been hard to research, because, as I confess in the article, I used a ton of ChatGPT to research this article.
And another takeaway was how to use ChatGPT, because ChatGPT sensed the thesis of my article.
It knew I wanted to find machine learning and AI everywhere.
So it was like serving me up these half truths.
And I had to give it some tough love and say,
ChatGPT, pretend I've got the opposite thesis.
I don't want AI and ML to be anywhere.
Tell me now, what are the reliable sources?
Because, yeah, as you know, it's just, you know, an obsequious machine.
Yeah.
I mean, I think if there's one takeaway from your piece, for me,
it's that like the line between sort of classical AI or machine learning and generative AI
is like thin and getting thinner.
Agreed.
Right now, people are very angry at generative AI. They say, oh, it takes all this electricity, it uses all this water, these companies
are sort of foisting it on to us. So I think there will be people who read this article and say,
well, he's making sort of this inevitableist argument that, like, there's nothing we can do,
and we have to, like, live in this world, and it's, you know, it's too late to sort of turn back
the tide. And I think what I came away feeling from your article was that, yeah, in five
years, the sort of difference between generative AI and classical AI may be so small as to
be invisible. And we will just sort of think of this stuff as being on a continuum that starts with
like Netflix recommendations and image recognition and self-driving cars and like goes through
chatbots and all that other stuff.
Well, I mean, there's, like, an old joke in the tech industry that AI is just whatever we call whatever the computer can't do yet.
Right. Right. Like, we've just sort of been on this, like, advancing frontier forever, and, yeah, it keeps being able to do more stuff, right?
Can I just add one thing about the inevitableness part? Because I don't want that to be the takeaway. I don't want people to give up. I feel we need more transparency. I love the law that you were talking about, the California law about watermarking AI images. We need more transparency on what's AI and what is AI generated. I'm in favor of more regulation. I mean, it is such a powerful technology.
And I also want more control over my algorithms. I hate that Facebook has so much control.
Maybe there's a way to go in and make it show me articles that I disagree with. Maybe,
but it would take me days to figure it out. So there are things we could do. It's not inevitable.
We've got to take action, because we are somewhat in control of where AI and ML are going to take us.
I mean, I guess I'm curious what you think, AJ,
as someone who has spent a lot of time thinking
and writing about religion as well as AI,
how far that comparison holds.
Because I often tell people that being in San Francisco
in the AI world in 2025 feels a little like being in the Protestant Reformation.
You know, you've got all these cults and these groups
and these people, you know, handing out pamphlets,
declaring that the end is near and trying to recruit you to their movement.
And it just feels like kind of this great blossoming of these ideas about the future and the end of the world.
I also know that the last time we had a big industrial revolution, there were like a lot of weird cults and utopian communities and sort of people who opted out of the new technology.
I'm curious. Like, I understand this was sort of a joke or a stunt or a piece meant to illuminate some larger point.
But I'm wondering if you think there will be people who actually choose to live like this
because they will just see all of this AI.
And instead of having the reaction that you had, which is like, I want to dive in and
like investigate this, they will just be like, screw it, I'm out.
I'm going to my haven in the woods and I'm going to turn off all my devices and I'm
going to live like it's 1870.
Yeah, I think you will have that.
You will have some of that, a big Luddite movement,
and I can understand it because it is scary.
As to the religion metaphor, I think it is a good one.
I think there is a lot of overlap in this sense of destiny, that AI is destined to create heaven on Earth or even replace us.
I would say one difference between religion and science is the idea that science can be falsified.
So my hope is that people in the AI industry keep an open mind and look for falsification, look for examples of where AI is actually not doing good, and try to adjust to that so that it doesn't become a religion.
Well, AJ, thanks so much for coming. Thanks for doing this experiment. The piece is called "48 Hours Without A.I.," and you can read it at The New York Times.
Very scary story to read on Halloween.
Oh, good point.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
This episode was fact-checked by Will Peischel
and was engineered by Katie McMurran.
Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, and Chris Schott.
You can watch this whole episode on YouTube at YouTube.com
slash Hard Fork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com with the error you found on your Grokipedia page.
Thank you.
