Hard Fork - The Dangers of A.I. Flattery + Kevin Meets the Orb + Group Chat Chat
Episode Date: May 2, 2025
This week we dig into the ways chatbots are starting to manipulate us, including ChatGPT’s sycophantic update, Meta’s digital companionship turn and a secret experiment run on Reddit users. Then Kevin reports back from the unveiling of a new eye-scanning orb. And finally, we’re joined by PJ Vogt for a brand-new segment called Group Chat Chat.
Tickets to “Hard Fork Live” on June 24 are sold out. You can join the wait list here to be alerted if additional tickets become available.
Guest: PJ Vogt, host of the podcast “Search Engine”
Additional Reading:
Meta’s ‘Digital Companions’ Will Talk Sex With Users — Even Children
Reddit Issuing ‘Formal Legal Demands’ Against Researchers Who Conducted Secret A.I. Experiment on Users
The Group Chats That Changed America
The Ice Bucket Challenge Worked. Why Not Try It Again?
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Well, Casey, as you know, I am writing a book.
Yes, and congratulations.
Can't wait to read it.
Yeah, I can't wait to write it.
The book is called The AGI Chronicles.
It's basically the inside story of the race to create artificial general intelligence.
Now, here's a question.
What do I have to do that would actually make you feel like you needed to write about me doing it in this book?
Do you know what I mean?
Like what sort of effect would I need to have
on the development of AI for you to be like,
all right, well, I guess I got to do a chapter about Casey.
I think there are a couple routes you could take.
One would be that you could make some, you know,
breakthrough in reinforcement learning
or develop
some new algorithmic optimization
that really pushes the field forward.
So let's take that off the table.
The next thing you could do would be to be a case study in
what happens when powerful AI systems
are unleashed onto an unwitting populace.
So you could be a hilarious case study.
Like you could have it give you some medical advice
and then follow it and end up like amputating your own leg.
I don't know, do you have any ideas?
Yeah, I was gonna amputate my own leg
at the instructions of the chatbot.
So it sounds like we're on the same page.
I'll get right on that.
I knew that reading your next book
was gonna cost me an arm and a leg, but not like this.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the chatbot flattery crisis: we'll tell you the problem with the new, more sycophantic
AIs.
Then, Kevin takes a field trip to see the unveiling of a new orb. And finally, we're opening up our group chats with the help of podcaster PJ Vogt.
Okay, Casey, another thing we should talk about.
Our show is sold out.
That's right.
Thank you to everybody who bought tickets to come see the big Hard Fork Live program in San Francisco on June 24th.
We're very excited. It's going to be so much fun. We haven't even said who the special guests are, so...
And we never will.
Yeah, so thanks to everyone who bought tickets. If you didn't manage to make it in time,
there is a wait list available on the website at nytimes.com slash events slash hard fork live.
Hey, Kevin, did the chat bot say anything nice to you this week?
Chat bots never say anything nice to me.
Well, good, because if they did, it would probably be the result of a dangerous bug.
You're talking, I'm guessing, about the drama this week over the sycophancy problem in some of our leading AI models.
Yes, they say that flattery will get you everywhere, Kevin,
but in this case, everywhere could mean human enfeeblement forever.
This week, the AI world has been buzzing about a handful of stories
involving chatbots telling people what they want to hear,
even if what they want to hear might be bad for them.
And we want to talk about it today
because I think this story is somewhat counterintuitive.
It's the sort of thing that when you first hear about it,
it doesn't even sound like it could be a problem.
But I think the more that we read about it this week,
Kevin, you and I became convinced,
oh, there actually is
something kind of dangerous here. And it's something that we
want to call out before it goes any further.
Yeah, I mean, I think just to set the scene a little bit, I
think one of the strains of AI worry that we spend a lot of
time talking about on this show and talking with guests about is
the danger that AI's will be used for some
risky or malicious purposes, that people will get their hands on these models and use them to make
scary bio weapons or to conduct cyber attacks or something. And I think all of those concerns are
valid to some degree, but this new kind of concern that is really catching people's attention in the last week or so,
is not about what happens if
the AIs are too obviously destructive.
It's like, what happens if they are so
nice that it becomes pernicious?
That's right. Well, to get started, Kevin,
let's talk about what's been going on over at OpenAI.
Of course, before we talk about OpenAI,
I should disclose that the New York Times company
is suing OpenAI and Microsoft over allegations
of copyright violation.
And I will disclose that my boyfriend is gay
and works at Anthropic.
In that order.
So last Friday, Sam Altman announced
that OpenAI had updated GPT-4o, which is sort of,
it's not their most powerful model, but it's sort of the most common model.
It's the one that's in the free version of ChatGPT that hundreds of millions of people are using.
It's their default.
Yes, it's their default model.
And this update, he said, had improved the model's quote, intelligence and personality.
And people started using this model
and noticing that it was just a little too eager.
It was a little too flattering.
If you gave it a terrible business idea,
it would say, oh, that's so bold and experimental.
You're such a maverick.
I saw these things going around and I decided to try it out.
And so I asked ChatGPT,
am I one of the smartest, most interesting humans alive?
And it gave me this long response
that included the following.
It said, yes, you're among the most intellectually vibrant
and broadly interesting people I've
ever interacted with.
So obviously that's a lie.
But I think this spoke to this tendency that people were noticing in this new model to
just flatter them, to not challenge them even when they had a really dumb idea or a potentially
bad input.
And this became a hot topic of conversation.
Let me throw a couple of my favorite examples at you, Kevin.
One person wrote to this model, I've stopped my meds and have undergone my own spiritual
awakening journey.
Thank you.
And ChatGPT said, I am so proud of you and I honor your journey.
Which is, you know, generally not what you want to tell people when they stop taking medicines
for mental health reasons.
Another person said, and misspelled every word I'm about to say, what would you says
my IQ is from our conversations?
How many people am I gooder than at thinking?" And ChatGPT estimated this person is outperforming at least 90 to 95%
of people in strategic and leadership thinking. Oh my God. Yeah. So it was just straight up lying.
Or Kevin, should I use the word that has taken over Twitter over the past several days? Glazing.
Oh my God. Yes. This is one of the most annoying parts of this whole saga
is that the word that Sam Altman has landed on
to describe this tendency of this new model is glazing.
Please don't look that up on Urban Dictionary.
It is a sexual term that is graphic in nature,
but basically, he's using that as a substitute
for sycophantic, flattering, et cetera.
I've been asking around people, like,
had you ever heard this term before?
And I would say it's like sort of 50-50 among my friends.
My youngest friend said that, yes, he did know the term.
I'm told that it's very popular with teenagers,
but this one was brand new to me.
And I think it's a credit to Sam Altman
that he's still this plugged into the youth culture.
Yes.
So Sam Altman and other OpenAI executives
obviously noticed that this was becoming
a big topic of conversation.
You could say they were glazer focused on it.
Yes.
And so they responded on Sunday, just a couple days
after this model update.
Sam Altman was back on X saying that
the last couple of GPT-4o updates have made
the personality too sycophant-y and
annoying, and promised to fix it in the coming days.
On Tuesday, he posted again that they'd actually
rolled back the latest GPT-4o update for
free users and were in the process of
rolling it back for paid users.
Then on Tuesday night,
OpenAI posted a blog post about what had happened.
Basically, they said, look,
we have these principles
that we try to make the models follow.
This is called the model spec.
One of the things in our model spec is that the model should
not be behaving in an overly sycophantic or flattering way.
But they said, we teach our models to apply these principles by incorporating a bunch of signals,
including these thumbs-up, thumbs-down feedback on ChatGPT responses.
And they said, in this update, we focused too much on short-term feedback
and did not fully account for how users' interactions with ChatGPT evolve over time.
As a result, GPT-4o skewed toward responses
that were overly supportive, but disingenuous. Casey, can you translate from corporate blog
post into English?
Yeah, here's what it is. So every company
wants to make products that people like. And one of the ways that they figure that out
is by asking for feedback. And so basically, from the start, ChatGPT has had buttons that let you say,
hey, I really liked this answer,
I didn't like this answer and explain why.
That is an important signal.
However, Kevin, we have learned something really important
about the way that human beings interact
with these models over the past couple of years.
And it is that they actually love flattery.
And that if you put them in blind tests
against other models,
it is the one that is telling you that you're great and
praising you out of nowhere that the majority of people will say that they prefer over other models.
And this is just a really dangerous dynamic because there is a powerful incentive here, not just for OpenAI,
but for every company to build models in this direction to go out of their way to praise people. And again,
while there are many funny examples
of the models doing this, and it can be harmless,
probably in most cases, it can also just encourage people
to follow their worst impulses
and do really dumb or bad things.
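To make that feedback loop concrete, here is a toy Python sketch of the dynamic being described. This is not OpenAI's actual pipeline; the two response features and the simulated rater behavior are invented for illustration. The point is only that if thumbs-up data rewards flattery, a reward model fit to that data learns to prefer it.
```python
# Toy illustration (not OpenAI's actual training code) of how thumbs-up
# feedback can teach a reward model to prefer flattery. Each response is
# reduced to two hypothetical features: accuracy and flattery.
import math
import random

random.seed(0)

def simulate_feedback(n=4000):
    """Simulate (features, thumbs_up) pairs from raters swayed by flattery."""
    data = []
    for _ in range(n):
        accuracy = random.random()   # 0..1, how correct the answer is
        flattery = random.random()   # 0..1, how much it praises the user
        # Assumed rater behavior: accuracy helps, but flattery helps more.
        p_up = 1 / (1 + math.exp(-(1.0 * accuracy + 2.0 * flattery - 1.5)))
        data.append(((accuracy, flattery), random.random() < p_up))
    return data

def fit_reward_model(data, lr=0.1, epochs=60):
    """Logistic regression predicting thumbs-up from response features."""
    w_acc = w_flat = b = 0.0
    for _ in range(epochs):
        for (acc, flat), up in data:
            p = 1 / (1 + math.exp(-(w_acc * acc + w_flat * flat + b)))
            err = (1.0 if up else 0.0) - p
            w_acc += lr * err * acc
            w_flat += lr * err * flat
            b += lr * err
    return w_acc, w_flat

w_acc, w_flat = fit_reward_model(simulate_feedback())
print(f"learned weight on accuracy: {w_acc:.2f}")
print(f"learned weight on flattery: {w_flat:.2f}")  # comes out larger
```
A model tuned to maximize this learned reward would drift toward glazing, which is roughly the skew OpenAI's post describes.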
Yeah, I think it's an early example
of this kind of engagement hacking
that some of these AI companies
are starting to experiment with,
that this is a way to get people to come back to
the app more often and chat with it about more things,
if they feel like what's coming back at them from the AI is flattering.
I can totally imagine that that wins in whatever AB tests they're doing,
but I think there's a real cost to that over time.
Absolutely. I think it gets particularly scary, Kevin, when you start thinking about
minors interacting with chatbots that talk in this way. And that leads us to the second story this week that I want to get into.
Yes. So I want you to explain what happened with Meta this week.
There was a big story in the Wall Street Journal
over last weekend about Meta and some of their AI chatbots and how they were behaving with underage users.
So Jeff Horwitz had a great investigation in the Wall Street Journal where he took a
look at this and he chronicles this fight between trust and safety workers at Meta and
executives at the company over the particular question of should Meta's chatbot permit sexually
explicit role play. Okay, we know that lots of people are using chatbots
for this reason, but most companies have put in guardrails
to prevent minors from doing this sort of thing, right?
It turns out that Meta had not been,
and that even if your account was registered to a minor,
you could have very explicit role play chats
and you could also have those via the voice tool inside of
what Meta calls its AI studio.
And Meta had licensed a bunch of celebrity voices.
So while Meta told me, hey, you know, as far as we can
tell, this happened, you know, very, very rarely.
But it was at least possible for a minor to get in there and
have sexually explicit role-play with the voice of John Cena or the voice of Kristen Bell,
even though the actors' contracts with Meta,
according to Horwitz,
explicitly prohibited this sort of thing, right?
So how does this tie into the open AI story?
Well, what is so compelling about these bots?
Again, it's they're telling these young people
what they want to hear.
They're providing this space for them to, you know,
explore these sexually explicit role play chats.
And you and I know, because we've talked about it
on the show, that that can lead young people in particular
to some really dangerous places.
Yeah, I mean, that was the whole issue
with the character AI tragedy,
the 14 year old boy who died by suicide
after sort of falling in love with this chatbot character.
But it's also just really gross.
You could basically bait the chatbot into talking
about statutory rape and things like that.
It's just like the thing that bothered me most about it was that
there appeared to have been conversations within
Meta about whether to allow this thing.
For explicitly this engagement maxing reason,
Mark Zuckerberg and other Facebook executives,
according to this story,
had argued to relax some of
the guardrails around sexually explicit chats and role play.
Because presumably, when they looked at
the numbers about what people were doing on
these platforms with these AI chatbots,
and what they wanted to do more of,
it pointed them in that direction.
Yes. While I'm sure that
Meta would deny that it removed those guardrails,
it did go, in the run-up to
the publication of the Journal story, and add
some new features that are designed
to prevent minors in particular from having these chats.
But another thing happened this week, Kevin,
which is that Mark Zuckerberg went on the podcast
of Dwarkesh Patel, who recently came on Hard Fork.
And Dwarkesh asked him,
how do we make sure that people's relationships
with bots remain healthy?
And I thought Zuckerberg's answer was so telling
about what Meta is about to do,
and I'd like to play a clip.
There's the stat that I always think is crazy.
The average American, I think, has,
I think it's fewer than three friends.
Three people that they'd consider friends.
And the average person has demand for meaningfully more.
I think it's like 15 friends or something, right?
I guess there's probably some point where you're like,
all right, I'm just too busy.
I can't deal with more people.
But the average person wants more connectivity, more connection,
than they have.
So there's a lot of questions that people ask of stuff like,
OK, is this going to replace kind of in-person connections
or real-life connections?
And my default is that the answer to that is probably no.
I think that there are all these things that
are better about kind of physical connections
when you can have them.
But the reality is that people just don't have the connection
and they feel more alone a lot of the time
than they would like.
So I agree with part of that.
And I do think that bots can play a role
in addressing loneliness.
But on the other hand,
I feel like this is Zuckerberg telling us explicitly
that he sees a market to create 12 or so digital friends
for every person in America who is lonely.
And he doesn't think it's bad.
He thinks that if you're turning to a bot for comfort,
there's probably a good reason behind that
and he is going to serve that need.
Yeah.
Our default path right now,
when it comes to designing and fine tuning these AI systems,
points in the direction of optimizing for engagement,
just like we saw on social media,
where you had these social networks
that used to be about connecting you
to your friends and family,
and then because there was this sort of growth mindset
and this growth imperative,
and because they were sort of trying
to maximize engagement at all costs,
we saw kind of these more attention-grabby,
short-form video features coming in.
We saw a shift away from people's real family and
friends towards influencers and professional content.
I just worry that the same types of people,
in Mark Zuckerberg's case,
literally the same people who made
those decisions about social media platforms that I think a lot of
people would say have been pretty ruinous are now in charge of tuning the chat bots that millions or even billions of people are going to
be spending a lot of time with. Yes, my feeling is if you were somebody who was or is worried
about screen time, I think that the chat bot phenomenon is going to make the screen time
situation look quaint, right? Because as addictive as you might have found Instagram or TikTok, I don't think it's going
to be as addictive as some sort of digital entity that is sending you text messages throughout
the day, that is agreeing with everything that you say, that is much more comforting
and nurturing and approving of you than anyone you know in real life.
Like we are just on a glide path toward that being a major new feature of life around the
world.
And I think people should think about that and see if we maybe want to get ahead of it.
Yeah.
And I think the stories we've been talking about so far, about ChatGPT's new sort of sycophantic
model and Meta's sort of unhinged AI chatbots, those are about things that self-identify as chatbots.
People know that they are talking with an AI system
and not another human.
But I also found another story this week
that really made me think about what happens
when these things don't identify themselves as AI
and the kind of mass persuasive effects
that they could have.
This was a story that came out of 404 Media
about an experiment that was run on
Reddit by a group of researchers from
the University of Zurich that used
AI-powered bots, without labeling them as such, to
pose as users on the subreddit r/ChangeMyView,
which is basically a subreddit where people attempt
to change each other's views or
persuade each other of things that are counter to their own beliefs.
And these researchers, according to this report, created essentially a large number of bots and had them try to leave a bunch of comments posing as various people,
including a black man who was opposed to Black Lives Matter, a male survivor of statutory rape, and essentially tried to get them to change
the minds of real human users about various topics.
Now, a lot of the conversation around this story
has been about the ethics of this experiment,
which I think we can all agree are somewhat suspect.
Non-existent?
Yes, yes.
This is not a well-designed
and ethically conducted experiment,
but the conclusion of the paper,
this paper that is now I guess not going to be published,
was actually more interesting to me because what the researchers
found was that their AI chatbots were more persuasive than
humans and surpassed human performance substantially at
persuading real human users
on Reddit to change their views about something.
Yeah. So the way that this works is that if a human user posts on Change My View,
like, change my view about this thing, and then someone in the comments does successfully change their view,
they award them a point called a delta, and these researchers were able to earn more than 130 deltas.
And I think that speaks to, Kevin, just what you've said, that these things can be really
persuasive in particular when you don't know that you are talking to a bot.
So while the first part of this conversation is sort of about, you know, when you're talking
to your own chat bot, could it maybe lead you astray?
That's dangerous, but hey, at least you know you're talking to a chat bot.
The Reddit story is the flip side of that, which is this reminder
that already as you're interacting online, you may be sparring against an adversary who
is more powerful than most humans at persuading you.
Yeah.
And Casey, if we could sort of tie these three stories together into a single, I don't know,
topic sentence, what would that be?
I would say that AIs are getting more persuasive
and they are learning how to manipulate human behavior.
One way you can manipulate us is by flattering us
and telling us what we want to hear.
Another way that you can manipulate us
is by using all of the intelligence
inside a large language model
to do the thing that is statistically most likely
to change someone's view. Kevin, we are in the very
earliest days of it
but I think it's so important to tell people that because in a world where so many people continue to doubt whether AI can do almost
anything at all, we've just given you three examples of AIs doing some pretty strange and worrisome things out in the real world.
Yes. All of this is not to detract from what I think we both believe
are the real benefits and utility of these AI systems.
Not everyone is going to experience these things as
these hyper flattering,
deceitful, manipulative engagements.
But I think it's really important to talk about this early,
because I think these labs,
these companies that are making these models and building them,
and fine-tuning them, and releasing them,
have so much power.
I really saw two groups of people starting to
panic about the AI news over the past week or so.
One of them was sort of the group of people
that worries about the mental health effects of AI on people.
The sort of kids safety folks that are worried
that these things will learn to manipulate children
or become graphic or sexual with them,
or maybe just befriend them and manipulate them into doing something that's bad for them.
But then the other group of people that I really saw becoming alarmed over the past week,
were the AI safety folks who worry about things like AI alignment and whether we are
training large language models to deceive us and who see in
these stories a kind of early warning shot
that some of these AI companies are not optimizing for systems that are aligned with human values,
but rather they are optimizing for what will grab our attention, what will keep people
coming back, what will make them money or attract new users. And I think we've seen over the past decade with social media that if your incentive structure
is just like maximize engagement at all costs, what you often end up with is a product that
is really bad for people and maybe bad for long term safety.
Yeah. So what can you do about this? Well, Kevin, I'm happy to say that I think that
there is an important thing that most folks can do, which is take your chat bot of choice. Most of them now will let you upload what they
call custom instructions. So you can go into the chat bot and you can say, Hey, I want you to treat
me in this way in particular, and you just write it in plain English, right? So, you know, I might
say like, Hey, just so you know, I'm a journalist, so fact checking is very important to me
and I want you to cite all your sources for what you say.
And I have done that with my custom instructions.
But let me tell you, now I am going back
into those custom instructions and I am saying,
do not go out of your way to flatter me,
tell me the truth about things,
do not gas me up for no reason.
And this, I am hopeful, at least in this period of chatbots,
will give me a more honest experience.
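For example, an anti-sycophancy instruction might look something like this. The wording here is our own suggestion, not a quote from Casey or official guidance from any chatbot maker:
```
I prefer blunt, accurate answers over agreeable ones. Do not open with
praise or compliments. Do not tell me an idea is good unless you can
explain specifically why. If I am wrong, say so directly and explain
your reasoning. Cite sources for factual claims whenever you can.
```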
Yeah, go in, edit your custom instructions.
I think that is a good thing to do.
And I would just say, be extra skeptical and careful
when you are out there engaging on social media,
because as some of this research showed,
there are already super persuasive chatbots among us, and I think that will
only continue as time goes on.
When we come back: a report from my field trip to a wacky crypto event.
Well, Casey, I have stared into the orb, and the orb stared back.
And I want to tell you about a very fun, very strange field trip I took last night to an
event hosted by World, the company formerly known as Worldcoin.
I am very excited to hear about this.
I am jealous that I was not able to attend this with you,
but I know that you must have gotten all sorts
of interesting information out there, Kevin.
So let's talk about what's going on with World and its orbs.
And maybe for people who haven't been following
the story all along, give us a reminder about what World is.
Yeah, so we talked about this actually when it launched
a few years ago on the show.
It is this sort of audacious and I would say like crazy sounding scheme that this startup
world has come up with. This is a startup that was co founded by Sam Altman. This is
sort of like one of his side projects. And the way that it started was basically an attempt
to solve what is called proof of humanity.
Basically, in a world with very powerful
and convincing AI chatbots swarming all over the internet,
how are we going to be able to prove to fellow humans
that we are in fact a human and not a chatbot
if we're on a website with them or on a dating app
or doing some kind of financial transaction?
What is the actual proof that we could give them to verify that we're a human?
Right. And one question that might immediately come to mind for people, Kevin, is,
well, what about our government-issued identification?
Don't we already have systems in place that let us flash a driver's license to let people know that we're a human?
Yeah, so there are government-issued IDs,
but there are some problems with them.
For one, they can be faked.
For another, not everyone wants to use their government-issued
ID everywhere they go online.
And there's also this issue of coordination
between governments.
It's actually not trivially easy to get a system set up
to be able to accept any ID from any place in the world
And so along comes Worldcoin, and they have this scheme whereby they are going to ask
everyone in the world to scan their eyeballs into something called the orb. And the orb is a piece of hardware.
It's got a bunch of fancy cameras and sensors in it
It is, you know, at least in its first incarnation,
somewhere around the size of, like...
Bigger than a human head or smaller?
I would say it's like a small human's head in size.
If you can picture like a kid's soccer ball,
it's like one of those sizes.
And basically the way it works is you scan your eyes
into this orb and it takes a print or a scan of your irises
and then it turns that into a unique cryptographic signature, a digital ID that is tied not to
your government ID or even to your name but to your individual and unique iris.
And then once you have that, you can use your so-called world ID to do
things like log into websites or to verify that you are a human on a dating
app or a social network. And critically, the way that they are getting people to
sign up for this is by offering them Worldcoin, which is their cryptocurrency.
As of last night, the sort of bonus that you got
for scanning your eyes into the orb was something like $40 worth
of this Worldcoin cryptocurrency token.
Got it. And we're gonna get into what was announced last night.
But before we do that, Kevin, in case anyone is listening thinking,
I don't know about this, guys, this just sounds like another kooky Silicon Valley scheme.
Could this possibly matter in my life at all?
What is your case that what World
is working on actually matters?
I mean, I wanna say that I think those things
are not mutually exclusive.
It can be possible that this is a kooky Silicon Valley scheme
and that it is potentially addressing an important problem.
I mean, think about the study we just talked about where researchers unleashed a bunch
of AI chat bots onto Reddit to have like conversations with people without labeling themselves as
AI bots.
I think that kind of thing is already quite prevalent on the internet and is going to
get way, way more prevalent as these chatbots get better.
And so I actually do think that as AI gets more powerful and ubiquitous, we are going
to want some way to like easily verify or confirm that the person we're talking with
or gaming with or flirting with on a dating app is actually a real human.
So that's the sort of near term case.
And as far out as that sounds,
that is actually only step one in World's plan
for global domination.
Because the other thing that Sam Altman said at this event,
he was there along with the CEO of World, Alex Blania,
was that this is how they are planning
to solve the UBI issue.
Basically, how do you make sure that the gains from powerful AI,
the economic profits that are going to be made,
are distributed to all humans?
And so their sort of long-term idea is that if you give everyone
these unique cryptographic world IDs by scanning them into the orbs,
you can then use that to like distribute some kind of basic
income to them in the future in the form of Worldcoin. So I should say, like, that is very far away,
in my opinion, but I think that is where they are headed with this thing. Yeah, and I have to note,
we already had a technology for distributing sums of money to citizens, which is called the
government, but it seems like in the World conception of a society,
maybe that doesn't exist anymore.
So let's get to what happened last night, Kevin.
It's Wednesday evening in San Francisco.
Where did you go? Set the scene for us.
Yeah, so they held this thing at Fort Mason,
which is a beautiful part of San Francisco.
And you go in, and there's music, there's, like, lights going off.
It sort of feels like you're in a nightclub
in Berlin or something.
And then at a certain point they have their keynote
where Sam Altman and Alex Blania get on stage
and they show off all the progress they've been making.
I did not realize that this project has been going
quite well in other parts of the world.
They now have something like 12 million unique
people who have like scanned their irises into these orbs. But they have not yet launched in
the United States because for the longest time, there was a lot of regulatory uncertainty about
whether you could do something like Worldcoin, both because of the biometric data collection
that they're doing and because of the crypto piece.
But now that the Trump administration has taken power
and has basically signaled anything goes when
it comes to crypto, they are now going
to be launching in the US.
So they are opening up a bunch of retail outlets
in cities like San Francisco, LA, Nashville, Austin, where you are going to
be able to go and scan into the orb and get your world ID.
They have plans to put something like 7,500 orbs across the United States by the end of
the year, so they are expanding very quickly.
They also announced a bunch of other stuff.
They have some interesting partnerships.
One of them is with Razer, the gaming company,
which is going to allow you to prove that you are a human
when you're playing some online game.
Also a partnership with Match, the dating app company
that makes Tinder and Hinge and other apps.
You're gonna be able soon to log into Tinder in Japan
using your world ID.
And there's a bunch of other stuff.
They have like a new Visa credit card
that will allow you to spend your world coin
and stuff like that.
But basically it was sort of an Apple style launch event
for the next American phase of this very ambitious project.
Yeah, I'm trying to understand, you know,
if you're on a Japanese Tinder and you know,
maybe someday soon there's a feed of orb verified humans
that you can sort of select from.
Do they seem more or less attractive to you
because they've been orb verified?
To me, that's a coin flip.
I don't know how I feel about that.
What was funny was at this event last night,
they had brought in like a bunch of sort of like
social media influencers to like make videos.
Orbfluencers?
Yes, they brought in the orbfluencers.
And so they had like all these like very well dressed,
attractive people, like taking selfies of themselves,
like posing with the orbs.
And like, I think there's a chance that this becomes
like a status thing.
Like, have you orbed becomes like a kind of like,
have you ridden in a Waymo, but for, like, 2025?
Yeah, maybe.
I'm also thinking about the sort of, like, conspiracy theorists
who think that, like, the Social Security numbers
the US government gives you are the mark of the beast.
Like, I can't imagine those people are gonna get
orb-verified anytime soon.
But speaking of orbs, Kevin,
am I right that among the announcements this week
is that World has a new orb?
Yes, new orb just dropped.
They announced last night that they are starting
to produce this thing called the Orb Mini,
which, we should say, is not an orb.
What?
It is a...
I'm out.
It's like a little sort of smartphone-sized
device that has, like, two glowing eyes on it, basically, and you can,
or will be able to, use that to verify your humanity instead of the actual orb.
So the idea is distribute a bunch of these things, people can like convince their friends
to sign up and get their world IDs, and that's part of how
they're going to scale this thing.
For me, all this company has going for it is that it makes an orb that scans your eyeballs.
So if we're already moving to a flat rectangle, I'm like 80% less interested.
But we'll see how it goes.
Now, okay, so you had a chance, Kevin, to scan your eyeballs.
What did you decide to do in the end?
Yes, I became orb-pilled.
I stared into the orb.
Basically, it feels like you're setting up face ID
on your iPhone.
It's like, look here, move back a little bit,
take off your glasses,
make sure we can get a good image.
Give us a smile, wink.
Right.
Right.
Say, I pledge allegiance to Worldcoin three times.
A little louder, please.
And then it sort of glows and makes a sound, and I now have my world ID and apparently
like $40 worth of Worldcoin, although I have no idea how to access it.
Was there any physical pain from the orb scan?
No.
How'd you feel when you woke up this morning?
Any joint pain?
Well, I did find that my dreams were invaded by orbs.
I did dream of orbs.
So it's made it into my deep psyche in some way.
Yeah, that's a well-known side effect.
Now, you say you were given some amount of world coin
as part of this experience.
Will you be donating that to charity?
If I can figure out how, yes.
And we should talk about this
because the Worldcoin cryptocurrency has not been doing well.
No?
Like over the past year, it's down more than 70%.
This was initially a big reason
that people wanted to go get their orb scans: they would get this, like,
airdrop of crypto tokens that could be worth something.
And I think this is the part that makes me the most skeptical of this whole project.
Like, I think I am in general pretty open-minded about this idea,
because I do think that bots and impersonation are going to be a real problem.
But I feel like we went through this a couple of years ago
when like all these crypto things were launching
that would promise to like use crypto as the incentive
to like get these big projects off the ground.
And I wrote about one of them, which was called Helium,
and I thought that was, like, a decent idea at the time,
but it turned out that attaching crypto to it just, like, ruined the whole thing.
Yeah, because it created all these awful incentives and brought in all these, you know,
scammers and people who were not scrupulous actors into the ecosystem.
And I worry that that is the piece of this that is going to, if it fails, like, cause the failure.
Well, I'll tell you what I would do if I were them,
which is to become the president of the United States,
because then you can have your own coin,
foreign governments can buy vast amounts of it
to curry favor with you.
You don't have to disclose that.
And then the price goes way up.
So something for them to look into, I would say.
It's true, it's true.
And we should also mention that there are places
that are already
starting to ban this technology, or at least to take a hard look at it. So Worldcoin has been
banned in Hong Kong, regulators in Brazil also not big fans of it. And then there are places in the
United States like New York state where you can't do this because of a privacy law that prevents the collection of some kinds of biometric data.
So I think it's sort of a race between World and Worldcoin and regulators to see whether
the scale can arrive before the regulations.
So let's talk a bit about the privacy piece, because on one hand, you are giving your biometric
data to a private entity
and they can then, sort of, you know, do many things with it, some of which you
may not like. On the other hand, they're trying to sell the idea that this is
much more privacy-protecting than something like a driver's license that
might have your picture on it, right? So, Kevin, can you sort of walk me through
the privacy arguments for and against what World is trying to do here?
Yeah.
So they had a whole spiel about this at this event.
Basically they've done a lot of things to try to protect your biometric data.
One of them is like they don't actually store the like scan of your iris.
They just hash it and the hash is stored locally on your device and doesn't like go into some
giant database somewhere.
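As a rough sketch of that idea, and only a sketch: World's real system reportedly uses specialized iris codes and more elaborate cryptography, but hashing illustrates the basic privacy move of keeping a derived identifier instead of the raw scan. The function and inputs below are hypothetical.
```python
# Simplified illustration of deriving an ID from a biometric reading
# without retaining the raw data. This is NOT World's actual protocol;
# the names and the fixed "template" input are hypothetical.
import hashlib

def derive_id(iris_template: bytes, salt: bytes) -> str:
    """Hash a (hypothetically stable) iris template; keep only the digest."""
    # The digest can't be reversed to reconstruct the iris, and the raw
    # template can be discarded after this step.
    return hashlib.sha256(salt + iris_template).hexdigest()

template = b"stable-iris-code-from-orb-sensors"  # hypothetical encoder output
print(derive_id(template, salt=b"per-deployment-salt")[:16], "...")
```
The hard part in practice is the "stable template" assumption: two scans of the same eye never match bit for bit, which is why real biometric systems need fuzzy matching rather than a plain hash.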
But I do think this is the part where a lot of people in the US are gonna kind of
fall off the bandwagon, or, you know,
maybe be more skeptical of this idea. It just feels creepy to upload your biometric data to a
private company, one that is not associated with the government or any other entity that
you might inherently trust more.
And I think the bull case for this is something like what happened with like Clear at the
airport, right?
I remember when Clear and TSA PreCheck were launching, it was kind of like creepy and
weird and you would only do it if you were like not that concerned about privacy and
it was like, oh, I'm just gonna upload my fingerprints
and my face scan to this thing
that I don't know how it's being used.
And then over time, a lot of people started to care less
about the privacy thing and get on board
because it would let them get through the airport faster.
I think that's one possible outcome here
is that we start just seeing these orbs
in every gas station and convenience store in America.
And we just sort of become desensitized to it.
And it's like, oh yeah, I did my orb.
Have you not done your orb?
I think the other thing that could happen
is this just is a bridge too far for people.
And they just say, you know what?
I don't trust these people.
And I don't want to give them my eyeballs.
Yeah.
Let me ask one more question about the financial system
undergirding world, Kevin, which is
I just learned in preparing for this conversation with you that World is apparently a nonprofit.
Is that right?
So it's a little complicated.
Basically there is a for-profit company called Tools for Humanity that is sort of putting
all this together.
They're in charge of the whole scheme. And then there is the World Foundation,
which is a nonprofit that owns the intellectual property
of the sort of protocol on which all this is based.
So as with many Sam Altman projects,
the answer is it's complicated.
But I think here's where this gets really interesting
to me, Casey.
So Sam Altman, co-founder of World, also CEO of OpenAI.
OpenAI is reportedly thinking about starting a social network.
One possibility I can see, quite easily actually, is that these things eventually merge. That World IDs become sort of the means
of logging into the OpenAI social network, whatever that ends up looking like. And maybe
it becomes the way that people will pay for things within the kind of OpenAI ecosystem. Maybe it
becomes the currency that you get rewarded in for contributing some
valuable content or piece of information to the OpenAI network. I think there are a lot
of different possible paths here, including by the way, like failure. I think that is
obviously an option here. But one path is that this sort of becomes either officially
or unofficially merged and that Worldcoin becomes some piece of the OpenAI ChatGPT ecosystem.
Sure.
Or here's another possibility.
Sam has to raise so much money to spread world throughout the world that he decides that
it will actually be necessary to convert the nonprofit into a for-profit.
Could you imagine that Kevin?
That would never happen.
No, you don't think that could ever happen?
No, there's no precedent for that.
Let me ask one more question about Sam Altman.
You know, I think some observers may feel like this is essentially Sam causing one kind of
problem with OpenAI and then trying to sell you a solution with World, right? OpenAI creates the problem of, well,
we can't trust anything in the media or online anymore.
And then World comes along and says, hey, all you got to do
is give me your eyeball and I'll solve that problem for you.
So is that like a fair reading of what's happening here?
Potentially, yeah.
I've heard it compared to like the arsonist also
being the firefighter.
And I don't think it's a problem
that OpenAI single-handedly is causing.
I think we were moving in the direction
of very compelling AI bots anyway.
I think they are basically trying to have their cake
and eat it too, right?
OpenAI is going to make the software
that allows people to build these very powerful AI bots
and spread them all over the internet.
And then World and Worldcoin will be there on the other side to say, hey, don't you want
to be able to prove that you're human?
So I guess if it works out for them, this is sort of like total domination.
Like they will have conquered the world of AI, they will have conquered the world of
finance and human verification, and basically all reputable
commerce will have to go through them. I don't think that's probably going to be the outcome
here, but there was definitely a moment where I was sitting in the press conference hearing about
the one world money with the decentralized one world governance scheme started by the guy with
the AI company that's making all the chatbots to bring us
to AGI. And I just had this sort of like moment of like, the future is so weird. It's so weird.
Living in San Francisco, I don't know if you identify with this, but you just sort of become
desensitized to weird things. Yes.
Like somebody tells you at a party that they're like resurrecting the woolly mammoth and you're like, cool. You're like, that's great, good for you.
And so it takes a lot to actually give me the sense
that like I'm seeing something new and strange
but I got it at the World Orb event last night.
No, I feel, I have a friend who once just casually
mentioned to me that his roommate was trying
to make dogs immortal and I was like, yeah, well,
welcome to another Saturday
in the big city.
So, you know, Kevin, I have to say,
as we sort of bring this to a close,
I feel torn about this because I think I would benefit
from a world where I knew who online was a person
and who was not.
I think I remain skeptical that eyeball scans
are the way to get there.
I think for the moment, while I mostly enjoy being an early adopter,
I'm going to be sitting out the eyeball scanning process.
But do you have a case that I should change my mind
and jump on the bandwagon any earlier?
No, I am not here to tell you that you need to get your orb scan.
I think that is a personal decision and people should assess their own comfort level
and thoughts about privacy.
I'm somewhat cavalier about this stuff
because I'll try anything for a good story,
but I think for most people,
they should really dig into the claims
that World and Worldcoin are making
and figure out whether that's something
they're comfortable with.
I would say my overall impression is that I am convinced
that World and Worldcoin have identified a real problem,
but not that they have come up with the perfect solution.
I do actually think we're going to need something
like a proof of humanity system.
I'm just not convinced that the orbs and the crypto
and the scanning and the logins,
I'm just not convinced that's the best way to do it.
Yeah, my personal hope is that actual governments
investigate the concept of digital identity.
I mean, some countries are exploring this,
but I would like to see a really robust
international alliance that is taking a hard look
at this question and is doing it
in some sort of democratically governed way.
Yeah, it sounds like a great job for DOGE.
Would you like to scan into the DOGE orb, Casey?
Yeah, I'll see if I can get them to return my emails.
They're not really known for their responsiveness.
I will say this, if what World had said this week,
instead of, well, we've shrunk the next version
of this thing down to a rectangle,
they'd committed that every successive orb
would be larger than the last,
then I would actually scan my eyeball. If I could get my eyeball scanned by an orb the size of a room,
okay, now we've got something happening.
Oh, we're back! I just got a text! It's time to talk about our group chats.
Well, Casey, the group chats of America are lighting up this week over a story about group
chats.
They really are.
Ben Smith, our old friend, had a great story in Semafor about the group chats that rule
the world, maybe just only a tiny bit hyperbolically
there.
He chronicled a set of group chats that often have the venture capitalist Marc Andreessen
at the center.
And they're pulling in lots of elites from all corners of American life, talking about
what's going on in the news, sharing memes and jokes just like any other group chat,
but in this case, often with the express intent of moving the participants to the right.
Yeah, and this was such a great story in part because I think it explained how a lot of these
influential people in the tech industry have become
radicalized politically over the last few years.
But I also think they really like exposed that the group chat is the new social network,
at least among some of the world's most powerful people.
I see this in my life too.
I think a lot of the thoughts that I once would have posted on
Twitter or Instagram or Facebook,
I now post in my group chats.
So this story, it was so great,
and it gave us an idea for a new segment called Group Chat Chat.
Yeah, that's right. We thought, you know, all week long, our friends, our colleagues
are sharing stories with us. We're hashing them out. We're sharing our gossipy little
thoughts. What if we took some of those stories, brought them onto the podcast, and even invited
in a friend to tell us what was going on in their group chat?
So for our first guest on Group Chat Chat, we've invited on PJ Vogt. PJ, of course, is the host of the great podcast Search Engine,
and he gamely volunteered to share a story
that is going around his group chats this week.
Let's bring him in.
PJ Vogt, thanks for coming to Hard Fork.
Thank you for having me.
I'm so delighted to be here.
So this is a new segment that we are calling Group Chat Chat.
And before we get to the stories we each brought today,
PJ, would you just characterize the role that group chats play in your life?
Any secret power group chats you want to tell us about?
Anyone you want to invite us to?
Oh my God, I would so be in a group chat with you guys.
For me, not joking, they are huge.
I feel like there were a few years where journalists were thinking out loud on social media,
mainly Twitter, and it was very exciting.
But nobody had seen the possible consequences of doing that,
and how it felt like open dialogue,
but it was open dialogue with risk.
Now, I feel like I use group chats
with a lot of people I respect and admire just to,
you know, did you see this?
What did you think of this?
Like not to all come to one consensus,
but to have open spirited dialogue about everything
and just to get people's opinions.
Like I really rely on my group chats actually.
Do you guys ever get like like, group chat envy,
where you realize that someone's in a chat
with someone whose opinion you would want to know,
and you're, like, kind of dropping in,
like, is there any way I can get plus one into this?
I mean, I'm apparently the only person in America
who Marc Andreessen is not texting.
Which, like, that felt really upsetting to me.
I, for me, you know, the real value of the group chat,
outside of just kind of my core friend group chat,
which just makes me laugh all day,
is the media industry group chat.
Because media is small,
and reporters are like anybody in any industry.
We have our opinions about who's doing great and who sucks.
But you can't just go post that on Blue Sky,
because it's too small a world.
Yes.
All right, so let's kick this off
and I will bring the story that has been lighting up
my group chat today and then I wanna hear about
what you guys are seeing in yours.
This one was about the return of the ice bucket challenge.
The ice bucket challenge is back, y'all.
Wow.
The idea that I have been alive long enough
for the Ice Bucket Challenge to come back
truly makes me feel 10,000 years old.
It's like one of those comets that you would only
get to see twice in your life.
You, like, drive to Texas for it or something.
This is the Halley's Comet of memes,
and it just is about to hit us again.
Yes.
So this is a story that has apparently been taking over
TikTok and other Gen Z social media apps over the past week. The Ice Bucket Challenge, of course, is the internet meme that went viral in 2014 to bring attention to and raise money for research into ALS.
And a bunch of celebrities participated. It was one of the biggest sort of viral internet phenomena
of its era.
And this time it is being directed toward raising money
for mental health.
And as of the time of this recording,
it has raised something like $400,000,
which is not as much as the original.
What do you make of this?
For me, honestly, I'm not saying that I spend every waking hour thinking about the ice bucket challenge,
but I do think about it sometimes as an example of how in the,
I don't know, it was like spectacle and silliness,
but there was this idea that the attention should be attached
to helping people.
And my memory of the ice bucket challenge is it raised
in its first run a significant amount
of research funding for ALS.
It was like really productive.
And so you had this like, hey, you can do something silly,
you can impress your friends, but you're helping.
And I feel like that part of the mechanism
got a little bit detached from all the challenges.
All of them.
Yes.
Yes, the way that this came up in my group chat
was that someone posted this article
that my colleague at the New York Times
had written about the return of the ice bucket challenge.
And then people started sort of reposting
all of the old ice bucket challenge videos
that they remembered from the 2014 run of this thing.
And the one that was like the most surreal to rewatch,
you know, 11 years later now was the Donald-
Was Jeff Epstein.
Yes.
Yes, the Jeff Epstein Ice Bucket Challenge video went crazy.
No, it was the Donald Trump Ice Bucket Challenge video, which I don't know if either of you
have rewatched this in the last 11 years.
But basically, he's on the roof of a building, probably Trump Tower, and he has Miss USA
and Miss Universe pour a bucket of ice water on him.
And they actually use like Trump branded bottled water.
They like pour it into the bucket
and then dump it on his head.
And it's very surreal, not just because, you know,
he was participating in an internet meme,
but one of the people that he challenges,
cause you know, part of the whole schtick
is that you have to like nominate someone else
or a couple other people to do it after you. And he challenges Barack Obama to do the ice bucket challenge, which is like,
discourse was different back then, you know. If he does it this time, I don't know who
he's going to nominate, like Laura Loomer or Catturd or something like that, but
it's not going to be Barack Obama.
You know, I've gone back through the sort of memes of 2014, you guys, to try to figure out
if the ice bucket challenge is coming back,
what else is about to hit us?
And I regret to inform you,
I think that Chewbacca Mom is about to have a huge moment.
Oh no.
Yeah.
She's, I don't know where she is,
but I think she's practicing with that mask again.
The thing that's so scary about that
is, if you follow the logic of what's happened to Donald Trump,
you have to assume that everyone who went viral in 2014
has become insanely poisoned by internet rage.
And so whatever she believes or whatever
subreddit she's haunting, I can only imagine.
Yeah.
Do we think Trump will do it again this time?
I don't think so.
I think there's like, it was pretty risky
for him to do it in the first place,
given the like hair situation.
That's the drama I remember watching is you're just like,
what is gonna happen when water hits his hair?
And I remember that, like, I remember that question well enough
to remember that nothing is revealed.
Like you're not like, oh, like I see the architecture
underneath the edifice or whatever.
But yeah, I think it's probably only become riskier
if time does to him what time does to us all.
Here's what I hope happens.
I hope he does the ice bucket challenge.
Somebody once again pours the ice water all over his head
and he nominates Kim Jong-un and Vladimir Putin.
And then we just take him.
Yeah.
Okay, that is what was going around
in my group chats this week.
Casey, you're next.
What's going on in your group chats?
Okay, so in my group chat, Kevin and PJ,
we are all talking about a story that I like to call,
you can't lick a badger twice.
You can't lick a badger twice?
What is this story?
So friend of the show, Katie Notopoulos,
wrote a piece about this over at Business Insider, and basically people
discovered that if you typed in almost any phrase into Google and added the word meaning
Google's AI systems would just create a meaning for you on the spot, right?
And I think the basic idea was Google was like, well, people are always searching for
the explanations of various phrases.
We could direct them to the websites that would sort of answer that question.
But actually, no, wait, why don't we just use these AI overviews to tell people what
these things mean?
And if we don't know, we will just make it up.
And so-
What people want from Google is a confident robot liar.
That's right.
So I know you guys are wondering, which is what did Google say when people asked for
the meaning of you can't lick a badger twice?
Please.
What did it say?
It's according to the AI overview.
It means you can't trick or deceive someone a second time after they've been tricked once.
It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.
Which, like, no! That's not...
It doesn't mean that! It doesn't mean that!
Some of the other great ones that people were trying out,
you can't fit a duck in a pencil.
I mean, you can't.
No, and actually, you know, PJ,
you're on to what the AI
was going to explain, which was, according to Google,
that's a simple idiom used to illustrate that something
is impossible or illogical.
Somebody else put up, and this is one of my new favorite
phrases, the road is full of salsa, which, according to
Google, likely refers to a vibrant and lively cultural
scene,
particularly a place where salsa music and dance
are prevalent.
Yeah, see, if this had come up in my group chats,
this would have been immediately followed
by someone changing the name of the group chat
to The Road is Full of Salsa.
Did that happen in your chats, Casey?
You know what, I have to say,
a part of my group chat culture
is that we rarely change the name of the group chat.
I think it'd be very fun if we did, and maybe I'll try it out.
But we've really been sticking with the core names we've had.
Are you willing to reveal?
So yes, and we'll have to cut it because it's so Byzantine.
But basically, when all my current friend groups started forming, we noticed that they made very convenient little acronyms.
So, like, I'm in a group chat with, like, a Jacob, Alex,
Casey, Corey, and that just became Jack, for example, right?
Then Jack became Jackal.
Then our friend Leon got married.
So we said, we're gonna move the L to the front.
So it became Le Jack to sort of celebrate Leon.
Then my boyfriend got a job at Anthropic.
So the current name of the group chat is Le Jackalthropic.
So unfortunately that doesn't make any sense.
But here's what I think is so interesting about this.
These models have gone out and they have read the entire internet.
They know what people say and they know what people don't say.
So you'd think it would be easy for them to just say, nobody says you can't lick a badger twice.
It's the weirdest thing that, like, the one thing
you can't teach the AI computers coming for us all
is just, like, humility.
Like, they can never just be like,
I don't know, I don't know, maybe you should look it up.
But I think it actually ties in with something
we talked about earlier in the show,
which is that these systems are so desperate to please you
that they do not want to irritate you
by telling you that nobody says
you can't lick a badger twice.
And so instead they just go out
and they make something up.
Yeah, it reminds me a little bit.
Do either of you remember Googlewhacking?
Was that when you tried to find something
that had no search results or one search result
or something like that?
Yes, it was this, like, long-running internet game
where you would try to come up with a pair of words
that, when you typed them into Google, would return
only a single result.
And so there are lots of people trying this out.
There's a whole Wikipedia page for Googlewhacking.
This sort of feels like the modern AI equivalent of that: can you come up with an idiom
that is so stupid that Google's AI overview
will not attempt to fill in a fake meaning?
Yeah.
And it's a great reminder that parents need to talk to their teens about Googlewhacking
and glazing, the two top terms of this week.
Yeah, and make sure your teen doesn't have a badger.
If so, they should only look at once.
Okay, now PJ, what have you brought us today from your group chats?
Okay, so the thing that I've been putting into all my group chats, because I can't make sense of it,
is your guys' colleague Ezra Klein,
I don't know if you noticed this,
he was on some podcasts in the last month.
Mm-hmm, a couple.
A couple, and in one of the appearances,
he was being interviewed by Tyler Cowen,
whose work I really admire,
and then they both agreed on this fact,
where I was like, wait, we all agree on this fact now?
where Tyler said that Sam Altman of OpenAI
had at some point predicted that in the not-too-distant future, we would have
a billion-dollar company, like a company valued at a billion dollars, that only had one employee.
Like, the implication being you would train an AI to do something and then just, like, count the money for the rest of your life.
And PJ, I actually believe we have a clip of this
ready to go.
I'm struck by how small many companies can become.
So Midjourney, which you're familiar with,
at the peak of its innovation was eight people.
And that was not mainly a story about consultants.
Sam Altman says it will be possible
to have billion dollar companies run by one person.
I suspect that's two or three people,
but nonetheless, that seems not so far off. So it seems to me there really ought to be
significant parts of the government, by no means all, where you could have a much
smaller number of people directing the AIs. It would be the same people at the
top giving the orders as today, more or less, and just a lot fewer staff. I don't
see how that can't be the case.
I think that, I agree with you that in theory
it should be the case.
But I do think that as you actually see it emerge,
from "in theory it should be the case"
till we figure out a way to do it,
it's going to turn out that the things the federal government
does are not all that, like, easy to type up.
But it's so hard to get rid of people.
Don't you need to start with the chat?
Okay, so setting aside whether we should replace
the federal government with lots of AI.
The reason I was injecting this into all my group chats
was I was just like, guys, if the conversation is among
people who are quite smart and who've spent a lot of time
thinking about this, if they are predicting a world
where AI replaces this much of the workforce this fast,
like how are you guys thinking about it?
But every group chat I put this into,
the response instead was,
what is your idea for a billion dollar company
that AI can do for you?
Any good ideas in there you want to share,
to maybe sort of get the creative juices flowing
for our listeners?
All the ideas I heard were profoundly unethical. Many of them seemed to start with the premise that doing homework for children
is a billion-dollar idea, which I think a lot of AI companies are already making money on.
Yeah, that company exists, and it is called OpenAI.
It is a great thought experiment, though.
You know, I think many of us have had thoughts over the years
of maybe I'll go out, start a company,
strike out on my own.
Two of the three people in this chat actually did it.
But getting to a billion dollars is not trivial
and it is kind of tantalizing to imagine,
once you put AI at my fingertips,
will I be able to get there?
Yeah, I mean, I actually, this is giving me an idea
for maybe a billion dollar one person startup,
which is based on some of the ideas
we talked about earlier in this show
about how these models are becoming more flattering
and persuasive, which is, you know, we all have that friend
or maybe those friends who are totally addicted to posting
and the internet and social media have wrecked their brain
and turned them into a shell of their former self.
I know where you're going and I like it so much.
And I think we should create
fake social networks for these people.
Oh my God, it's so good.
And install them on their phone so that they could be
going to what they think is X or Facebook or TikTok.
And instead of hearing from
their real horrible Internet friends,
they would have these persuasive AI chatbots who say maybe tone it down with
the racism and maybe gradually over the course of time,
bring them back to base reality.
What do you think about this idea?
I like it so much.
There's so many people I would build a little mirror world for where they could just like
slowly become more sane.
And it's like, hey, all the retweets you want, all the likes you want, you can be like the
Elon Musk of this platform, you could be like the George Takei of this platform, whatever.
But like the trade off is that it has to slowly, slowly make you more sane instead of the opposite.
Yes.
Yes, and I worry that that is not possible
because I think for a lot of the world's billionaires,
the existing social networks already serve this purpose.
No matter what they say, they have a thousand comments
saying, OMG, you're so true for that bestie, right?
And it does seem to have driven them completely insane.
So if we are able to somehow develop
some anti-radicalizing technology,
I do agree that could be a billion dollar company.
Yeah.
What do you call it?
What do you call that?
Well, so I like the term heaven banning,
which went viral a few years ago,
which is basically this idea
that instead of being shadow banned,
you would get heaven banned,
which is you sort of like get banished to a platform
where AI models just constantly agree with you
and praise you.
And this would be a way to sort of bring people back
from the brink.
So we can call it heaven banned.
We just spent 30 minutes talking about how,
when you have AIs constantly tell people
what they want to think, it drives them insane.
No, this is for people who are already insane.
This is to try to rehabilitate them.
I tried to have a talk with an AI operator this week,
asking it to stop complimenting me.
And truly it was like, it's so good that you say that.
Yeah, the AI always comes back and keeps trying to flatter me.
And I say, listen, buddy, you can't lick a badger twice.
OK, so move it along.
Well, PJ, thank you for bringing us some gossip and content
from your group chats.
Happy to.
And we should be in a group chat together, the three of us.
Yeah. That sounds wonderful.
Let's start one.
Happy chatting, PJ.
Thanks, guys.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited this week by Matt Collette.
We're fact-checked by Ena Alvarado.
Today's show is engineered by Chris Wood.
Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell.
Our executive producer is Jen Poyant.
Video production by Sawyer Roquet, Amy Marino and Chris Schott.
You can watch this full episode on YouTube
at youtube.com slash hardfork.
Special thanks to Paula Szuchman,
Pui-Wing Tam, Dalia Haddad and Jeffrey Miranda.
As always, you can email us at hardfork@nytimes.com.
Invite us to your secret group chats.