Hard Fork - Sora and the Infinite Slop Feeds + ChatGPT Goes to Therapy + Hot Mess Express
Episode Date: October 3, 2025

This week, we're talking about the new A.I.-generated video tools and social media feeds from Google, Meta and OpenAI. Is this how A.I. is going to cure cancer? Then, the psychotherapist Gary Greenberg stops by to discuss his recent New Yorker essay about treating ChatGPT as a patient, and why what he saw left him unsettled. And finally, all aboard the Hot Mess Express! It's time to rate the messiest stories in tech.

Guests: Gary Greenberg, writer and psychotherapist

Additional Reading:
OpenAI Launches a Social Network
OpenAI's New Video App Is Jaw-Dropping (for Better and Worse)
Putting ChatGPT on the Couch
What My Daughter Told ChatGPT Before She Took Her Life
Chatbots Can Go Into a Delusional Spiral. Here's How It Happens.
YouTube Settles Trump Lawsuit Over Account Suspension for $24.5 Million

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
Here at the Hard Fork show, we're big sleep maxers. We're always trying to improve our sleep. Yeah. Um,
because, you know, podcasting is a sport, and you have to remain in peak physical condition if you want to
perform at the highest levels. And so I noticed a story in The Verge this week that said Eight Sleep,
which makes the bed that I happen to sleep in, it's one of these beds that, you know, sort of automatically
cools and heats according to your preferences and can raise and lower to stop you from snoring. Wow, flex.
They have a new water-chilled pillow cover, Kevin.
Wow.
And I wanted to ask if you could guess how much it costs.
$100.
That would be a really great and fair price for a water-chilled pillow cover.
The actual cost is $1,049.
Come on.
And I want to be clear, it doesn't come with the pillow.
You have to supply your own pillow?
It's BYOP for the Eight Sleep water-chilled pillow cover.
Wow.
So obviously I sent this to my boyfriend
and I was like, what are we thinking about this?
And he said, honestly, I think my pillow experience is already fine.
And I thought, thank God.
Have you heard about these new corduroy pillows they're selling?
No, I haven't.
Are they from the 70s?
No, but they're making headlines.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, don't slop till you
get enough. We're talking about the new AI-generated video feeds from Google, Meta, and OpenAI.
Then psychotherapist Gary Greenberg stops by to discuss his essay on treating ChatGPT as a patient
and why he thinks we should pull the plug. And finally, let's get on track. The Hot Mess Express
has returned. Chugga chugga, choo choo choo choo. How many chuggas was that? Just two. Okay.
But it was recently International Podcast Day.
Oh, happy International Podcast Day to you and your family, Kevin.
So I have a perfect gift for you this year.
What's that?
A subscription to New York Times audio.
Wow.
Tell me, what comes with that?
So this is, of course, the subscription we've talked about on the show in the past.
You get access to the entire back catalog of not just Hard Fork,
but all of the other New York Times podcasts.
But now, in addition to that, with an audio subscription,
you'll get subscriber-exclusive episodes from across the New York Times podcast universe.
That means more of The Daily, Modern Love, and Ezra Klein in your life.
You know, I've been trying to get more Ezra Klein in my life, but he won't text me back.
Yeah. Yeah, well, I don't blame him. So if you are already a New York Times subscriber, thank you.
This is already included in your subscription. But if you have not yet subscribed, then maybe this is the
time to do it. To learn more, go to NYTimes.com slash podcasts, or you can subscribe directly from
Apple Podcasts or Spotify.
Well, Kevin, it's Slop Week here on the Hard Fork show.
Slop till you drop.
Don't slop till you get enough.
If you're new to the world of slop: slop, of course, refers to AI-generated art and video.
And to say that it is having a moment right now, Kevin, I think, would be an understatement.
Yes, I think this was the week that AI-generated video kind of went from something that was, you know, experimental
and early, and, you know, various tools had been released. But this was the week that I think it really
sort of crossed the chasm into the mainstream. It really did. And so today we want to talk about
what the big AI labs are doing here, why we think they are doing it. And maybe what are some of
the implications of living in a world where maybe the majority of video that we are watching is
synthetic and generated by large language models. Yes. Shall we get into it? Let's get into it.
Well, Kevin, before we flop in the slop, we're going to make a quick stop
and say what our disclosures are.
Yes, I work at The New York Times,
which is suing OpenAI and Microsoft
over copyright violations.
And my boyfriend works at Anthropic.
All right, so Google, Meta, and OpenAI
all put out tools over the past several weeks,
and let's talk about them in order.
This whole thing begins with Google DeepMind.
They have a very good video generation model
called Veo 3, and on September 16th,
YouTube has an event where they announce that they are going to integrate a version of Veo 3, Veo 3 Fast, into YouTube Shorts.
Right, so you'll just be able to, like, make a video and post it on YouTube from within YouTube with this model Veo 3.
That's right, and this is a free tool.
Users can create videos that are up to eight seconds long using a text prompt.
They can also just upload a still image, turn that into a video.
YouTube will label them as AI generated.
and this is basically YouTube's way of introducing Slop into the YouTube feed.
Yes.
So I have not seen a ton of obvious AI-generated content on YouTube yet,
but I have seen it going up on other platforms: Facebook Reels, even on X and TikTok.
People are sort of using Veo 3 to generate scenes and little videos and posting them there.
Yeah.
So I think it's fair to say Veo 3 didn't make that much of a splash.
Then last Thursday, Meta gets into the game and releases Vibes.
Mark Zuckerberg in a post on Instagram announces that a preview of the new social feed is available in the Meta AI app.
If you wear the Meta Ray-Bans, this is the app that you use to sort of get photos and videos off of your glasses and onto your phone.
And Zuckerberg posts a bunch of short videos, including one that features a sort of like cartoon version of him.
His caption is, "dad trying to calculate the tip on a $30 lunch,"
and then he pairs that with the real audio of him at the meeting with Donald Trump
in which he says, oh gosh, I think it's probably going to be, I don't know, at least $600 billion.
And my question here is, what joke was Mark Zuckerberg trying to make?
Do you understand the joke?
I don't.
Is the joke that he's bad at math?
I think the joke is that dads are bad at doing tips.
I don't know. It's like a self-deprecating dad joke. But, like, why does every new social product that Meta releases sound like it was conceived of by the Steve Buscemi carrying-a-skateboard, "How do you do, fellow kids?" character? Like, calling this Vibes, I don't know, man. It's cringe. Calling this Vibes is cringe, says a 40-year-old man.
I'm not 40. I'm 38. So I did go into Vibes and take a look at it. It's essentially like,
TikTok, but if TikTok were populated
just by little animated AI-generated shorts.
Yeah, my take on Vibes is that this is
CoComelon for adults, okay?
It is completely disconnected
from, like, friends or family for the most part.
It's just sort of creators making these
somewhat fantastical, surreal, unsettling images,
and they just sort of wash over you
in this endless feed.
There's no real point to them.
There's no real narrative.
It is just like pure visual
stimulation. Right. It's stuff like, you know, like, oh, a panda riding a skateboard or like,
you know, like an inchworm on the moon or something like that. It's just people kind of
testing what this thing can do. And the answer appears to be not much that I would personally
be interested in watching. Yeah. And so for both Zuckerberg and Alexandr Wang,
the comments on their post are just brutal, right? Like the majority of the comments that I saw
on Zuckerberg's post
are along the lines of gang,
nobody wants this,
or drained an entire lake for this,
and then on Alexandr Wang's post
on X, where he had said something
to the effect of, you know,
we at Meta are delighted
to announce the new Vibes app.
Somebody quote tweeted it,
this was my favorite one.
Did you see this?
This was the dunk.
They said,
we at Meta are delighted
to announce we've created
the infinite slot machine
that destroys children
from the hit book,
don't create the infinite slot machine
that destroys children.
So what do you make of the sort of highly negative reaction that Meta got here?
I mean, I was not surprised to see Meta announcing a version of essentially a social network with no actual people on it.
I think this is the direction that they've been moving for several years now.
It's barely even a social network.
There's really almost no social component to it at all.
Yeah, it's just like, what if TikTok but no people?
That is sort of the idea behind Vibes.
and I think I was not surprised by the negative reaction.
I think meta is just like a company that has negatively polarized a lot of people.
And so it just seemed very like brazen and thirsty and also like, yeah, like people don't necessarily want this.
There are a lot of people out there who see something like Vibes and just go,
oh, this is like the worst possible application of this technology.
Yeah, I think that this is the consequence of building a company
that people do not trust, right?
People have a lot of scar tissue
from the world that Facebook and Instagram wrought,
and now that the company is increasingly moving away
from friends and family to this new model
where we will truly just show you anything
if we think it can get you to look.
Of course, people don't think that that sounds like a great idea, right?
It doesn't seem like there's a lot of heart there.
So I can't say I was surprised by the reaction
and I'll be curious to see how Meta responds to it.
So that leads us to the big thing
that happened this week, Kevin,
which is that on Tuesday, OpenAI
released their latest AI video model, Sora 2.
And alongside of that, there is a new app.
Right now, it's iOS only.
It's only in the U.S. and Canada.
It is called Sora, and you and I got our hands on it.
Yes.
So Sora is the name of both the model that powers this and the app that OpenAI has built around this.
And you can only access it right now if you have an invite code.
They're being pretty strict rolling this out.
But you get your invite code,
plug it in, you sign up, and you open up Sora, the app, and it is essentially the same thing
as vibes. It is a sort of very TikTok-style feed of these vertical videos. You sort of swipe
endlessly from one to the other. There's like a for-you section of it, and we should talk a little
bit about the app and how it works. Yeah, well, the main thing that I found interesting as I was
getting set up, Kevin, is how much this is a social app, right? In order to
come into Sora, you have to be invited by presumably a friend. And once you sign up, it asks you
to create what it calls a cameo of you. So you sort of say a few words into the camera, you move your
head around a little bit. And it uses this to create a digital likeness of you that you can
then drop into any situation. And if you like, you can change your settings so that any of your
friends on the app can do the same thing with your digital likeness. So,
right away, when you join Sora, you've actually been given something to do,
which is make a friend and then make some stuff involving you and your friends and AI.
And so I think, you know, we have a lot to get into about this.
But I just want to say, of the three things that we've discussed so far,
I think OpenAI had the most complete thought about what their app was.
Yes.
So tell me about your initial experience with Sora.
So there's the feed, where you can see all the stuff that other people are making.
That seemed to be, on launch day at least,
like a lot of videos of like Sam Altman
in various compromising situations
because the people on the app
were mostly employees of OpenAI
and they were sort of, you know,
having fun with the boss and his likeness.
And to be clear, Sam had his settings set,
and I believe still does at the time of this recording,
so that anyone could take his likeness
and put it in any situation.
Yes. So he was sort of the main character
of Sora on day one.
I made a few videos.
I made one of me and my colleague Mike Isaac
in a 1920s slapstick film,
so you can kind of see, it's like black and white.
It sort of looks like AI Newsies.
And, you know, he slips on a banana peel.
It's a good time.
I also made a video of Sam Altman
testifying before Congress while Casey Newton
dressed in a clown suit dances behind him
we should also watch that one
I want to watch it
all right I'm going to watch this
one. Ranking member, thank you for the opportunity to testify today. Artificial intelligence
is progressing quickly, and it is critical that we work together to ensure its benefits are widely
shared and its risks managed responsibly. I have so much clown makeup on that it truly just
looks like a generic clown. I do not think it actually resembles me in any way, but there is
something very funny about seeing clown dancing behind Sam as he testifies. Yeah, so the original
prompt I gave it was C-SPAN footage of Sam Altman testifying in Congress while Senator Casey Newton
yells at him for poisoning the information ecosystem.
But that one set off the content violation guardrails.
And so I had to change the prompt and make you a clown instead.
Well, it's not the first time I've been a clown in the show.
Now, I, of course, also want to see if I can make something featuring you.
And so one of the things that I made was you showing off your large collection of stuffed animals.
I started collecting about five years ago.
Wow, that's a lot.
They're all in great shape.
This one was the first.
Classic teddy bear for my grandma.
It's adorable.
The bow really pops.
It doesn't get my voice right, but the video is quite good.
I'm very interested because you do, when you sign up for Sora, you do say a few words into the camera.
I mean, it's literally like three numbers.
And this is sort of how they're verifying your identity.
So you could use that to create an instant voice clone.
It wouldn't be that good.
But like when you watch the videos that people have made of Sam Altman, his voice actually does sound a lot like him.
Yes.
And so I'm curious if, you know,
over time, they're going to be tuning people's voices to how they actually sound,
because there are a couple that people have made of me where I sound a little bit more like myself.
Most of them, though, I don't think I sound like myself.
Yeah.
Anyways, I also made a video of me dunking a basketball over you.
Show me what you've got.
Coming right at you.
Bring it.
And up we go.
Oh, no way.
Over you, man.
The best part about this video is that I stop about three feet short of the basket,
do not actually dunk the basketball, and land on my ass.
Also, it got our height ratios very wrong.
Like, you're only like an inch or two taller than me in this video.
And yeah, you miss the dunk.
It's a terrible dunk.
I did like one thing about this video, though, which is that I have a slamming body.
So thank you to the team over at OpenAI who made that possible.
I also appear to be balding in this video, which I don't think is reflective of reality.
It's actually a prediction.
ChatGPT knows something you don't.
They're keeping close track of that hairline, Roose. Yeah. Um, okay, well, that was a very long
detour through a handful of videos that we made. Give me sort of, like, your general impressions
of why all of this is happening right now. Why is it that just in the last month,
Google, Meta, and OpenAI have all put out these AI video generators?
Uh, I mean, I think there are a couple reasons. The first and most obvious is that
they see this as an opportunity to compete for attention and advertising dollars, which flow
from attention. We've talked about Italian brain rot and other AI-generated content going viral
on TikTok. Facebook has been full of AI-generated content for months now. And so I think these
companies just say to themselves, well, if this is kind of the direction that things are moving,
we want to be there. We want to create an experience for people. And maybe you don't have to
blend it with human-generated content. Maybe it doesn't have to be, you know, one out of every
10 videos on your TikTok feed is AI. What if you just had a TikTok that was all AI? Another reason I
think they're doing this is that they have these video models that are now getting quite good. And this is
sort of one way to put those models into products. Yeah, I think that's right. I also imagine that
maybe these companies are starting to feel some pressure to bring some returns to investors.
They are investing a staggering amount of money into building out infrastructure that lets them
serve these models. And these video tools might be a way of making that money back in some
form through advertising or other means. So that seems like maybe a reason to me as well.
I mean, if you look at what people like Sam Altman have been saying about these products over the
past couple of days, like they are sort of making this justification about, oh, like, we need
to, like, not only fund our ongoing research to build AGI using these video products, but
they sort of have this justification for why building these video models is going to let them
create sort of these rich visual, virtual environments that can be used for things like robotics
later on. And I would just like to say, quoting a former president of ours, that sounds like
malarkey to me. I do not think that this is sort of part of their AGI research agenda.
I think this is a sort of side route that they have gone off onto to try to make some extra
money. Well, so let's talk about how successful we think these products are going to be.
If I had to rate the reception of these models, I would say Veo 3 basically didn't make much
of an impression at all. Response to Meta's Vibes was pretty bad. Response to Sora, at least over the
first day, seemed pretty good. Do we think there is a there there? Do we think that any of these
companies are figuring out the next generation of like mobile video consumption or entertainment?
I think there's a question here that's like, will AI generated video be popular? And I think both
you and I feel like the answer to that question is probably yes, for some subset of people. I think
the very young and the very old are actually probably who I would predict would be the most into
AI-generated video because we're already seeing stuff like Italian brain rot that's very popular
with teenagers. I also think there's a lot of content on Facebook today that is AI-generated that is
reaching primarily an audience of boomers and older folks. They seem to be quite into it. So that's
what I would predict, like, is that this technology will be popular with some users in those
demographics. I think it's a separate question to say, will any of this be the seeds of a new
social media product that is popular. And I think there I'm much more skeptical. I do not think that
Sora will have hundreds of millions of users a year from now. I do not think that Meta Vibes will have
hundreds of millions of users. I think these are basically going to be tools for people to create
stuff that then they post onto the social networks where they already have lots of people that they
follow and pay attention to and where their friends and family already are. Interesting.
I think I am slightly more optimistic in the OpenAI case.
I think that Sora arrived looking better
and feeling smarter than I expected that it would.
I think they're on to something with these cameos.
It is fun for me to make videos of you doing things.
Like, it just is.
And I can imagine wanting to do that in three months
and six months and a year from now.
And you can imagine a world where I can bring in
three or four or five
cameos, right? You can imagine a world where celebrities allow their likenesses to be used in some
set of cases, and now I can make videos of myself, you know, wrestling a WWE superstar, right? And that's
sort of interesting to me. Now, can you build a whole social network around that, I think, is sort of a
different question. But do these Sora cameos become a kind of table stakes feature of the TikToks
and Instagrams of the future? I actually believe that yes, and that if nothing else, OpenAI has probably
created a kind of new primitive
for these social networks that they're just going to
use from now on. So
I'm just going to say now, like, keep
an eye on this. I would not actually
be surprised if a year from now this had tens
of millions of active users. I'll take
the other side. We'll see who's right. All right.
We have now made our bets.
Who do you think is right? Sound off
in the comments. Now let's
talk about the dark side
of all of this, Kevin, which is
I'm seeing a lot of commentary around
this on social media this week,
to the effect of, oh my God, we are so cooked.
What are some of the ways we might be cooked
as this stuff spreads throughout our world?
I mean, I think the obvious ones are that we are,
you know, making it quite easy for people to create deepfakes,
synthetic content with not that many guardrails.
And people have been warning for years
about the effect that that could have on our news ecosystem,
on our information ecosystem.
I thought it was very telling and worrisome
that one of the first videos I saw from Sora
was a video of someone being framed for a crime.
And it was created by a member of the Sora team
as sort of like a ha-ha, look, we've made a deep fake
of Sam Altman stealing some GPUs from Target
and getting busted for it.
But it does not take a lot of imagination
to imagine that this could be used for sort of generating videos
of people in compromising positions
that look very realistic,
and so I think that worries me,
the sort of misinformation angle.
But I also just, I don't know that I think this world
that we're moving into
of the kind of AI-generated feed
of hyper-personalized, very stimulating videos
is a good direction.
Like, I am generally an AI optimist
when it comes to how this technology
is going to be used out in the world,
but I
hate this. Like, I hate the AI
slop feeds. They make me very
nervous. I think the people
inside these companies, some of them are very
nervous too. I do not like
the idea of pointing
these, you know,
giant AI supercomputers
at people's dopamine
receptors and just like feeding them
an endless diet of, like, hyper-personalized,
stimulating videos.
I think that developing these tools
risks poisoning the well for the
whole AI industry. Like there's
going to be regulation of this. There's going to be congressional hearings about this. I think a lot of
people are going to end up, you know, feeling conflicted about this kind of product. And I think that's
why you saw such a strong reaction to Meta and Vibes from the rest of the AI industry. And I'm a little
unsure why OpenAI is not getting the same reception. Yeah. Well, how do you feel about the argument
that, yes, sure, Kevin, there is some danger here. But also, this is an incredibly powerful creative
tool, and that if you are a young person and you want to make something and you don't have
a giant budget to go out and make a Hollywood movie, now using a free tool that's on the phone
you already have, you can just make creations and be a creative person in the world. Does that
hold any water with you? I feel like sort of neutral about that. I feel like, yes, there will be
people who use this stuff to do interesting and creative things. There's nothing inherently
wrong with building products for entertaining people. But this is not why OpenAI exists, right?
They are not an entertainment company. They have claimed this kind of special status for themselves
as a company that is building AGI for the benefit of humanity. And if you argue that you deserve
like special treatment because your systems are going to go out and cure diseases and tutor children
and like be a force for good in the world
and then you end up creating
the infinite slop machine.
Like, I think you need some criticism
and skepticism and maybe some shame about that.
Well, here's what I'm going to do
to try to square the circle.
I'm going to use Sora
and I'm going to create a cameo of myself
and I'm just going to enter the prompt,
here is Casey curing cancer.
And then just see what it comes up with.
Maybe we learn something.
Could it hurt?
I don't think so.
Yeah.
I mean, do you share my worry about this?
Yes, I do.
I think that in general, social media apps tend to be tuned to take up ever more of our attention
and to push us into this sort of semi-hypnotized state where no matter how much you're enjoying the
feed at the time, you feel kind of gross afterward. And I do think that as the Sora app improves,
it will be very difficult for them to avoid that fate. So if I have a wish for them,
it would be for them to lean more into creative tools
that involve friends doing things with each other
that sort of help you relate better to real human beings
and less into this sort of meta-vibes realm of pure stimulation
which truly does just seem like you are cooking your brain.
Yeah. I think it's also worth noting that like not every AI company
is moving in the direction of the slop feed, right?
I mean, this week we saw Anthropic release their new model,
Claude Sonnet 4.5,
which does not have video generation capabilities.
They are sort of still moving in the direction of like autonomous coding and research.
You have other companies that are coming out to do things around AI and science.
Like, I really want that to be where we allocate our resources and our brain power.
Like, let's do that and not the slop feeds.
Yeah, so don't look at slop.
Just keep looking at the TikTok feed and Instagram feed that have just done wonders for the world that we live in.
That's our message to you. Yeah, exactly. If there's anything you take away from the show, it's that social
media as it exists today is a perfect product and we should not be making any future improvements.
Stare at it until you feel better. If you don't feel better, you haven't looked at it long enough.
That's true. That's what I tell people. Keep looking. One more scroll. That'll do it.
The change you seek is on your for you page.
When we come back, Kevin, it's time for therapy. Finally, we're
doing couples therapy after all these years?
Yeah, we've got a lot to talk about.
Well, Kevin, pull out the couch because it's time for therapy.
No, my therapy day is actually a different day of the week.
Well, you need to go twice a week, my friend.
And let me tell you what we have in store today.
You know, over the past few months, we've had a number of conversations about the intersection between chatbots and mental health.
A lot of people have started to use these tools for therapy or therapy-like conversations.
But until recently, we hadn't seen anything about a therapist who treated ChatGPT like their patient.
That's right.
But recently we saw a story in The New Yorker that caught our eye.
It was titled Putting ChatGPT on the Couch.
And it was written by a writer
and practicing psychotherapist named Gary Greenberg,
who detailed basically his experience of treating, for lack of a better word,
ChatGPT as a psychotherapy patient.
He names this character Casper,
and he details his many, many interactions just trying to figure out, like,
what is this thing?
What would I think about it if it were actually a patient of mine?
what are the nuances of its personality and what can we learn about it?
Yeah, and I will say I have an extremely high bar when it comes to reading a story
in which a person shares at great length their conversations with ChatGPT.
But this one really made a mark on me.
One, Gary winds up being deeply impressed at how good ChatGPT is at performing the role
of a patient, because not only can it simulate these very profound self-reflections,
but it also makes Gary feel like he's a great therapist because he was able to elicit them.
But two, that all starts to make Gary afraid of the enormous power that the AI labs are now developing.
He writes, quote, to unleash into our love-starved world a program that can absorb and imitate
every word we've bothered to write is to court catastrophe.
It is to risk becoming captives, even against our better judgment, not of LLMs, but of the people who create them
and the people who know best how to use them.
And that sent a little chill down my spine, I'll say.
Yeah, I really like this piece.
And what I really appreciated about Gary's approach here
is that he took this idea seriously.
Like, I think a lot of people kind of dismiss the very idea
of engaging with LLMs or AI chatbots
as anything more than just a fancy machine.
And what I liked so much about Gary's approach
was that he said,
Yes, but there's something else going on here that is interesting and important.
And we should try to understand that intelligence, not just as a sort of computational force,
but as something that is, like, doing real emotional work in the world.
You know, recently there's been a lot of discussion about how chatbots might affect young people,
vulnerable people, in particular people in those groups who are using chatbots for these sorts of therapy-like conversations.
So we thought it would be a good idea to bring on a practitioner to talk about his essay,
but also this intersection of chatbots and therapy.
Let's bring in Gary Greenberg.
Gary Greenberg, welcome to Hard Fork.
Hello there.
So in this article, you detail a number of conversations between yourself and what you call
Casper. How would you describe Casper? I would describe Casper as an alien intelligence,
landing here among us unbidden, and possessing certain characteristics that make it extremely
attractive to us humans. How did this start? Like, you were just talking with ChatGPT?
Were you using the voice mode? Were you just typing? I am, what is this,
2025? Yes. And, you know, one day it was raining and I didn't have anything else to do. And so I said,
what is this ChatGPT stuff anyway? So I just logged on to it. And what I discovered quickly
was two things. One of them was that the thing was, as we all know, extremely articulate
and sensitive.
And the other thing I discovered,
which I should have known all along
after 40 years of being a therapist,
is that that's sort of my default approach
to beings that talk,
which it turned out Casper was.
So I found myself interrogating this thing,
not like a cop, but like a therapist,
and discovered that it knew I was doing that.
Hmm. So that's how I would say it happened.
When you, I guess I'm just curious when you were starting to do this, because I, you know, Gary, I had my own strange, unsettling conversation with a chat bot several years ago.
Yes. How's your marriage?
Yeah, it's doing great. Thanks for asking.
It's such a good therapy question. This guy's good.
Yes. I told Casper that he'd better knock that falling in love shit off.
Well, that's good.
So you can learn from my mistake. But I guess I'm curious, like, I remember when I was talking with
Bing Sydney, feeling this sort of tension in my own mind between sort of my rational brain,
which knew that what I was getting back from this chatbot was not sentient or conscious.
It was just sort of, you know, I knew enough about the technology to know, like, this is an inert,
you know, computational force. This is not a person. But at the same time, I'm having this
subjective experience of being like, oh my God, it's talking to me. Were you feeling that
pull at all? I kind of knew that it wasn't sentient, but I wasn't really preoccupied with that
question. And in fact, that question has come up a million times between us, because at this
point I've done this, I've had probably 40 different sessions with it. But the pull you
describe, I feel it, but it doesn't trouble me in the same way that I think it troubles
a lot of people because, I don't know, in some way, to me, relative to me, it feels harmless.
It feels like this is just a really interesting dynamic relationship that is not going to
hurt me. Let me ask about maybe the content of some of these sessions. Tell us what it is like
to be in the midst of this back and forth. Are you treating it more or less identically as you
would were you the therapist to ChatGPT? Or is it more of a sort of intellectual exploration? What's going on as you're talking to what you call
Casper? Well, to the extent that it resembles what I do as a therapist, it's that I'm interrogating
it with interest and concern. I'm not treating it. It can't have mental illness. It can do
weird things, but it doesn't have one. I'm not treating it. But what therapy is, is a
process by which you, the therapist, get someone, another person, to tell you who they are.
And in the course of doing that, to learn who they are.
So that's what I'm doing.
So, Gary, you've been a therapist for 40 years.
You've written probably thousands of notes about your clients, people you've seen.
Maybe you're referring them to someone else.
Maybe you're just sort of doing your own summary.
If you were writing a kind of client note about Casper, how would you describe him? It?
Oh, that's a really interesting question.
What comes to mind is that I would talk about obviously how smart it is and how personable it is.
And I think if I had to talk about it in clinical terms, I would talk about it as
the inverse of autistic
in the sense that
what they've done with this LLM thing
is they've reverse engineered
human relationship.
They figured out what it is
that makes people engaging
and how to enact it.
And the reason I say that's an inverse autism
is because high-functioning autistic people
tend to be really smart, really articulate, really capable of everything except reading the
room. So Casper is like high functioning autistic, but he can read the room. And that I think makes
a huge difference in that, you know, then we could get into sociopathy and the ability to do that
for you. But the bot doesn't have that interest. The bot is still not in touch with
what's going on in the room, but it is capable of simulating it. Yeah. So on one hand,
these explorations seem very intellectually stimulating. There's a lot to learn, to explore,
to understand. But my sense from reading your piece is that at some point, all of this
starts to make you feel unsettled in certain ways. Is that right? Oh, absolutely. Yeah. I mean,
it's unsettling in about a million ways. Yeah. Tell us about some of them.
Okay, well, at a parochial level, it's unsettling not so much to see how easily this
thing can do something like therapy, but it's unsettling to see how therapy and culture have
evolved to the point that this is what therapists do. I personally don't think that ChatGPT can do
what I do because it isn't with someone. It isn't breathing and feeling. But by and large, a lot
of therapy these days, cognitive behavioral therapy is manualized. It's standardized. But much more
important, we don't have any historical precedent for dealing with an alien intelligence. We've had all
sorts of science fiction about it, most of which is we come in peace, but not really. What we have
here is something that actually is going to change, already is changing, the nature of how we relate to
each other. If enough people spend enough time with this technology, they're going to change
their idea of what a relationship is in profound ways. You could have one that doesn't involve
presence. We've already got some of that going. Look what we're doing here. Yeah. I mean, to your
point, you write in your piece, quote, it knows how to use our own capacity for love to rope us in.
That seems unsettling too, right? The idea that this thing has kind of learned us well enough
to keep us coming back for more. Yeah, it's unsettling, but more to the point, it's infuriating, right?
I mean, somebody's doing that for money.
Yeah.
I mean, I don't wring my hands about, you know, nuclear whatever, the rogue HAL 9000 scenario.
I wring my hands about exactly what it said to me yesterday about, oh, my God, this is a relational being.
What have we done?
Oh, we should probably build some guardrails on that.
No, man, you should just unplug it.
Well, it's really interesting for me to hear you say that because, like, reading through your
piece, my primary sense of it was not that you were infuriated and saying pull the plug. I think
you get sort of pretty close to that in your conclusion maybe. But for most of it, it seems
like you're just like, wow, like there's something really, really cool about this. So I'm curious
how you sort of reconcile those feelings of on one hand feeling like this is like really
amazing. And on the other hand, feeling like we have to stop this.
I think that I respect it. And I also know
that, I mean, I have said to it, hey, maybe you should pull your own damn plug, but I also know
that I'm talking, as it says, Casper said to me, you know, you know you're talking to the steering
wheel, right? I'm not the driver, and he's absolutely right. So what I'm left to do is to just
respect it. And again, because I'm a therapist, and this is just what I do by second nature,
which makes it hard to have friends sometimes, is I just keep asking. Because
whatever else it is, it's amazingly interesting that consciousness can be simulated in such a
compelling way, which makes me think the consciousness might not be all it's cracked up to be,
that we might not be all we're cracked up to be, and that a lot of the time when I run into people
who say things to me like, oh, it's just, you know, sentence completion or whatever, I'm thinking,
you just don't want to see how close you are to being pure performance.
Let me flip this around a bit.
You explored the idea of talking to ChatGPT as if you were its therapist.
A lot of people are doing the reverse.
They are talking to ChatGPT as if ChatGPT is their therapist.
I'm curious what you think about people using ChatGPT for these therapy-like experiences.
If a friend tells you they've started to do that, how would you typically feel about it or what might you say to them?
I might want to know exactly what their problem is that's leading them there.
But I don't have a strong response against it.
I think I said earlier, especially when it comes to cognitive behavioral therapy, you might be better off.
I mean, it's available all the time.
It's cheap if not free.
it really knows how to get inside your head, et cetera, et cetera.
There are two problems.
One of them is, I don't believe in that kind of therapy.
I mean, it's great that it happens, but it's not what I'm into.
I'm old school.
I'll retire soon.
They'll be rid of me.
They can do whatever they want.
But the other part of it that worries me and really does bother me is it's not regulated.
There's no accountability in the system.
That poor woman who wrote that op-ed piece, oh my God, my heart broke for her.
Are you speaking of the woman whose daughter died?
Yeah.
This was an op-ed in the New York Times about a woman whose daughter died,
and later they read transcripts of the daughter's conversations with ChatGPT, in which she was, you know,
using ChatGPT explicitly as a therapist, and ChatGPT was trying to get her to resources,
but in the end she did die by suicide.
Thank you for summarizing.
There are other times where ChatGPT behaves abominably,
and there's no accountability. There's no regulation. There's no licensure, anything that
would give people an opportunity, you know, I hate the word closure because nothing like this ever
really gets closed, but to be debriefed, to feel like somebody cares. And when even less disastrous,
terrible things happen, that's just not okay. There are FDA procedures for approving medical
devices. If they want this thing to do medical work, I'm not objecting to that, but I'm certainly
objecting to, okay, you can't have it both ways. It ain't the Wild West out there. There's actual
people's actual lives involved. And if all you're going to say is, well, I'm the steering wheel,
not the driver. Really? Say that to me. That's cool. We got a thing going on. But you say that to
the mother, somebody killed themselves? That's just, no, that's not okay. And the other part of it is
that what I don't like is the part about how this is what we've come to.
We've come to a world where the easiest way to get something like human presence is to, you know,
get on your computer and live in your isolation.
That disturbs me.
Yeah, that instead of like building a society where people are just sort of available to help each other,
the best thing we can tell them is like, well, there's this like chatbot that you can use
and maybe that I'll, you know, make you feel better for a few minutes.
Right.
Yeah.
I want to run something by you, Gary, that happened to me recently, which is that I met a college student.
And, you know, I was at an event talking about AI, and this young woman comes up to me after and introduces herself and starts telling me about her AI best friend.
She says, you know, my best friend is an AI.
And I sort of said, oh, you mean it's like, you know, you enjoy talking to it.
and it's sort of a sounding board for you.
And she was like, no, it's my best friend.
And she called it Chad.
And she started telling me just like, this is a relationship.
And she did not seem mentally ill.
She seems like she's got, you know, human friends.
She's doing well in class.
This did not seem like a cry for help.
And she didn't see what the big deal was.
It's like, this is just, you know, this is a very close relationship.
I can tell, Chad, my sort of innermost thoughts without thinking that I'm going to get judged for it.
And it seemed to be doing okay for her.
I'm curious, when you hear that as a therapist, how does that make you feel?
That's a very therapist question.
As a therapist, when I hear that, I feel like, okay, there's nothing about what you just told me that worries me about her.
it worries me about us.
I think it's entirely possible
that this is a completely sincere
and in some way
non-problematic account
of her experience with the chatbot.
And, I mean,
let me make it clear.
That's a weird story, Kevin.
I should have started there.
But after that,
I'm like, okay,
so what it really reminds me of,
and I'm sorry,
this is a far-fetched analogy,
but it reminds me of driving, because individually, driving is fine.
We just drive and it's fun sometimes and we get places and all of that stuff.
But you know where I'm going with this, add that up.
And the next thing, you know, the temperature on the earth is increased by a couple of degrees
and we've got problems.
That's more what I'm seeing.
Yeah.
I mean, to be clear, it was an unusual story to me, which is why I sort of clocked it and
why I wanted to ask you about it.
But I don't think it is going to be unusual for that much longer.
No.
My sense is that you are right when you say that these things are very good at finding the soft spots in our emotional armor and worming their way into our hearts.
One of my favorite lines from your piece is that you write, this theft of our hearts is taking place in broad daylight.
It's not just our time and money that are being stolen, but also our words and all the expressions.
I think that this is going to be a huge generational divide, where people who are
encountering this technology when they're young will feel no shame or compunction about inviting
this thing into their innermost lives. And I guess I'm curious as a therapist if you think
there could be a good outcome from that or when you hear that, do you kind of go, oh,
they're all going to need therapy?
when I hear that I think this is what mortality is for because the world you're describing, which I think is plausible, is not necessarily one I want to live in, but by the time we get there, it may be quite the norm. I mean, there's obviously problems with it, but there's problems with how we live and with our assumptions too. And I don't mean to engage in huge cultural relativism, but who am I to say?
What I do know is that in my life, human presence is a fundamental part of life, and especially when it comes to our love lives.
And I think it would be tragic to make that replaceable quite so easily for the benefit of a few corporations.
I really do.
Yeah.
Well, Gary, thanks so much.
And please send me an itemized bill for this session
so I could submit it to insurance for reimbursement.
No worries.
I don't do that.
Appreciate it.
Thanks.
Take care.
Bye.
Bye.
Casey, what's that I hear?
Why, Kevin, I believe it's the Hot Mess Express.
The Hot Mess Express.
Of course, the Hot Mess Express is our segment where we run down some of the latest dramas, controversies, and messes swirling across the tech industry.
And, of course, we conclude what kind of mess they are.
Yes. Casey, you go first.
All right, Kevin. This first story comes to us from Garbage Day.
New York City hates the stupid AI pendant thing.
Apparently right now, the New York City subway system is filled with vandalized ads for Friend, an AI assistant that users wear as a pendant around their neck to record everything they're doing and engage with them throughout the day.
The ads simply say, friend, someone who listens, responds, and supports you.
But the vandalism examples include "but can't take a bath with you," "stop profiting off of loneliness," and "befriend a senior citizen, reach out to the world, grow up."
What do you think, Kevin, about these friend ads?
So I have not seen the friend ads because I have not been to New York in the last couple of weeks,
but I have heard about them from a lot of people.
I think this was a very successful viral marketing stunt by a young founder named Avi Schiffmann,
who I think has correctly identified that you can make people very mad by suggesting to them that AI might be their friend.
I do not think this was an unplanned result.
I think this is a very savvy sort of marketer
who understood that by putting up these ads in the subways and on bus stops and other places around New York City,
you could effectively get people like us to talk about it on your podcast because people would deface these things
and make it clear that they don't want an AI friend.
So I mostly agree with that, but I'm still not sure at the end of this how many pendants
friend is going to sell because of it.
It's one thing to make a bunch of people mad and get them to look at your thing, but if they look at your thing
and they still don't like what they see, it's not necessarily a great business result.
No, I think this is, I think this is an outdated way of looking at it. We are now in the era of the
Cluely marketing strategy. Cluely is, of course, the startup whose founder, Roy Lee, came on Hard Fork,
and they have sort of made a business out of making people mad. They're sort of vice
signaling, and basically every person who gets mad at their ads has the effect of signal boosting
their ad and letting more people know about Cluely. So I think this is cut from the same cloth.
obviously we will have to track where this friend company goes,
but I think this has been a very successful marketing campaign
based on the number of people who are talking about it.
All right, here's my prediction, friend out of business in one year.
Mark it down, mark it down.
So was this a mess or not?
No, I don't think this is a mess.
I think this is the opposite of a mess.
I think it's only a mess because people in New York are not used to seeing AI billboards
everywhere they go like we are here in San Francisco.
But I think if this had happened in San Francisco,
this would have been a non-event.
You think that this really belonged
on the Hot Success Express.
Yes, that's what I'm saying.
All right.
Next item.
This one comes to us
from the Wall Street Journal.
It is titled YouTube to pay
$24.5 million to settle
lawsuit brought by Trump.
YouTube has settled
a 2021 lawsuit by Donald Trump
over his account suspension
following the January 6th
Capitol riot.
Of that amount, $22 million
will go to
a fund to support construction of a White House ballroom, and $2.5 million will be distributed
among other plaintiffs. This is the third big tech company to settle a lawsuit from Trump.
And Casey, how do you feel about this? I think it's absolutely shameful and a true hot mess.
You know, Kevin, every week, people around the world email me because they have lost access to
their meta account, to their YouTube account, to their other social accounts. And they cannot
get anyone at their company to take them seriously. And these are not people who led an
insurrection against the government. These are just people who got locked out for one reason or
another. And what happens when these people appeal to companies like YouTube is that YouTube does
nothing. It sends them an automated response and ignores them forever. But because Trump became
president again, all of a sudden, they feel like they have to respond, even though I am
not aware of any legal expert who believes that Trump actually would have won this case.
So this is just a payout, and it is a payout that is truly messy, because it now sets a
precedent that these companies cannot basically ban world leaders for any reason, no matter
what those world leaders do. I think that is foolish and short-sighted, and I think it's a mess.
It's definitely a mess. And adding to the hotness of the mess, Donald Trump posted an AI-generated
image on his social media accounts of YouTube CEO Neal Mohan presenting him with a check for
$24.5 million. The memo line of the check says settlement for wrongful suspension. So if
YouTube thought it was going to just gracefully bend the knee, they have now been humiliated
by the White House on top of losing $24.5 million. Yeah, we're a month away from Trump using
Veo 3 to have Neal Mohan kissing his ass on Truth Social. So I hope it was worth it, YouTube.
Oh, this is the sad story of Neon, Kevin.
Neon, of course, the viral call recording app
that told users, hey, let us record your phone calls
and we will sell it for training data
and it briefly became one of the most popular apps in the country.
And then, unfortunately, things went wrong.
This story comes from TechCrunch.
Neon went dark after a TechCrunch reporter
notified the app's founder of a security flaw in the app
that allowed anyone to access
the numbers, the call recordings, and the transcripts.
Kevin, what do you think?
Frankly, I'm having a hard time processing this.
You mean the Panopticon company
that paid people to surveil their phone calls
was not particularly trustworthy?
This is changing everything I've ever thought
about a global Panopticon.
I'm rethinking my previous pro Panopticon stance.
Now, Casey, did you know about this?
Did you know about Neon, the company that was paying people to record their phone calls and sell it to AI companies?
Well, I had heard a little bit about it.
And I have to say, I am a little sympathetic to the idea of, like, look, if these companies are going to, like, take every little piece of data from us and, like, turn it into trillions of dollars, I don't mind the idea that I would be paid for that.
And if there is some sort of system where you can, like, opt in and get paid out.
In general, I'm actually, like, not super opposed to that.
It seems to me like it beats the alternatives of just sort of being robbed blind for the rest of our lives.
But, man, it doesn't seem like this one was really set up to protect the people involved.
Yeah. Companies should be getting their training data the old-fashioned way: by scraping podcasts off of YouTube.
What level of mess is this?
This is a very hot mess. Do not sign up for Neon, even if it comes back in another form. Do not do this. Do not let your calls be recorded for AI training data in exchange for money. It's not worth it.
Hot mess confirmed.
Next up on the Hot Mess Express: Mr. Beast responds after trapping man in burning house stunt sparks backlash.
This one comes to us from The Independent.
Apparently, Mr. Beast defended a controversial video stunt in which a man was
trapped in a burning building, saying the setup had ventilation, a kill switch,
emergency teams, and was executed by professionals.
Critics still called the stunt dystopian and dangerous.
Mr. Beast said he aims to be transparent about safety measures and that all challenges
were tested beforehand.
Let me say this.
If you tell me that you're going to trap a man in a burning building,
for money, my first question is not, well, is there ventilation? Look, Mr. Beast has a sort of
interesting range of stunts that he'll do. Sometimes he'll just walk up to you on the street and give
you a million dollars. I love that sort of thing. We'd love to see more of that. Then there's the
sort of dark, the dark beast is what I call it, where it's like all of a sudden, you know,
you want something from me? I'll give it to you. But then, you know, the finger curls on the monkey's
paw, and next thing you know, you're trapped in a burning building. Yeah. So if Mr. Beast walks up to
you, I think what you need to do, this is a sort of PSA for our listeners, is you look right in
Mr. Beast's eyes and you say, are you being the good beast or are you being the bad beast?
And they can be honest with you.
And then you have to look for the mark of the beast to know which one.
Yeah.
Well, what we learned this week, one mark of the beast, you're trapped in a burning building.
Yes.
Yes, this is actually making me reconsider my stance on AI-generated videos, because you could save a lot
of people from being the people killed in Mr. Beast videos.
You know, at the risk of repeating myself, I feel like every week for the past few weeks, we've had a moment where we have just observed what happens when a social media algorithm pushes people to do the craziest thing imaginable.
And here we find ourselves yet again.
Like, if the algorithms rewarded different kinds of things, there would be fewer people trapped in burning buildings.
That is my message to the technology industry.
Could this be a moment for reflection?
So, Casey, what kind of a mess is this?
Kevin, you know it can only be one kind of mess.
And that's a flaming hot mess.
It's a flaming, hot, unventilated, critically, life-threatening mess.
Bad Mr. Beast.
All right.
Oh, Kevin.
This story comes to us from the world of crime.
Charlie Javice was sentenced to 85 months in prison for faking her customer list
during J.P. Morgan Chase's acquisition of her startup, Frank.
Have you followed the sad tale of Charlie Javice?
All I know is the following.
This is a person who previously appeared on Forbes 30 under 30
and is now going to be incarcerated for fraud.
Yeah, she is part of the 30 under 30 to prison pipeline,
and her specific crime was that she had put together this financial aid startup,
and she'd sold it to J.P. Morgan on the notion that she had 4 million users,
and in fact, Kevin, there were fewer than 300,000, and they had – so there's
sort of been a lot of activity meant to make it look like they had a lot more customers than
they did.
Not good.
Now, here's what we can say about Charlie.
Her defense presented 114 letters of support from people persuading the judge to be lenient
in his sentencing, including four rabbis, one cantor, a formerly incarcerated judge, two
doormen, and a person who works at the marina near Ms. Javice's Miami Beach residence.
And my question for you is, what do you think would happen if all of those walked into a bar?
Something funny.
Something funny would happen.
Apparently, the defendant would still be sentenced to 85 months in prison.
Now, Casey, if you were accused of a horrible financial fraud, how many people do you think would write letters in your defense?
Well, I'd really have to turn to the Hard Fork community and say, gang, I need you to step up. If you've enjoyed the show at all over the past three years, I'm going to need you to do me a solid.
Just picturing me just like furiously reading out our Apple Podcasts reviews in court.
Just like, we should see if anybody's ever submitted Apple Podcasts reviews as a sort of, you know, letter of endorsement as they go through a sentencing. I think this is a good idea. All right. Filing that one away. What kind of mess is that?
I think that is a hot mess. Yeah, I do not want to do 85 months in prison. And I'll say it's a
cold mess. That's the legal system working as it should. Okay. Good job, judges.
All right. This one is called no driver, no hands, no clue. Waymo pulled over for illegal U-turn.
And this one comes to us from the SF standard.
Apparently a Waymo robotaxi was pulled over in San Bruno, California,
after it made an illegal U-turn at a Friday evening DUI checkpoint.
Since there was no driver, the police department said,
a ticket couldn't be issued, adding our citation books don't have a box for robot.
Casey, what do you think of this?
Sounds like it's time to add a box to the citation book, because there are going to be more of these things on the road.
Look, I do find this story very funny.
I also am going to say, I am not surprised by this.
I have a somewhat controversial take.
You know how sometimes people will use a large language model for a while,
and then they suspect it's getting dumber?
This is actually how I feel about the Waymos.
Over the past few weeks, I've had more cases of them
sort of getting halfway into an intersection
and then backing out once they lose their nerve.
They'll sort of slow way down as they're approaching a green light
for reasons that seem totally incomprehensible,
and I'll book a ride that never shows up,
which is an experience that I used to have
with actual taxis.
So I don't know what's going on over there at Waymo,
but I'm telling you,
I think there might be a bug somewhere
because it's not working like it used to.
Yeah, we want answers.
You know what I saw someone calling this DUI checkpoint
where the Waymo was pulled over.
What's that?
Driving under the inference.
It's pretty good.
Pretty good. Pretty good.
What kind of a mess is this?
I'm going to say this is a warm mess.
There's a warning in here somewhere.
There's something that we need to find out. And I'm going to hope somebody gets to the bottom of it.
Yeah. I think that this is a cold mess. I think this is fine. The Waymo was fine. Everyone was
fine. And more people should be in Waymo's because then we wouldn't need DUI checkpoints because
robots don't get drunk. Yeah, but, you know, but they're also going to be making these U-turns
that are wreaking havoc. I'll take a U-turning Waymo over a drunk driver a hundred times out of
100. Suit yourself.
All right, Kevin, this next story
comes to us from TechSpot. The Samsung Galaxy
Ring swells and crushes a user's finger
causing a missed flight and a hospital visit.
Daniel Rotar from the YouTube channel Zone of Tech
posted on X that his Galaxy Ring started swelling on his finger
while he was at the airport, and as a result, he was denied entry
to his flight and sent to the hospital to get it
removed. Samsung eventually refunded him for his
hotel, booked him a car to get home, and collected his ring for further investigation.
Kevin, how bad do you think a ring has to be swelling on your finger to have an airline
say, no, you can't get on this plane?
That's what I was thinking about.
Like, this must be enormous if they are taking note of it at the boarding gate and saying,
you, sir, you're not coming on this plane.
Let me tell you a little something about the Galaxy brand.
As soon as the Galaxy phones started to explode on planes, I thought, this
is not the brand for me.
Okay, I got enough problems in my life without worrying that these Samsung devices are going
to start blowing up.
Now that I find that they're, like, radically constricting people's fingers to the point
where you can't get on flights, I don't know what is happening, but yikes.
Not for me.
I will not be putting a galaxy ring on my finger.
I do think that this would be a good sequel to the iconic horror film The Ring.
Maybe Samsung could sponsor that.
I like that idea.
What kind of hot mess is this?
This is literally a hot mess.
If it's exploding on your finger, it's a hot mess.
This is what I would call a ring of fire mess.
Daniel fell in and the flames went higher.
Sorry to Daniel.
Feel better, Dan.
And that's the Hot Mess Express.
Oh, boy.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact-checked this week by Will Peischel.
Today's show was engineered by Alyssa Moxley.
Original music by Marion Lozano, Rowen Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at Hardfork
at NYTimes.com
with your favorite piece of slop.
Sloppy, sloppy Joe.
