Hard Fork - Ed Helms Answers Your Hard Questions
Episode Date: May 16, 2025
The actor, comedian and author Ed Helms has a new book out about historical blunders. He swings by Hard Fork to tell us about it and answer your moral quandaries, ethical dilemmas and etiquette questions about technology: How do I tell my mom she can’t post about her grandkids on Facebook? Am I being an A.I. hypocrite at work? And is it OK to troll the scammers who blow up my cellphone?
Guest: Ed Helms, actor, podcaster and author of “Snafu: The Definitive Guide to History’s Greatest Screwups”
Additional Reading:
Interview: Ed Helms on Historical Snafus and His Reading Life
7 Months Inside an Online Scam Labor Camp
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
What's going on with you?
Well, I'll tell you what's going on with me is allergies.
It is allergy season, the sneezing.
It's allergy sneezing.
It's allergy sneezing season, the congestion.
I'm taking multiple pills.
I feel better today than I felt at any point
in the past three days.
And thank goodness, it's a recorded day.
Are you a Zyrtec guy or a Claritin guy?
I am, I...
Or a Flonase guy.
So here's the best allergy medicine.
Singulair is the brand name for it.
The active ingredient is something called montelukast.
This thing changed my life, like seriously.
Like I was a runny nose monster for like years,
and then montelukast came into my life,
and it really helped me breathe through my nose.
Wow, have you ever tried like dissolving it
in like a tea of any kind?
No, why would I do that?
Because then you'd be getting the singularity.
How did I walk right into that?
All right, start the show. That was great.
I'm Kevin Roose. I'm a tech columnist at the New York Times.
I'm Casey Newton from Platformer. And this is Hard Fork.
This week, Ed Helms from The Office comes to our office to talk about his new book and answer your hard questions about tech.
I hope this episode wins a Dundie, Kevin.
I have a hangover.
Well, we are very excited about today's episode.
We told you this was coming a few weeks ago, and today we are going to be joined by Ed
Helms for some conversation and some hard questions submitted by you, our listeners.
I am so excited about this, Kevin.
Me too.
So this came about because of a chance encounter
that we had in Austin, Texas, when you were down there
for the iHeart Podcast Awards.
We are both big Ed Helms fans, and we
were excited to bump into Ed in the backstage area,
and hear, to our surprise, that he listened to Hard Fork.
Yeah, and how did that happen?
Was that a mistake on your part, or were you sort of subscribed to the Kara Swisher feed?
How did you start listening to Hard Fork?
That's a great question.
I have no recollection how I started, but I don't know.
You guys are pretty good.
Oh, thanks.
You're good at what you do.
Thanks.
So Ed has a new book out called Snafu.
It's what he calls the definitive guide
to history's greatest screw ups.
Should have a chapter or two in there about Casey.
Yeah, I sent them all of your screw-ups, but they didn't make the cut.
And Ed is here with us today in San Francisco, and we said, come on down to the studio and
let's hang out and answer some questions.
So without further ado, Ed Helms, welcome to Hard Fork.
I am very excited to be here, gentlemen.
So what are you doing here?
Why are you in San Francisco?
I'm on my book tour.
Yeah.
I'm just, I'm bouncing all around the country.
I went to New York, Philly, DC, Atlanta, Chicago, Boston, and now I'm here.
Tomorrow I go to LA and I'm hallucinating.
I have no idea what's happening anymore.
That's common in San Francisco.
Did you get the little baggies in Dolores Park?
Right.
So Ed, your book is all about history's greatest screw ups.
Do you have a favorite tech related
snafu that you could share?
Oh, wow.
Well, do nuclear warheads count as tech?
Yeah, sure.
Okay, great.
This one's amazing.
This was the Cold War, the 1950s.
Cold War did not bring out the best in us, as it turns out.
A plan was hatched to shoot a nuclear warhead at the moon.
And, uh, you might be wondering why, why would anyone think this is a good idea?
The thinking was that, you know, we're in fierce competition with the Soviets.
Everyone's sort of living in a nuclear existential dread.
If we can hit the moon with a nuclear warhead, the Soviets will be so terrified
that we'll win the Cold War. And the research showed that it was very likely that it could easily
miss the moon, slingshot around the gravitational field, come right back and hit us. And...
Which frankly would have served us right. Yeah, you know what? I think that's a fair punishment.
And so eventually it was not followed through on, thank God, but a lot of time, money, and
resources went into it.
Carl Sagan was part of that research team.
It sounds like an idea that Elon Musk would come up with after like a ketamine bender.
He's like, let's nuke the moon.
There's an episode of The Simpsons where Mr. Burns
wants to blot out the sun.
Oh yeah, of course.
This kind of reminds me of that, except it really happened.
Yeah.
There's a fun section in your book
about the top five computer viruses of the 80s.
We don't have to go through all of them,
but I'm curious if you have a favorite virus
from the viruses that you've covered.
I like the guy that just wrote a dumb poem
that pops up on everybody's computer.
Like what, I mean.
Was this the message of peace to Mac users
or was this a different one?
No, this was the Elk Cloner virus.
Yeah.
God, Casey.
Oh, sorry.
I'm still catching up on my 80s viruses over here.
It's one of those things that a programmer thinks
is sort of cute and a little cheeky,
but then it just is disastrous.
Right.
Yeah, I feel like the hackers aren't so cheeky anymore.
Like you don't see like the good time prank hacks,
you just see the like stealing, you know,
five million dollars in Bitcoin pranks.
Yeah, bring back fun hacks.
Yeah, yeah.
Yeah, the ones that make us chuckle.
So Ed, whenever we get a notable guest like you on the show,
we always ask them about their relationship with technology.
So what's your relationship with technology like these days?
Well, it's really evolved.
I used to be very much on the ball and I would say very,
almost like ahead of the curve.
I was like an early adopter of technology as it came out. I was getting,
you know, the latest laptop. You know, I jumped on the iPhones as soon as they came out.
I've always been a Mac guy, so it was fun. And I sort of was like charged up by it. I liked technology. Somewhere, I don't know, in the last five, 10 years, it has just blown past me.
And now I feel I'm just that old guy mad that I can't log into my Citibank account or whatever.
It just is a-
We'll help you, but just share your account name and the routing number if you would.
But I feel like there is a language, a visual language of iconography within apps, and a sort of assumption that you speak that language. And sometimes I'll open a new app or I'll upgrade an app, and like, I'm just useless all of a sudden.
It's like going to the grocery store and they've changed the location of all of the food.
No, they've changed the food. Like avocados have turned into something you've never seen before.
Yeah, the broccoli's blue, yes.
Do you have a problem app or an app
that you spend way more time on than you would like to?
Probably just like Instagram, like everybody else, right?
Or I mean, does that make me old?
No, no, no, not at all, not at all.
That's very millennial coded actually.
Yeah, if you said TikTok, it'd skew you a bit younger,
but you know, look,
there's billions of people using Instagram.
Have you ever deleted Instagram
or like set a screen time limit on it?
Cause you thought, I just can't be looking at it these days.
No, I've never put a limitation on it.
You think that's for quitters?
Yeah.
It's not ruining my life.
I don't feel like controlled by it.
It just keeps me up too late some nights.
Honestly, I've laughed harder watching Instagram
than I have like a lot of movies and TV shows lately.
Like I get bit, it drives my wife crazy
cause like I'm shaking the bed, you know,
and she's trying to fall asleep
and she's just like, turn it off.
She's not concerned about my screen time.
She's just like, stop shaking.
And I'm trying to laugh quietly.
It'd be funny if like the sort of like sleeping
in separate beds thing from the 1950s came back
among couples, but just because of TikTok.
I can see it happening.
What makes you laugh on Instagram the most?
Is it sort of the native creators
who are just kind of doing fun bits and sketches,
or is it like people have sliced up every movie
and TV show known to existence
and they know how to serve you your very favorite ones?
It's usually prank related stuff.
And I have a pretty keen radar, but Instagram has hacked it.
So I hate anything that hurts someone physically.
Like I just can't get into that.
I don't like watching people fall or bonk into things.
That's very unnerving to me.
But I also don't like scare pranks that are clearly traumatizing to somebody.
And there's tons of that stuff.
Yeah, I feel like you just described 80% of all pranks.
So what's left over?
What's left over are just like the jump scare pranks,
and they'll just be like a compilation of jump scares
and I'm like crying, laughing.
Or just like weird scare pranks
where there was a trend for a little while
of people that would lie on the floor
with just their head sticking out of a door on the floor
and someone would walk in and see the head on the floor.
Just like, apparently it's terrifying
because the reaction is amazing.
Well, if you ever come back to Hard Fork,
I know how I'm greeting you.
Perfect.
Exactly, exactly like that.
What about AI?
Are you using any AI stuff in your life?
How do you feel about it?
Yeah, I definitely use ChatGPT.
For what?
Mostly just queries, like just picking its brain for research or I want to know about
something.
It's kind of my new Google, I guess.
Do you feel like you're Googling less
because you're using ChatGPT?
Yes, for sure.
Although I still Google and I'm definitely,
I think you guys did a whole,
didn't you guys do a whole episode about this,
about how like now with Google's AI,
like no one goes to the links anymore. Exactly.
And there are all these industries that have like built their entire business models on the Google
links. And that's, yeah, that's a bummer. That's just one of the many AI fallouts. I use it
occasionally to help like think of a, you know, an email that I'm having trouble with,
especially if it's something kind of loaded and I just.
You have to fire somebody.
Yeah.
Yeah.
The relationship between Hollywood actors, writers,
people who work on films and TV shows has
been pretty strained toward technology,
I would say, over the past few years, especially.
First, it was the streaming platforms coming in
and sort of disrupting that part of the business.
Now, actors and writers have been some of the loudest critics
of AI talking about how it steals work from people.
What do you think about the film industry's relationship to tech?
Well, do you mean tech, or just AI in particular?
Both.
I mean, with tech, the relationship has long been,
it's like so integral to filmmaking.
Obviously, the technology going back to the dawn of
photography and all the evolution of camera operation
and movement and manipulation and then digital
manipulation of the image, that grew into digital
animation and it's just, it's so powerful and exciting
on many levels. I do think that ChatGPT introduces like a completely different paradigm of tech in entertainment. And, you know, I feel like sometimes when you read notes on a project, you're like, did this executive just feed this into, like, a thinly veiled ChatGPT?
So you think, you're pretty confident you've seen some AI notes at this point?
I do, yeah, I think so. But I'm also, you know, stunned by ChatGPT's facility with creative tasks. If you give it, you know, an assignment, it's totally stunning.
And I think the creative community is terrified and
for good reason. And I'm also terrified not even
from a business standpoint, but just from like a
human, like when the need for human creativity disappears,
will we still be creative?
Will we still, you know, what will drive us to sort of like create beautiful things and
amazing things and just whether it's a, you know, a movie for the movie business or it's
just a piece of art for art's sake, that does terrify me.
Yeah.
What, what, uh, obviously that's a very difficult question,
but do you have any thoughts on how creative people
should be approaching this moment?
It sounds like you've landed in a place similar to Kevin,
which is this can be a useful tool,
maybe a creative partner, but there's like probably some
part of your creativity that you wanna reserve for yourself
and not give over to a machine.
Yeah, that's a good characterization
of my relationship with it now,
but it's only gonna get better and more powerful,
and I feel like also more intoxicating to use
and more sort of like exciting and affirming, but yeah.
Is there any tech product or service out there right now
that you think will sort of become
a historical snafu that you might write about in a future volume of a book?
What out there today in the field of technology do you look at and think, oh, that can't possibly
be good?
Wow.
So I actually have kind of a macro take on this, which is that we're at a point with
device usage where it's clearly bad for us.
Like it's clearly harming us.
And so it feels like cigarettes in the nineties, right?
Where it was like, everybody knows, but come on, we're still doing it, right?
And all the sort of corporate interest in it is still just sort of dumping money and more advancement into it.
And in the 90s, it was like, we're releasing better cigarettes and healthier cigarettes.
But everyone still knew it was bad.
And it feels like we will, I hope, at some point recognize that, or it will come to that inflection point in the same way that cigarette smoking did, where, just as a culture, as a society, we're like, this is actually terrible.
The only difference, or I should say, there are a lot of differences between phones and cigarettes, but like one of the scariest differences is that
it's like if someone told you the only way
to do your banking or to do your shopping
is to smoke your cigarettes.
Right.
And so you have, it becomes so integrated into your life
that you have no alternative to this thing
that is also toxic to you.
So I don't know, I use my phone a ton
and I still feel like I use it too much.
It's a conundrum.
Here's how I say you solve it.
You can, you actually sort of borrow a lot of ideas
from the cigarette example and you make it
so that you can only use Instagram outside
and you have to be 15 feet away from the door.
Maybe that does something.
Also you stink when you come back.
Yes, exactly.
It makes you stink.
Exactly.
It gives you terrible breath.
All right, when we come back,
we're gonna answer your hard questions.
So, Ed, for the last few weeks, we have been collecting questions from our listeners about
their moral and ethical dilemmas involving technology, and we're excited to get your
take on some of these too.
Fantastic, let's do it.
Yeah, well, I would just mention this really has become
one of our favorite segments to do,
not only because our listeners are great
and they ask such interesting questions of us,
but it really lets us get a peek
into people's relationship with technology,
which I think is at the heart of what we love
to talk about on the show.
Yeah, so every time we've done this,
we've gotten tons and tons of questions about AI
and the ethics around AI use.
That will be a trend that continues today.
We got a ton of questions, the most popular topic by far.
But for this installment,
I thought we could start off with some of the non-AI
questions from listeners,
and then work our way to the AI pile.
All right, ready to go?
Yep.
So first up, we have a video from a listener
asking that age old question, what do I do about my mom?
Hi, Kevin and Casey.
My name's Charlene.
I love your podcast.
Thank you so much for doing it.
I listen every week.
I am listening up in Canada.
My question for Ed is a little internet privacy related.
How do I break it very gently to my Gen X mom
that I do not want her posting any pictures
of her future grandkids onto her Facebook wall?
Oh, interesting.
Oh, yeah.
What do you think, Ed?
That's a good one.
Like I'm a Gen X dad and I'm like,
no way am I putting pictures of my kids.
You don't put any pictures of your kids online.
No, nowhere.
And I have had that conversation with family members,
like please, you know, pull back on this or that.
Cause sometimes you're just off at like your cousin's house
and they're taking pictures and it's all great.
It's like a family barbecue,
but you just have to be a little careful.
I mean, there's just no substitute
for direct conversation about this.
Although, you could consult ChatGPT
on how to approach the subject
in a tactful and tender way.
That would work.
The thought that comes to mind for me
is like maybe the way to have the conversation
is to present an alternative, right?
And say like, hey, like the baby's coming next month.
We're all very excited.
We would like for you to not post these photos
sort of in public forums like Facebook,
but we would love you to post them
to like our shared iPhoto library
where only our family members can see it.
And we're gonna get it started for you,
and I'm gonna add it to your phone manually,
and now you're gonna get an endless stream of photos,
and you can share all of them this way.
That might, I think, help the situation.
Yeah, I know a lot of families with young kids
who do this either on the Photos app,
you can sort of create this shared album
and post photos there, and people can comment on them.
It sort of feels like social media,
but without the publicness of it.
Or you can do like a WhatsApp thread
or a group text or something.
But yeah, I think that's a really key thing is like,
you do not wanna deprive grandparents
of photos of their grandkids,
or else they will just start taking them
and posting them on their own.
All right, well, I think we solved that one.
Yeah, so next up we have a voice memo from a listener
who wonders what's going on
with her friend's online calendars.
And this listener asked us not to use her name
in case the friend listens to the show.
I've recently started doing some admin work for a really good friend.
Most things are going really well, but when we need to meet virtually or do a call,
I contact my friend to find a time, tell her I'm flexible,
as my calendar is a lot less full than hers.
Just send me a calendar invite at a time that works for you and I'll make it work.
The first time I did this, I waited and there was no calendar invite a few hours later.
So I was like, do you want me to send you an invite?
And she said, no, I put it on my calendar. But without adding me, she wants me to
make my own calendar invite on my calendar. So like just two calendars off in space, not collaborating
and with no visibility between them. I think this is insane and very weird as well as unproductive,
self-defeating and frankly inconsiderate. She definitely doesn't think it's weird at all.
I've worked in offices for many, many years,
and I've never encountered this.
Am I being hypercritical about this,
or is it as weird as I think it is?
What do you think, Ed?
Well, I want to make sure I understand correctly.
So she's really upset that her friend just feels like
she wants to have separate calendars.
And is not sort of adding her on the calendar invite.
It'd be like if I were like wanting to hang out with Kevin
and I added onto my calendar,
hang out with Kevin at noon,
and then I didn't add Kevin,
so now he has to go make his own calendar entry
being like hanging out with Casey.
Right, right, right.
I mean, what occurs to me is like,
I'm not sure if this friend of hers understands
that you can like do a shared calendar invite
without giving the other person like access
to your entire calendar.
Yeah, it feels a little bit like generational
because that's my take is like,
I just manage my own calendar.
I find myself accepting calendar invites here and there,
but I rarely send them out.
That's a flex. Yeah.
Well, I don't know, I just, I don't think that way.
I'm just like, it's an old school way of doing it.
Everyone manages their own calendar.
You agree on a date and time in an email
or a text or whatever,
and then you add it to your own calendar.
But yeah, I don't know.
Now here's a question for you both.
Do you let other people see your calendar?
Because you can give people access
to be able to see all the appointments on your calendar.
And I'm curious if either of you do that.
Ed?
I have someone that I work closely with
who has full access to my calendar.
But of course I have multiple calendars.
My wife has access to another calendar.
And I don't know if anybody other than me
has access to all of them.
Yeah, I'm basically the same.
Like I have an assistant who can see my calendar
and I have a boyfriend who can see my calendar,
but that's it.
How about you?
My wife can see my calendar,
but because there's some like security settings around it,
it only says like busy or not busy.
Yeah.
It doesn't say like what I'm doing.
Interesting.
So that's an interesting way to do it.
But also what is the working relationship
of these two people?
Like if they're in an organization where this standard is set,
then yeah, it's a reasonable expectation to have.
But I think if you're dealing with two people in different organizations that are
collaborating, then you're just allowed to assert your preferences.
Right. So for our anonymous listener, I think our advice is make sure that your
friend who you're doing some admin work for knows how calendars work and that you can create shared events
without sort of turning over your entire calendar.
And if it's not an issue that results from a lack of understanding, then maybe have a
conversation about like how much harder it is to operate in this close collaboration
without being able to have the same calendar invites.
I have slightly different advice, which is if I were her, I would just take the initiative
to just send all the calendar invites.
Like, if it is a technical issue, I think it's going to be easier for her to be like,
let's agree on a time, great.
I'm sending you the calendar invitation.
Now the problem is solved.
And if she's not open to that, this friendship is over.
I'm going to add another layer to that.
Which is that first, just assess whatever, you know, calendar culture difference you guys are dealing with, because this person may really not like to use calendar invites for any number of reasons, and whatever you're used to may not be practical for them.
Yeah, you've got to find out what kind of freak your friend is.
All right, let's take the next one. This, I think, Kevin, is a question that just about everyone with a cell phone will resonate with.
Hey, Kevin and Casey.
Love the pod.
My name is Calvin.
I'm from the East Bay.
And here's my hard question.
I got a lot of scam emails, texts, and phone calls.
And I love stringing them on for as long as possible or just making them mad.
My wife hates that I do this, but it brings me great joy.
Here's my dilemma.
Recent reports have come out revealing that many of these scammers
are being trafficked, extorted, and forced to do this work against their will.
It has become a form of modern slavery.
I know this is true for some of the people I probably interact with.
So is it wrong for me to mess with these scammers?
Should I just ignore them or can I keep having fun even if it feels morally complex?
Thanks.
Also, last thing, I know an heiress that recently came into Billions of Dollars and wants to
support a tech podcast for people.
Send me your socials and we can make it happen.
That's intriguing.
We got to follow up with him.
Well, what do you think, Ed? I imagine you might get these texts.
How do you react when you get them?
I'm just cold ignore. It's so much easier that way.
I can see getting a little bit of like evil glee out of sort of taunting somebody that's out to scam you.
But you're also opening yourself up to more risk that way.
You really never know who you're talking to,
what they know about you.
And like, just don't open that can of worms at all.
You know, I have to say,
I think Ed's approach is the right one here.
Like what Calvin told us is true.
The people who are doing this, like for the most part,
this was not their chosen lot in life.
Like something has gone horribly wrong for them
that they have to participate in this.
And you're just like taking somebody who has a very hard life
and making it even harder.
So while I get that it is so satisfying
to text these people back, and I will admit,
I have texted these people back
and cursed them and insulted them.
I have gone to a place of just not doing it.
And I would also just appeal to Calvin's self-interest
because my understanding is,
even if you are just responding to say,
like essentially screw you,
you do get identified as a warm phone number
and that information is shared with other scammers.
So by responding in this way, Calvin,
you may be making your own problem worse.
Now, how long do you guys think
that we will actually have human scammers calling people?
Like, how long before all of these are just hyper convincing AIs speed dialing people?
I mean, if you believe the research about persuasion that we've been talking about on the
show, you know, might not be all that much longer. But I have to say, Kevin, that might be like a
human rights victory, because then maybe they would take these people that are like locked up
in these scam centers and actually let them go home.
Yeah, that's one job I would love
to see automated very quickly.
Also, like, it's not your best self to do this, right?
Yeah.
It's not kind of reinforcing the best side of you
to be making someone else's life harder,
even if they're not in some sort of like terrible situation,
they're just a scammer.
Like, strive for better in yourself.
Do better.
As Melania Trump once said, be best.
Yeah.
Yeah.
All right.
Next up, here is an email that came to us
from a listener named Louis, or Louie, I suppose.
He wonders essentially, is it ever right to commit a crime
in order to prevent another crime?
So I'll read an excerpt from the email.
He says, a couple of years ago, circa 2016,
I had an idea I pursued, abandoned,
and wondered about ever since.
I started guessing private keys of a well-known blockchain
using some heuristics that I thought some less aware people might try to use
to generate their addresses.
It worked quite well, and I found one address
with several thousand dollars worth of cryptocurrency on it.
Most of the accounts I found had transactions on them
that had been completely emptied,
and when checking the accounts that had emptied them,
they were marked as fraudulent
by several blockchain explorers.
So to recap, I had found a stack of gold and it was only a matter of time before a bad
guy swooped in and stole it.
After some deliberation with my parents and girlfriend, now wife, I decided that the safest
course of action was to do nothing.
What do you think should have been the best and legally sound reaction?
Inaction, taking the tokens as my own, donating them to charity, or maybe something else I have not thought about.
What do you guys make of this?
Yeah, this is not complicated to me.
Just stay out of the mess.
It's like jumping into, you know,
like a meeting with a bunch of mobsters
and being like, this is a moral quandary, should I join in with them
or should I just leave?
This is like actually the plot of No Country for Old Men.
Yeah, I have to say, we should disqualify this
for not being a hard question.
This is just like, you know, do not commit a crime
and hopefully that will save you.
Now I will say, as far as some advice on what they could do that would be productive with
this, some blockchain projects, a lot of tech projects have these kind of bug bounty programs
where if you discover a security flaw in their system, you can contact them and say, hey,
I'm a security researcher and I found this bug, and sometimes they will pay you money for doing that. So if this blockchain project, whatever it was, has a bug bounty program, this person could actually turn that in and make the whole system safer for everyone, and maybe get a reward out of it.
That's a great idea, Kevin. Do that.
So one last one before we get to our AI questions. This is from a listener who wonders when did headphones become optional?
Hi, my name is Sarah,
and I'm calling from Stratford, Ontario, Canada.
I have noticed that there's an epidemic these days
of people watching videos out loud on their phone in public
in restaurants, movie theaters, on trains.
And I've tried various different ways
to politely ask people if they have headphones
or if they could turn off the sound
on their games or their videos on their phones
to varying degrees of success.
Often people are quite rude to me
and just flat out say no.
So I was wondering how you would handle this.
Thank you.
I love this question.
Ed Helms, how would you handle this?
It sounds like she's handling it exactly right.
You say something and I rarely see this.
I've seen it, I did see it on an airplane recently,
and I was stunned, because on an airplane,
you have to listen so loud.
And this person had their phone just cranked all the way up,
watching like a basketball game or something super loud.
And yeah, I think her approach.
Did you say something?
I didn't.
I was a few.
He can't say anything. Do you know what social media
is gonna do to him if he says something?
I was a few rows back, it wasn't that bad.
But I could tell, for me it wasn't that bad,
but I could just tell it was crazy.
But yeah, I don't get it.
I feel like certain, there's just like,
some people don't understand how awful that is.
It's so true.
The reason I was so excited to get this question
is because I see this all the time now.
I just spent two weeks in New York.
I would say every other subway car I was on,
somebody was doing exactly this thing.
Their behavior seems insane to me.
I'm an extremely conflict-avoidant person.
I would never once think to do what Sarah did
and actually ask these people to maybe turn it down, although that does seem like the right thing to do.
So I don't know what I could do.
The only thing I'm left thinking you could do is try to guess this person's private keys
and steal their crypto.
What would you do?
Or put your own headphones on.
What I would do, and I don't encounter this that much because I don't spend a lot
of time in places where this is going on, but-
Kevin only takes private cars everywhere
No, but I think my strategy, if the sort of earnest request to put on headphones or something or turn down the volume failed,
I think I would just start asking them questions. I'd be like, hey, what game are you playing there?
What are the rules of Temple Run? Yeah, what's your high score?
Just troll them. Yeah, like I see you're watching a basketball game,
like fill me in, who are the hot draft picks this season?
And eventually maybe they just like catch on
and get so annoyed that they turn it off
and do something else.
I actually, I would love,
I don't think I could ever do that,
but I would love to observe you doing that.
It is crazy, like, cause you know,
headphones are so cheap at this point.
I mean, you can get earbuds for $15, I bet.
And yet it seems like the cheaper they get,
the more people are just saying,
you know what, the heck with it,
everybody can listen to the basketball game with me.
The weird thing too is when people are actually
having conversations on speaker phones.
Yes!
Like just hold it up to you.
Yes!
Like they'll hold the phone in front of their face
like a little piece of pizza
and talk into it with the speaker on. Just put it up to your ear.
I was, again, being in New York,
I saw multiple people just walking down the street,
FaceTiming with people, and I'm like,
is it because you look so cool
because you're walking through the streets of New York
and you just sort of really want people to have the visual?
Like, I do not understand.
You're so likely to just, you know,
fall into an open manhole cover.
It's true. That is always happening.
Yeah.
All right.
When we come back, we will tackle your hard questions
about AI, including a worker who fears backlash
for using AI at work and a boyfriend who worries that AI
could doom his relationship.
Uh-oh.
Well, Ed, every time Casey and I talk about AI on the show, we have to do our AI disclosures,
so we'll do them real quick in our best speed read.
I work at the New York Times, which is suing OpenAI and Microsoft for copyright violations
related to the training of AI systems.
And my boyfriend works at Anthropic.
Ed, do you have a disclosure about AI that you'd like to add?
No.
Okay.
I'm terrified of it.
I'm terrified.
I feel like the singularity is around the corner, and I'm absolutely petrified.
So I like AI, is I guess how I hedge that.
Yeah, that's a good disclosure.
Okay. So one clear theme that has emerged
from the questions that our listeners submitted is that there is
a lot of uncertainty around how and when to use AI at work.
So our next two listeners are both grappling with that subject,
but from quite different perspectives.
Let's start with the perspective of a manager.
This comes to us from Scott Kaye who asks,
should I call out a junior person using AI and be a hypocrite?
His e-mail reads, as a team lead,
I sometimes use AI to help
brainstorm solutions when my developers hit a wall, but every so often I'll see one of my junior developers magically
land on the exact same AI suggested solution, and it's painfully obvious they didn't invent
it themselves.
And then I'm stuck thinking, do I call them out and ask, hey, walk me through your thought
process here, while fully aware that I'm over here secretly tag teaming with AI myself.
So what do you think about this?
Is it hypocritical to call out a junior employee for using AI
when you are using it yourself?
Yes, it is.
But I also think that we're just in this early moment with AI and we haven't
quite figured out how to navigate these things.
It seems like maybe everyone should just be owning their AI use a little more
transparently, but that sort of diminishes the magic of AI.
I guess people are thrilled to sort of present things as their own ideas.
I think this is sort of a strange question for this reason.
Most of the developers I know who are using AI understand that everyone
is using AI and that if you solved a problem using AI, most people would be like, yeah,
like we're all solving problems with AI. So I'm curious if Scott were here right now,
I would say, why is this an issue? Is it that the junior dev is bringing in really bad suggestions
from the AI? Because if that's the case,
then I think AI isn't really the issue, right?
The issue is that your junior colleague
is sort of bringing bad ideas into the workplace.
And that is worth calling out and saying,
hey, you know, this actually wouldn't work and here's why.
Yeah, I agree with you both.
I think we just need to like presume
that unless specified otherwise,
people are going to start using AI in their jobs,
basically whatever their job is.
Yeah.
I was hearing a talk the other day from the economist Tyler Cowen, who teaches at George Mason University. He was talking about how he now requires all of his students to use an AI chatbot for their assignments, and he doesn't consider it plagiarism; he just grades the finished product.
I think that's how like we should
evaluate work at our jobs too.
Is like, is it good or not?
And if it's not good,
then you used AI wrong or you didn't use it in the right ways.
And if it's good and you used AI, like more power to you.
What matters is the finished product.
I just am not sure, especially in an academic context, that the finished product represents how educated the student is on the subject, and that's what the grade should reflect. An educational environment is not about widget-making.
I guess that's right. That's definitely a fringe position in academia,
where we still do care that people are thinking through things on their own.
But in the context of like a software team
at a big company, like what matters is whether
the code compiles or not.
Not whether you use the certain tool.
So do you worry though, like as somebody with kids
that like they're gonna get to school
and they're gonna be using ChatGPT for everything,
and they might not develop the critical thinking skills
you want them to have?
Yeah, absolutely.
I mean, it goes back to what I was saying before
about just the generation of creative things, you know?
And I work in an industry where that's,
the people that are good at that are really rewarded for it
and it's highly competitive.
But when no one is as good as the AI, then not only does the question emerge, like, what's the point of doing it if the AI can just keep doing it better?
The other sort of darker question is like, what's the point of learning to do it or studying
the art forms?
Right.
And I mean, that's a, that's a very dystopian long view, but I'm not worried about that
in the near term for my kids, but I do feel like that's kind of...
I would just be happier if it let me learn more easily.
Like, you know in The Matrix when Keanu's like,
I know Kung Fu,
because they just uploaded it into his brain.
I would also like to know Kung Fu.
I would love that.
I would put that amount of effort into it.
And I will say, like, I'm a musician.
I really struggle to remember song lyrics.
I can't wait till I can put on glasses and just read song lyrics and suddenly have like a thousand songs at my disposal. That sounds really, really fun and cool.
All right, we're gonna get emails from people who say that they've invented this, so we'll pass those along to you.
Okay, great.
All right, now let's consider the perspective of somebody who is just starting out in their career.
This listener asked to be anonymous for fear of backlash from colleagues, but she emailed the following:
What do you all think about people who are AI snobs?
I am a NASA scientist, and surprisingly, I found that for an organization full of scientists, there's a lot of snobbery over being better than using AI.
People basically act like those who use AI
are too stupid to solve problems themselves,
and they are smarter than everyone else
because they are capable of an existence
free of AI assistance.
I've even heard, quote,
even if AI can help you solve a problem faster,
why would you avoid the cognitive stimulus?
That's the whole fun of being alive.
So I guess my question is,
how do you respond to people who keep acting
like they are better than you
simply because they don't use AI?
Wow.
Have you heard a version of this, Ed?
No, I'm not sure I've heard this bubble up.
My hunch is that those people are probably lying.
That they are using AI behind closed doors.
But I haven't encountered anti-AI snobbery.
Oh, I have.
I mean, I think there are people who are reacting this way
basically as a fear response.
Like they worry that if they use AI,
and it makes them smarter
then maybe they weren't that smart to begin with,
or maybe they're gonna lose their job.
Or I think there are a lot of reasons
that people react this way to AI,
including the fact that they just cannot believe
that a computer could do what they do better than them.
And everyone seems to have a version of this
for their own job.
Everyone kind of thinks that AI's gonna take
everyone else's jobs, but me, I'm the special one,
and what I do can't be replicated,
and I see that attitude a lot.
Yeah, I mean, this is one where I wanna be careful
because look, I do believe if you have a job
and you don't want to use AI,
you don't have to use AI if you don't want to.
If you love the cognitive stimulus of everything you're doing,
like that's great, like you probably have a great job.
How should you relate to people who do use AI?
I would say with kindness, you know, particularly if they're using it well.
And I think a lot of folks, and I would include myself among these people, do feel like it's
giving me at least some kind of advantage in some set of things, you know. So I do believe that over
the long run, more and more people are going to come around, because they're going to see people like our emailer here just kind of doing well at their jobs, and they're going to assume that, you know, not just the AI but all the productivity tools that they're using are helping them get a little bit of an advantage. So yeah, I guess that's my answer to that one.
I mean, that's a very kind and empathetic response that both of you have given.
There's also the option of just trolling your coworkers. Like you could go over with an abacus
and take away their calculator and say,
I didn't wanna deprive you of the cognitive stimulus
of using the abacus by taking this cheap shortcut.
Let me take the abacus.
Just use all cognitive power.
That's good.
Yes.
All right, so we have two related final questions that I think get at the heart of how AI is
complicating many people's deepest and most meaningful relationships.
Let's play the first video.
Hey, Kevin, Casey and Ed, Dan here from Chicago.
I'm a devout listener of the show.
So I recently started a new relationship and after sharing an episode of Hard Fork with
my girlfriend, I realized that she hates AI.
She has a visceral negative reaction anytime I mention something AI related, unless it's
Adobe Illustrator or like A1 steak sauce.
And it makes her so uncomfortable that she doesn't even want to entertain a thought about it. Now this has become a
real point of tension because I use AI in my everyday personal
and professional life. And I'm really interested in these
thornier questions around the future of work, society and what
it means to be human in this new era. So how can I navigate a
situation where I can't even bring up something that is so intrinsic to my life with my significant other?
Kevin, I'm especially curious to hear your thoughts given that AI almost broke up your relationship too. Thanks guys.
Great question. Great question. Kevin, what are your thoughts?
So look, I have a lot of sympathy for this. I think a lot of people in my life are not as into AI as I am.
My wife is sort of getting more interested in it.
We talk about it sometimes, but for a long time, it was like, you know, she just,
it wasn't of immediate concern to her.
And so it was sort of my thing.
And that's why it's so important when you're in that situation to start a podcast,
because then you do actually have someone to talk about AI with
without ruining your relationship.
Ed, what do you think?
Every relationship has things that are tough
and that one person is into and another person isn't.
And this feels very surmountable to me.
At a certain point, the culture will probably start to,
you know, AI will start to just infiltrate his girlfriend's
life in ways that makes her more open to it.
But even if not, it just seems like find your buddy that you can have these conversations
with.
Yeah.
I mean, I have to confess this is not a problem I have in my relationship.
If anything, the issue in my relationship is could we talk about something other than that?
But you know, I feel like in so many relationships there is a subject like this, you know.
Like sports comes to mind, you know.
Like maybe you're obsessed with the Golden State Warriors and your partner isn't, and every time you bring it up, you know, you see them rolling their eyes.
And I just think this speaks to the fact that it helps to have people in your life other
than your primary partner that you can just kind of distribute the weight of your interests.
Yeah.
I mean, there are a lot of sort of AI clubs popping up around the country.
I've met people when I've been out doing various events who say, oh, I'm part of the local
AI club.
And this is a thing that I think is starting to emerge
over the last couple years.
And so maybe if there's one in Chicago,
you could sort of find the local AI club
and join that and find a way to have a weekly discussion
about these things.
What do people do at AI club?
Casey, the first rule of AI club is
you don't talk about AI club.
I walked right into that one.
But I will give my earnest piece of advice here,
because I think this is something really important,
is that people don't automatically
buy that AI is going to be meaningful to them
until they see something that they struggle with in their life
that it is useful at solving.
So I think one thing that you could do if you're in a relationship where one partner cares a lot
about AI and the other person hates it,
is just to figure out like, what do they value?
Like what things do they like doing?
What do they struggle with?
What are some places maybe at their job
or in their personal life where they might be able to use AI
and don't force it on them,
but just maybe take one of those problems
and just prompt an AI model with it and see if
it solves something or does something interesting for them,
and then show it to the partner.
Maybe try to meet them where they are and make their interest
organic rather than just
pretending that they're into it for your sake.
Yeah. But I would also say take
no for an answer from your partner here,
and maybe just cool it on the AI talk for a while,
and see if she brings it up at any point, and like, maybe then you'll kind of have an entry point. But until then, I don't know if it's worth it.
All right, let's go to our last question here. This comes from a listener who goes by Elle. And Elle asks, how do I help people get prepared for AI without totally freaking them out?
Hi, Kevin and Casey.
I'm Elle and I live in the deep south.
Here's some context for my question.
I've been getting anxious in conversations where AI comes up.
I'm tech avoidant but interested in
tech forecasting because I want to feel prepared for what's ahead.
But most people I know aren't as tuned in as I am,
so I'm hesitant to
share my realistic slash grim take on the potential of AI. I don't want to plant scary seeds in their
brains. As a result, I'm feeling mentally and existentially isolated. I'm doing better at making
the best of my time in case it's running out faster than I'd hoped, which is how we should live anyway,
so that's positive. On the other hand, I'm feeling distant from my peers and loved ones in a way that is
hard to articulate. What should I say to my loved ones if it comes up? I want them
to be mentally prepared but not super sad. Living in the moment but worried
about frightening others.
Oh.
Now, Ed, I'm curious to hear your take on this because
it sounds like you may have a version of this yourself. You mentioned earlier that you are worried about the singularity and you're sort of terrified that
it might be coming soon. So what should Elle do about trying to live in the moment, take advantage
of the time that we have, but also avoid freaking out her friends and family?
It's hard not to talk about something that is scaring you or that you're obsessing over.
And this may sound glib, but I think she could benefit from a therapist.
Someone that she really can explore these feelings with and help her process them.
And then also give her an outlet where she's not burdening friends and family with
that.
Because the other thing is, you know, none of us really know what's coming.
And there are a lot of AI optimists out there, and maybe they're right.
So to be kind of Chicken Little and gloom and doom, much as it's sometimes not a choice, feels a little premature. And again, finding an outlet, a therapist perhaps, to explore these feelings could be good.
I'm glad you said that. I had the same thought, that Elle really could benefit from therapy. And you know, I guarantee you, Elle, you will not be the only person talking about your fears about an AI future in therapy.
I think it's actually quite common here in the Bay Area for folks to talk to their therapist
about that.
You know, while I totally understand you're hesitant to bring everybody down with your
fears about AI, I do think that part of living in a democracy is bringing up the things that
you're concerned about.
And there's no reason why you couldn't, maybe, you know, break it into small chunks
and talk to your friends about things
that you see out there that worry you.
So if you see that all of a sudden,
the chat bots have gotten super sycophantic
and you're reading stories about people
having like mystical experiences with chat bots
and convincing themselves that they're the Messiah,
and you worry about that technology
like being used by young people, let's say,
there's no reason why you shouldn't talk about that.
In fact, I think you should talk about it.
And I think one reason why we started this show, Kevin,
was we like talking about these issues
and we wanna get other people talking about them.
So I totally hear you, Elle, on not wanting to be a bummer,
but I think everyone's allowed to be a bummer
at least 10 or 20% of the time, don't you, Kevin?
Yeah, although I should say,
I spend a lot of time talking with people
about AI in my life.
And I have found that, like, when I'm in my sort of, like,
gloomiest mood about it, when I'm feeling like my P-Doom
is quite high, I tend to not have good conversations
with people because they sort of come away feeling like
we're all screwed and there's nothing we can do about it.
It sort of strips agency away from them
when I talk about it like that.
Whereas when I'm feeling more optimistic,
I tend to have conversations that are just like
sort of rooted in like wanting to help people
understand things or like make them excited
or give them some hope for the future.
And I think that when you give people a sense
that they are not just like inexorably marching
toward this future that they have no control over,
I think it just goes a lot better.
And so my advice for these like, you know,
sort of AI doom crowd is always to like not frame things
in terms of like what will happen no matter what,
but like to sort of sketch out paths for people
and say, well, if we make good decisions,
it could go in a really good way.
And if we don't, it could go in a really bad way.
But like, it's very important, I think,
to not make people think that they just have no choice
in the matter.
What do you think, Ed?
I don't know, I'm getting anxious talking about it,
honestly, but in what I think is a good, healthy way,
I do really like what you said, Casey,
that like we're allowed to be a bummer sometimes,
and it is important to kind of be honest
about where we're at with things,
with our communities
and the people who care about us and love us.
And also if it's really feeling like an excessive burden
to seek help and assistance with it,
whatever form that takes.
Absolutely.
Well, before we go, Ed, we want to give you the opportunity: do you have a hard question
you'd like to ask Kevin and I?
Anything going on with technology, any dilemma you've confronted recently that we can offer
you our expert assistance with?
Let's see.
I give my mom, who's 85, a lot of tech support, and it can be quite frustrating.
And I would love some advice on sort of moving through that
with grace and also to what extent it's necessary.
There are times where it feels like maybe this
is just not something to get figured out, and is that okay?
Kevin, what are your thoughts about that?
Yeah, so I have shifted my views on this recently.
My mom also requires some tech support from time to time,
and I'm not physically there, like in the same place as her,
so I often end up doing it over the phone.
And so what I have found is useful
is to just fix things for her.
Like do not try to walk her through it.
Just like the next time I am in the same place as her,
just take her phone and fix all the things she doesn't like.
There you go.
And do it very quickly and make it very hard
for her to undo those fixes.
And so basically I think that there is a point
at which people, they just do not want to learn
the entire process
of like changing some settings on something.
So if you can just sort of set it up for them,
they are eternally grateful and you save yourself
and them a lot of grief.
I think that is a perfect answer.
The only thing that I would add to it is
to the extent that you feel like your mom
may have any curiosity about technology,
I do think it's fun to nurture it a little bit.
Like, yes, I think you're almost always going to be better off just like fixing it.
That's just an act of love that you can give your parents is fixing things for them.
But you can also see if in the process of fixing that you might share a little bit about how it works
or what you think is interesting about it. See if that sparks anything for them.
Maybe they'll go off and learn a little something themselves.
I have to say, I talked to my mom this week
and she told me that she had just used Claude
to pick out some songs to put on a playlist
for her 50th wedding anniversary party
that is like coming up in a couple of months.
And I mean, I was beaming with pride
because she had had a good experience.
She did think that it was too sycophantic.
It actually worried her.
She sent me a screenshot.
She was like, this thing is being way too nice to me.
But-
What did it say?
It was like, you can't be having a 50th wedding anniversary.
You're only 40 years old.
That's right.
It was like, yeah, something in that neighborhood, you know.
But what I loved about it was, in the process of me talking to her all the time about AI,
she was sort of like, you know what,
let me investigate and see if this thing
could do anything for me.
And I think that is a really nice gift
we can give our parents too.
Amen.
I do think that it is a great expression of love.
Yes, tech support.
It's the least we can do for our moms
and other parental figures.
After what we put them through?
Exactly.
You better believe it.
All right, Ed Helms, thank you so much for joining us.
You can buy Ed's book now.
It's called Snafu: The Definitive Guide to History's Greatest Screwups.
Ed, this was great.
Thanks so much for having me guys.
I feel like we made it out of this without a single snafu
and that was important to me.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited this week by Matt Collette.
We're fact-checked by Ana Alvarado.
Today's show was engineered by Katie McMurray.
Original music by Marion Lozano and Dan Powell.
Our executive producer is Jen Poyant.
Video production by Sawyer Roquet,
Pat Gunther, and Chris Schott.
You can watch this full episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.