The Interview - From Hard Fork: Ed Helms Answers Your Hard Questions
Episode Date: May 24, 2025. We’re off for Memorial Day weekend, but we're excited to bring you a great episode of the Hard Fork podcast, hosted by journalists Kevin Roose and Casey Newton. In this episode, Kevin and Casey tap actor, comedian and author Ed Helms to answer listeners' questions about their moral quandaries, ethical dilemmas and etiquette questions about technology, and discuss his new book on historical blunders.
Transcript
Hi, it's David.
We're taking a break this week, but we're happy to share a recent episode from our friends
over at the New York Times tech podcast, Hard Fork.
Host Kevin and Casey talk to the actor and comedian Ed Helms about his new book on historical
blunders, about how he uses technology in his own life, and then they all answer some
great listener questions.
Enjoy it, have a great Memorial Day, and we'll be back next weekend when Lulu talks to Miley Cyrus.
Here's the episode.
What's going on with you?
Well, I'll tell you what's going on with me is allergies.
It is allergy season, the sneezing.
It's allergy sneezing.
It's allergy sneezing season, the congestion,
I'm taking multiple pills.
I feel better today than I felt at any point
in the past three days.
And thank goodness, it's a recording day.
Are you a Zyrtec guy or a Claritin guy?
I am, I, uh...
Or a Flonase guy.
So here's the best allergy medicine.
Singulair is the brand name for it.
Active ingredient is something called montelukast.
This thing changed my life, like seriously.
Like I was a runny nose monster for like years.
And then montelukast came into my life
and it really helped me breathe through my nose.
Wow, have you ever tried like dissolving it
in like a tea of any kind?
No, why would I do that?
Cause then you'd be getting the singularity.
How did I walk right into that?
All right, start the show. That was great.
I'm Kevin Roose.
I'm a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, Ed Helms from The Office comes to our office to talk about his new book and
answer your hard questions about tech.
I hope this episode wins a Dundee, Kevin. I have a hangover.
Well, we are very excited about today's episode. We told you this was coming a few weeks ago, and today we are going to be joined by Ed Helms for some conversation and some hard questions submitted by you, our listeners.
I am so excited about this, Kevin.
Me too.
So this came about because of a chance encounter that we had in Austin, Texas, when you were
down there for the iHeart Podcast Awards.
We are both big Ed Helms fans, and we were excited to bump into Ed in the backstage area,
and hear, to our surprise, that he listened to Hard Fork.
Yeah.
And how did that happen?
Was that a mistake on your part, or were you sort of subscribed to the Kara Swisher feed?
How did you start listening to Hard Fork?
That's a great question.
I have no recollection how I started, but I don't know.
You guys are pretty good.
Oh, thanks.
You're good at what you do.
Thanks.
So Ed has a new book out called Snafu.
It's what he calls the definitive guide
to history's greatest screw ups.
Should have a chapter or two in there about Casey.
Yeah, I sent them all of your screw ups,
so they didn't make the cut.
And Ed is here with us today in San Francisco,
and we said, come on down to the studio
and let's hang out and answer some questions.
So without further ado, Ed Helms, welcome to Hard Fork.
I am very excited to be here, gentlemen.
So what are you doing here?
Why are you in San Francisco?
I'm on my book tour.
I'm just, I'm bouncing all around the country.
I went to New York, Philly, DC, Atlanta, Chicago, Boston,
and now I'm here.
Tomorrow I go to LA and I'm hallucinating.
I have no idea what's happening anymore.
That's common in San Francisco.
Did you get the little baggies in Dolores Park?
Yeah, right.
So Ed, your book is all about
history's greatest screw-ups.
Do you have a favorite tech-related snafu
that you could share?
Oh, wow.
Well, do nuclear warheads count as tech?
Yeah, sure.
Okay, great.
This one's amazing.
This was the Cold War, the 1950s.
Cold War did not bring out the best in us, as it turns out.
A plan was hatched to shoot a nuclear warhead at the moon.
And you might be wondering why.
Why would anyone think this is a good idea?
The thinking was that, you know, we're in fierce competition with the Soviets.
Everyone's sort of living in, in nuclear existential dread.
If we can hit the moon with a nuclear warhead, the Soviets will be so
terrified that we'll win the cold war.
And the research showed that it was very likely that it could easily
miss the moon, slingshot around the gravitational field, come right back, and hit us.
Which, frankly, would have served us right.
Yeah, you know what, I think it's a fair punishment.
And so eventually it was not followed through on, thank God, but a lot of time, money, and resources went into it.
Carl Sagan was part of that research team.
It sounds like an idea that Elon Musk would come up with
after like a ketamine bender.
He's like, let's nuke the moon.
There's an episode of The Simpsons
where Mr. Burns wants to blot out the sun.
Oh yeah, of course.
This kind of reminds me of that,
except it really happened.
Yeah.
There's a fun section in your book
about the top five computer viruses of the 80s.
We don't have to go through all of them.
But I'm curious if you have a favorite virus from the viruses
that you've covered.
I like the guy that just wrote a dumb poem that pops up
on everybody's computer.
Mm-hmm.
Like what, I mean.
Was this the message of peace to Mac users,
or was this a different one?
No, this was the Elk Cloner virus.
Yeah.
God, Casey.
Oh, sorry.
I'm still catching up on my 80s viruses.
It's one of those things that you,
that a programmer thinks is sort of cute
and a little cheeky,
but then it just is disastrous.
Right.
Yeah, I feel like the hackers aren't so cheeky anymore.
Like you don't see like the good time prank hacks,
you just see like stealing, you know,
five million dollars in Bitcoin pranks.
Yeah, bring back fun hacks.
Yeah, yeah.
Yeah, the ones that make us chuckle.
So Ed, whenever we get a notable guest like you on the show,
we always ask them about their relationship
with technology.
So what's your relationship with technology like these days?
Well, it's really evolved.
I used to be very much on the ball and I would say very almost like ahead of the curve.
I was like an early adopter of technology as it came out.
I was getting, you know, the latest laptop.
You know, I jumped on the iPhones as soon as they came out.
I've always been a Mac guy, so it was fun.
And I sort of was like charged up by it.
I liked technology.
Somewhere, I don't know, in the last five, 10 years,
it has just blown past me.
And now I feel I'm just that old guy mad
that I can't log into my Citibank account or whatever.
Like, it just is a...
We'll help you, but just share your account name and the routing number if you would.
But I feel like there's a, there is a language to, or a visual language to iconography within apps,
and a sort of, like, assumption that you speak that language, and sometimes I'll open a new app,
or I'll upgrade an app
and like I'm just useless all of a sudden.
It's like going to the grocery store
and they've changed the location of all of the food.
No, they've changed the food.
Like avocados have turned into something
you've never seen before.
Yeah, the broccoli's blue.
Yes.
Do you have a problem app or an app
that you spend way more time on than you would like to?
Probably just like Instagram, like everybody else.
Or I mean, does that make me old?
No, no, no, not at all, not at all.
That's very millennial coded, actually.
Yeah, if you said TikTok, it'd skew you a bit younger,
but you know, look, there's billions of people
using Instagram.
Right.
Have you ever deleted Instagram,
or like set a screen time limit on it,
because you thought, I just, I can't be looking at it these days?
No, I've never, I've never put a limitation on it.
You think that's for quitters?
Yeah.
It's not ruining my life.
I don't feel like controlled by it.
It just keeps me up too late some nights.
Honestly, I really, I've laughed harder watching Instagram
than I have like a lot of movies and TV shows lately.
Like I get, it drives my wife crazy
because like I'm shaking the bed, you know,
and she's trying to fall asleep
and she's just like, turn it off.
She's not concerned about my screen time.
She's just like, stop shaking.
And I'm trying to laugh quietly.
It'd be funny if like the sort of like sleeping
in separate beds thing from the 1950s came back
among couples, but just because of TikTok.
Because of TikTok.
Yeah.
I can see it happening.
Well, like what makes you laugh on Instagram the most?
Is it sort of the like native creators
who are just kind of like doing fun bits and sketches?
Or is it like people have sliced up every movie and TV show known to existence and they
know how to like serve you your very favorite ones?
It's usually prank related stuff.
And I have a pretty keen radar, but Instagram has hacked it.
So like, I hate anything that hurts someone physically.
Like I just can't get into that.
I don't like watching people fall or bonk into things.
That's very unnerving to me.
But I also don't like scare pranks
that are clearly traumatizing to somebody.
And there's tons of that stuff.
Yeah, I feel like you just described 80% of all pranks.
So what's left over?
But what's left over are the ones that are just like the jump scare pranks,
and they'll just be like a compilation of jump scares,
and I'm like crying, laughing.
Or just like weird scare pranks,
where there was a trend for a little while of people
that would lie on the floor with just their head
sticking out of a door on the floor,
and someone would walk in and see the head on the floor.
It just like, apparently it's terrifying
because the reaction is amazing.
Well, if you ever come back to Hard Fork,
I know how I'm greeting you.
Perfect.
Exactly.
Exactly like that.
What about AI?
Are you using any AI stuff in your life?
How do you feel about it?
Yeah, I definitely use ChatGPT.
For what?
Mostly just queries, like just picking its brain
for research or I want to know about something.
It's kind of my new Google, I guess.
Do you feel like you're Googling less
because you're using ChatGPT?
Yes, for sure.
Although, I still Google and I'm definitely,
I think you guys did a whole episode about this,
about how now with Google's AI,
no one goes to the links anymore.
Exactly.
There are all these industries that have built
their entire business models on the Google links.
That's a bummer.
That's just one of the many AI fallouts.
I use it occasionally to help like think of a,
you know, an email that I'm having trouble with,
or especially if it's something kind of loaded and I just...
You have to fire somebody.
Yeah.
Yeah.
The relationship between Hollywood actors, writers,
people who work on films and TV shows has been pretty
strained toward technology, I would say,
over the past few years especially.
First, it was the streaming platforms coming in
and disrupting that part of the business.
Now, actors and writers have been some of
the loudest
critics of AI, talking about how it steals work from people.
What do you think about the film industry's relationship
to tech?
Well, do you mean tech, or just AI in particular?
Both.
I mean, with tech, the relationship is, has long been,
it's like so integral to filmmaking.
Obviously the technology, going back to the dawn of photography.
And all the evolution of camera operation and movement and manipulation, and then digital manipulation of the image
that grew into digital animation. It's just so powerful and exciting on many levels. I do think that ChatGPT introduces a completely different paradigm of tech in entertainment.
And I feel like sometimes when you read notes on a project and you're like,
did this executive just feed this into...
It's thinly
veiled, ChatGPT, like.
You think, you're pretty confident you've seen some AI
notes at this point?
I do, yeah.
I think so.
But I also, I'm also, you know, stunned by, like, ChatGPT's
facility with creative tasks that if you give it, you know,
an assignment, it's totally stunning.
And I think the creative community is terrified
and for good reason.
And I'm also terrified, not even from a business standpoint,
but just from like a human,
like when the need for human creativity disappears,
will we still be creative?
Will we still, you know, what will drive us
to sort of like create beautiful things and amazing things,
and just whether it's a, you know, a movie for the movie business or it's just a piece
of art for art's sake, that does terrify me.
Yeah.
What a, obviously that's a very difficult question, but do you have any thoughts on
how creative people should be approaching this moment?
It sounds like you've landed in a place similar to Kevin,
which is this can be a useful tool,
maybe a creative partner,
but there's like probably some part of your creativity
that you want to reserve for yourself
and not give over to a machine.
Yeah, that's a good characterization
of my relationship with it now,
but it's only going to get better and more powerful.
And I feel like also more intoxicating to use and more sort of like exciting and
affirming. But yeah.
Is there any tech product or service out there right now that you think will sort
of become a historical snafu that you might write about in a future volume of a
book? Like what out there today in the field of technology do you look at and think,
oh, that can't possibly be good?
Wow. So, I actually have kind of a macro take on this,
which is that we're at a point with
device usage where it's clearly bad for us.
Like it's clearly harming us.
And so, it feels like cigarettes in the 90s, right?
Where it was everybody knows, but like, come on, we're still doing it, right?
And all the corporate interest in it is still just sort of dumping money and more advancement into it.
And in the 90s, they were releasing better cigarettes and healthier cigarettes. But everyone still knew it was bad.
And it feels like we will, I hope at some point,
recognize that, or it will come to that inflection point
in the same way that cigarette smoking did,
where it's just like, no, where culturally, as a society,
we're like, this is actually terrible. The only difference, or I should say,
there's a lot of differences between phones and cigarettes,
but like one of the most scary differences is that,
it's like if someone told you the only way
to do your banking or to do your shopping
is to smoke your cigarettes.
And so you have, it becomes so integrated into your life that
you have no alternative to this thing that is also toxic to you. So I don't know, I've,
I use my phone a ton and I still feel like I use it too much. It's a conundrum.
Here's how I say you solve it. You can actually sort of borrow a lot of ideas from
the cigarette example, and you make it so that you can only use Instagram outside, and you have to be 15 feet away from the door.
Also, you stink when you come back, and it gives you terrible breath.
All right, so Ed, for the last few weeks, we have been collecting questions from our listeners about their moral and ethical dilemmas involving technology.
And we're excited to get your take on some of these too.
Fantastic.
Let's do it.
Yeah, well, I would just mention this really has become one of our favorite segments to do,
not only because our listeners are great and they ask such interesting questions of us,
but it really lets us get a peek into people's relationship with technology,
which I think is at the heart of what we love to talk about on the show.
Yeah, so every time we've done this, we've gotten tons and tons of questions about AI
and the ethics around AI use. That will be a trend that continues today.
We got a ton of questions,
the most popular topic by far. But for this installment, I thought we could start off
with some of the non-AI questions from listeners and then work our way to the AI pile.
All right. Ready to go? Yep. So first up, we have a video from a listener asking that
age old question, what do I do about my mom? Hi, Kevin and Casey.
My name's Charlene.
I love your podcast.
Thank you so much for doing it.
I listen every week.
I am listening up in Canada.
My question for Ed is a little internet privacy related.
How do I break it very gently to my Gen X mom
that I do not want her posting any pictures
of her future grandkids onto her Facebook wall.
Whoa, interesting.
Oh yeah.
What do you think, Ed?
That's a good one.
Like I'm a Gen X dad and I'm like,
no way am I putting pictures of my kids.
You don't put any pictures of your kids online.
No, nowhere.
And I have had that conversation with family members,
like, please, you know, pull back on this or that,
because sometimes you're just off at, like,
your cousin's house and they're taking pictures
and it's all great, it's like a family barbecue,
but you just have to be a little careful.
I mean, there's just no substitute
for direct conversation about this.
Although, you could consult ChatGPT
on how to approach the subject
in a tactful and tender way.
That would work.
The thought that comes to mind for me
is like maybe the way to have the conversation
is to present an alternative, right?
And say like, hey, the baby's coming next month. We're all very excited.
We would like for you to not post these photos sort of in public forums like Facebook,
but we would love you to post them to, like, our shared photo library.
Yeah, only our family members can see it, and we're gonna get it started for you,
and I'm gonna add it to your phone manually, and now you're gonna get an endless stream of photos, and you can share all of them this way.
That might, I think, help you.
Yeah, I know a lot of families with young kids
who do this either on the Photos app,
you can sort of create this shared album
and post photos there and people can comment on them.
It's sort of like, feels like social media,
but without the publicness of it.
Or you can do like a WhatsApp thread
or a group text or something.
But yeah, I think that's a really key thing is like,
you do not wanna deprive grandparents
of photos of their grandkids,
or else they will just start taking them
and posting them on their own.
All right, well, I think we solved that one.
Yeah, so next up, we have a voice memo from a listener
who wonders what's going on
with her friend's online calendars.
And this listener asked us not to use her name
in case the friend listens to the show.
I've recently started doing some admin work
for a really good friend.
Most things are going really well,
but when we need to meet virtually or do a call,
I contact my friend to find a time,
tell her I'm flexible,
as my calendar is a lot less slammed than hers.
Like, just send me a calendar invite at a time
that works for you and I'll make it work.
The first time I did this, I waited, and there was no calendar invite a few hours later.
So I was like, do you want me to send you an invite? And she said, no, I put it on my calendar.
But without adding me.
She wants me to make my own calendar invite on my calendar.
So like just two calendars off in space, not
collaborating and with no visibility between them. I think this is insane and very weird
as well as unproductive, self-defeating and frankly inconsiderate. She definitely doesn't
think it's weird at all. I've worked in offices for many, many years and I've never encountered
this.
Am I being hypercritical about this or is it as weird as I think it is?
What do you think Ed?
Well, I want to make sure I understand correctly.
So she's really upset that her friend just feels like she wants to have separate calendars.
And is not sort of adding her on the calendar invite.
It'd be like if I were like wanting to hang out with Kevin
and I added onto my calendar, hang out with Kevin at noon,
and then I didn't add Kevin, so now he has to go make
his own calendar entry, being like hanging out with Casey.
Right, right, right.
I mean, what occurs to me is like, I'm not sure if this friend of hers understands
that you can like do a shared calendar invite without giving the other person
like access to your entire calendar.
Yeah, it feels a little bit generational because that's my take.
I just manage my own calendar.
I find myself accepting calendar invites here and there,
but I rarely send them out.
That's a flex.
Yeah.
Well, I don't know. I don't think that way.
I'm just like, it's an old school way of doing it.
Everyone manages their own calendar.
You agree on a date and time in an email
or a text or whatever,
and then you add it to your own calendar.
But yeah, I don't know.
Now here's a question for you both.
Do you let other people see your calendar?
Because you can give people access
to be able to see all the appointments on your calendar,
and I'm curious if either of you do that.
Ed?
I have someone that I work closely with
who has full access to my calendar.
But of course I have multiple calendars.
My wife has access to another calendar.
And I don't know if anybody other than me
has access to all of them.
Yeah, I'm basically the same.
I have an assistant who can see my calendar
and I have a boyfriend who can see my calendar.
But that's it.
How about you?
My wife can see my calendar,
but because there's some like security settings around it,
it only says like busy or not busy.
Yeah.
It doesn't say like what I'm doing.
Interesting.
So that's an interesting way to do it.
But also what is the working relationship
of these two people?
Like if they're in an organization
where this standard is set,
then yeah, it's a reasonable expectation to have.
But I think if you're dealing with two people in different organizations that are collaborating,
then you're just allowed to assert your preferences.
Right.
So for our anonymous listener, I think our advice is make sure that your friend who you're
doing some admin work for knows how calendars work and that you can create shared events
without sort of turning over your entire calendar.
And if it's not an issue that results from a lack of understanding, then maybe have a conversation about like how much harder it is to
operate in this close collaboration without being able to have the same calendar invites.
I have slightly different advice, which is if I were her,
I would just take the initiative to just send all the calendar invites. Like if it is a technical issue,
I think it's gonna be easier for her to be like,
let's agree on a time, great.
I'm sending you the calendar invitation, now the problem is solved.
And if she's not open to that, this friendship is over.
I'm gonna add another layer to that.
Do it.
Which is that first just assess whatever cultural difference,
your calendar culture you guys are dealing with.
Because this person may really not like to use calendar invites for any number of reasons and
Whatever you're used to is practical. Yeah, you got to find out what kind of freak your friend is
Alright, let's take the next one this I think Kevin is a question to just about everyone with a cell phone will resonate with
Hey Kevin Casey, love the pod. My name's Calvin, I'm from the East Bay,
and here's my hard question.
I got a lot of scam emails, texts, and phone calls,
and I love stringing them on for as long as possible
or just making them mad.
My wife hates that I do this, but it brings me great joy.
Here's my dilemma.
Recent reports have come out revealing that many of these scammers are being trafficked,
extorted and forced to do this work against their will.
It has become a form of modern slavery.
I know this is true for some of the people I probably interact with.
So is it wrong for me to mess with these scammers?
Should I just ignore them or can I keep having fun even if it feels morally complex?
Thanks.
Also, last thing, I know an heiress that recently came into billions of dollars and wants to
support a tech podcast for people.
Send me your socials and we can make it happen.
That's intriguing.
We got to follow up with him.
Well, what do you think, Ed?
I imagine you might get these texts.
How do you react when you get them?
I'm just cold ignore.
It's so much easier that way.
I can see getting a little bit of like evil glee
out of sort of taunting somebody that's out to scam you.
But you're also opening yourself up to more risk.
That way you'd really never know who you're talking to, what
they know about you, and like just don't open that can of worms at all. You know, I
have to say I think Ed's approach is the right one here. Like what Calvin told us
is true. The people who are doing this, like for the most part, this was not
their chosen lot in life. Like something has gone horribly wrong for them, that
they have to participate in this.
And you're just like taking somebody who has a very hard life
and making it even harder.
So while I get that it is so satisfying
to text these people back,
and I will admit I have texted these people back
and cursed them and insulted them,
I have gone to a place of just not doing it.
And I would also just appeal to Calvin's self-interest
because my understanding is,
even if you are just responding to say,
like essentially screw you,
you do get identified as a warm phone number
and that information is shared with other scammers.
So by responding in this way, Calvin,
you may be making your own problem worse.
Now, how long do you guys think
that we will actually have human scammers calling people?
Like how long before all of these
are just hyper-convincing AIs speed-dialing people?
I mean, if you believe the research about persuasion
that we've been talking about on this show,
you know, might not be all that much longer.
But I have to say, Kev, that might be, like,
a human rights victory,
because then maybe they would take these people
that are, like, locked up in these scam centers
and actually let them go home.
Yeah, that's one job I would love to see
automated very quickly.
Also, like, it's not your best self to do this, right?
It's not kind of reinforcing the best side of you
to be making someone else's life harder.
Even if they're not in some sort of like terrible situation,
they're just a scammer.
Like, strive for better in yourself, too.
Do better.
As Melania Trump once said, be best.
Yeah.
Yeah.
All right.
Next up, here is an email that came to us
from a listener named Louis, or Louie, I suppose.
He wonders essentially, is it ever right to commit a crime
in order to prevent another crime?
Oh, here we go.
So I'll read an excerpt from the email.
He says, a couple of years ago, circa 2016,
I had an idea I pursued, abandoned,
and wondered about ever since.
I started guessing private keys of a well-known blockchain
using some heuristics that I thought some less aware people
might try to use to generate their addresses.
It worked quite well, and I found one address
with several thousand dollars worth of cryptocurrency on it.
Most of the accounts I found had transactions on them
that had been completely emptied,
and when checking the accounts that had emptied them,
they were marked as fraudulent
by several blockchain explorers.
So to recap, I had found a stack of gold,
and it was only a matter of time
before a bad guy swooped in and stole it.
After some deliberation with my parents and girlfriend,
now wife, I decided that the safest course of action
was to do nothing.
What do you think should have been the best
and legally sound reaction?
Inaction, taking the tokens as my own,
donating them to charity,
or maybe something else I have not thought about?
What do you guys make of this?
Yeah, this is not complicated to me.
Just stay out of the mess.
It's like jumping into a meeting with a bunch of mobsters
and being like, this is a moral quandary.
Should I join in with them or should I just leave?
This is actually the plot of No Country for Old Men.
Yeah.
I have to say, we should disqualify this
for not being a hard question.
This is just like, you know, do not commit a crime
and hopefully that will save you.
Now I will say as far as some advice
on what they could do that would be productive with this,
some, you know, blockchain projects,
a lot of tech projects have these kind of bug bounty programs
where if you discover a security flaw in their system,
you can contact them and say,
hey, I'm a security researcher and I found this bug
and sometimes they will pay you money for doing that.
So if this blockchain project, whatever it was,
has a bug bounty program, this person could actually
turn that in and make the whole system safer for everyone and maybe get a reward out of it.
That's a great idea, Kevin.
Do that.
So one last one before we get to our AI questions.
This is from a listener who wonders, when did headphones become optional?
Hi, my name is Sarah and I'm calling from Stratford, Ontario, Canada.
I have noticed that there's an epidemic these days of people watching
videos out loud on their phone in public, in restaurants, movie theaters, on trains, and I've
tried various different ways to politely ask people if they have headphones or if they could
turn off the sound on their games or their videos on their phones to varying degrees of success.
Often people are quite rude to me and just flat out say no.
So I was wondering how you would handle this.
Thank you.
I love this question.
Ed Helms, how would you handle this?
It sounds like she's handling it exactly right.
You say something and I rarely see this.
I've seen it.
I did see it on an airplane recently and I was stunned because on an airplane you have to listen so loud.
Right.
And this person had their phone just cranked all the way up watching like a basketball game or something super loud.
And yeah, I think I think.
Did you say something?
I didn't. I was I was a few.
He can't say. Do you know what social media is going to do to him if he says something?
I was a few rows back. For me, it wasn't that bad.
But I could just tell it was crazy.
But, yeah, I don't get it.
I feel like certain...
There's just like some people don't understand how awful that is.
It's so true.
The reason I was so excited to get this question is because I see this all the time now.
I just spent two weeks in New York.
I would say every other subway car I was on, somebody was doing exactly this thing. That behavior seems insane to me.
I'm an extremely conflict-avoidant person.
I would never once think to do what Sarah did and actually ask these people to maybe turn it down,
although that does seem like the right thing to do.
So I don't know what I could do. The only thing I'm left thinking you could do is try to guess this person's private keys
and steal their crypto.
What would you do?
Or put your own headphones on.
What I would do, and I don't want to counter this that much because I don't spend a lot
of time in places where this is going on.
Kevin only takes private cars everywhere.
No, but I think my strategy, if the sort of earnest request to put on headphones or something,
or turn down the volume failed, I think I would just start asking them questions. I'd be like,
hey, what game you playing there? What are the rules of Temple Run? What's your high score?
You just troll them.
Yeah. Like I see you're watching a basketball game, like fill me in. Who are the hot draft picks this
season? And eventually maybe they just like catch on and get so annoyed that they turn it off
and do something else.
I actually, I would love,
I don't think I could ever do that,
but I would love to observe you doing that.
It is crazy, like, cause you know,
headphones are so cheap at this point.
I mean, you can get earbuds for $15, I bet.
And yet it seems like the cheaper they get,
the more people are just saying,
you know what, the heck with it.
Everybody can listen to the basketball game with me.
The weird thing too is when people are actually having
conversations on speaker phones.
Yes!
Like just hold it up to you.
Yes!
Like they'll hold the phone in front of their face
like a little piece of pizza
and talk into it with the speaker on.
Yes.
Just put it up to your ear.
I was, again, being in New York,
I saw multiple people just walking down the street
FaceTiming with people.
And I'm like, is it, is it cause you look so cool because you're walking through the streets of New York,
and you just sort of really want people to have the visual? Like, I do not understand.
You're so likely to just, you know, fall into an open manhole cover.
That is always happening. Yeah.
All right, when we come back, we'll tackle your hard questions about AI, including a worker who fears backlash
for using AI at work and a boyfriend who worries
that AI could doom his relationship.
Oh. Well, Ed, every time Casey and I talk about AI in the show, we have to do our AI disclosures,
so we'll do them real quick in our best speed read.
I work at the New York Times, which is suing OpenAI and Microsoft for
copyright violations related to the training of AI systems.
My boyfriend works at Anthropic.
Ed, do you have a disclosure about AI that you'd like to add?
No.
Okay.
I'm terrified of it.
I feel like the singularity is around the corner,
and I'm absolutely petrified.
So I like AI, is I guess how I hedge that.
Yeah, that's a good disclosure.
Okay, so one clear theme that has emerged from the questions
that our listeners submitted is that there is a lot of
uncertainty around how and when to use AI at work. So our next
two listeners are both grappling with that subject, but from
quite different perspectives. Let's start with the
perspective of a manager.
This comes to us from Scott Kaye who asks,
should I call out a junior person using AI and be a hypocrite?
His email reads, as a team lead,
I sometimes use AI to help
brainstorm solutions when my developers hit a wall,
but every so often I'll see one of my junior developers
magically land on the exact same AI-suggested solution, and it's painfully obvious they didn't invent it
themselves. And then I'm stuck thinking, do I call them out and ask, hey, walk me
through your thought process here? While fully aware that I'm over here
secretly tag-teaming with AI myself. So what do you think about this? Is it
hypocritical to call out a junior employee for using AI when you are using it yourself?
Yes, it is. But I also think that we're just in this early moment with AI,
and we haven't quite figured out how to navigate these things.
It seems like maybe everyone should just be owning their AI use a little more transparently,
but that sort of diminishes the magic of AI.
I guess people are thrilled to sort of present things
as their own ideas.
I think this is sort of a strange question for this reason.
Most of the developers I know who are using AI
understand that everyone is using AI
and that if you solved a problem using AI,
most people would be like,
yeah, like we're all solving problems with AI.
So I'm curious if Scott were here right now,
I would say, why is this an issue?
Is it that the junior dev is suggesting
really bad suggestions from the AI?
Because if that's the case,
then I think AI isn't really the issue, right?
The issue is that your junior colleague
is sort of bringing bad ideas into the workplace.
And that is worth calling out and saying, hey, you know,
this actually wouldn't work, and here's why.
Yeah, I agree with you both.
I think we just need to presume that unless specified
otherwise, people are going to start using AI in their jobs,
basically whatever their job is.
I was hearing a talk from the economist Tyler Cowen the other day, who teaches
at George Mason University. He was talking about how he now requires all of his students to use
an AI chatbot for their assignments, and he doesn't consider it plagiarism, he just grades
the finished product. And I think that's how like we should evaluate work at our jobs too is like,
is it good or not? And if it's not good, then you used AI wrong
or you didn't use it in the right ways.
And if it's good and you used AI, like more power to you.
What matters is the finished product.
I just am not sure, especially in an academic context,
if the finished product represents how educated
the student is on the subject and that that's what
the grade should reflect.
Like, an educational environment is not widget-making.
I guess that's right.
I mean, that's definitely a fringe position in academia,
where we still do care that people are thinking
through things on their own.
But in the context of like a software team at a big company,
like what matters is whether the code compiles or not.
Yes, for sure, for sure.
Not whether you use a certain tool.
But do you worry though, like as somebody with kids
that they're going to get to school
and they're going to be using ChatGPT for everything
and they might not develop the critical thinking skills
you want them to have?
Yeah, absolutely.
I mean, it goes back to what I was saying before
about just the generation of creative things.
And I work in an industry where that's,
the people that are good at that are really rewarded
for it and it's highly competitive.
But when no one is as good as the AI,
then not only is like, does the question emerge like,
what's the point of doing it if the AI can just
keep doing it better?
The other sort of darker question is like,
what's the point of learning to do it
or studying the art forms?
And I mean, that's a very dystopian long view,
but I'm not worried about that in the near term for my kids,
but I do feel like that's kind of...
I would just be happier if it let me learn more easily.
Like, you know, in The Matrix when Keanu's like,
I know Kung Fu,
because they just uploaded it into his brain.
I would also like to know Kung Fu.
I would love that.
I put that amount of effort into it.
And I will say, I'm a musician,
I really struggle to remember song lyrics.
I can't wait till I can put on glasses
and just read song lyrics,
and suddenly have like a thousand songs at my disposal.
That sounds really really fun and cool.
Alright, we're gonna get emails from people who say that they've invented this, so we'll pass those along to you.
Okay, great.
Alright, now let's consider the perspective of somebody who is just starting out in their career.
This listener asked to be anonymous for fear of backlash from colleagues, but she emailed the following,
What do you all think about people who are AI snobs?
I am a NASA scientist and surprisingly, I found that for an organization full of scientists,
there is a lot of snobbery over being better than using AI.
People basically act like those who use AI are too stupid to solve problems themselves
and they are smarter than everyone else because they are capable of an existence free of AI
assistance. I've even heard, quote, even if AI can help you solve a problem faster, why would you avoid the cognitive stimulus?
That's the whole fun of being alive.
So I guess my question is, how do you respond to people who keep acting like they are better than you simply because they don't use AI?
Wow, have you heard a version of this, Ed?
No, I'm not sure I've heard this in this bubble. My hunch is that those people are probably lying.
That they are using AI behind closed doors.
But I haven't encountered anti-AI snobbery.
Oh, I have.
I mean, I think there are people who are reacting this way, basically as a fear response.
Like they worry that if they use AI,
and it makes them smarter,
then maybe they weren't that smart to begin with,
or maybe they're going to lose their job,
or I think there are a lot of reasons that people react this way to AI,
including the fact that they just cannot believe that
a computer could do what they do better than them.
And everyone seems to have a version of this
for their own job.
Like everyone kind of thinks that AI
is gonna take everyone else's jobs,
but me, I'm the special one,
and what I do can't be replicated.
And I see that attitude a lot.
Yeah, I mean, this is one where I wanna be careful
because look, I do believe,
if you have a job and you don't want to use AI,
you don't have to use AI.
If you love the cognitive stimulus of everything
you're doing, like, that's great. Like, you probably have a great
job. How should you relate to people who do use AI? I would
say with kindness, you know, particularly if they're using it
well. And I think a lot of folks and I would include myself among
these people, do feel
like it's giving me at least some kind of advantage in some set of things, you know?
So I do believe that over the long run, more and more people are going to come around because
they're going to see people like our emailer here just kind of doing well at their jobs.
And they're going to assume that, you know, not just the AI, but all the productivity
tools that they're using are helping them get
a little bit of an advantage.
And so, yeah, I guess that's my answer to that one.
I mean, that's a very kind and empathetic response
that both of you have given.
There's also the option of just trolling your coworkers.
Like, you could go over with an abacus
and take away their calculator and say,
I didn't want to deprive you of the cognitive stimulus
of using the abacus by taking this cheap shortcut.
Let me take the abacus.
Just use all cognitive power.
That's good.
Yes.
All right, so we have two related final questions
that I think get at the heart of how AI is complicating many
people's deepest and most meaningful relationships. Let's play the first video.
Hey Kevin, Casey, and Ed. Dan here from Chicago. I'm a devout listener of the
show. So I recently started a new relationship and after sharing an episode
of Hard Fork with my girlfriend, I realized that she hates AI. She has a visceral negative
reaction anytime I mention something AI-related, unless it's Adobe Illustrator or, like, A.1.
steak sauce. And it makes her so uncomfortable that she doesn't even want to entertain a
thought about it. Now this has become a real point of tension because I use AI in my everyday
personal and professional life and
I'm really interested in these thornier questions around the future of work, society, and what it
means to be human in this new era. So how can I navigate a situation where I can't even bring up
something that is so intrinsic to my life with my significant other? Kevin, I'm especially curious to hear your thoughts
given that AI almost broke up your relationship too.
Thanks guys.
Great question, great question.
Kevin, what are your thoughts?
So look, I have a lot of sympathy for this.
I think a lot of people in my life are not as into AI
as I am.
My wife is sort of getting more interested in it.
We talk about it sometimes, but for a long time it was like, you know, she just, it wasn't
of immediate concern to her.
And so it was sort of my thing.
And that's why it's so important when you're in that situation to start a podcast, because
then you do actually have someone to talk about AI with without ruining your relationship.
Ed, what do you think?
Every relationship has things that are tough and that one person is into and another person
isn't.
And this feels very surmountable to me.
At a certain point, the culture will probably start to, you know, AI will start to just
infiltrate his girlfriend's life in ways that makes her more open to it.
But even if not, it just seems like find your buddy
that you can have these conversations with.
Yeah, I mean, I have to confess this is not a problem
I have in my relationship.
If anything, the issue in my relationship is
could we talk about something other than AI?
Yeah.
But you know, I feel like in so many relationships,
there is a subject like this, you know,
like sports comes to mind,
maybe you're obsessed with the Golden State Warriors
and your partner isn't,
and every time you bring it up,
you see them rolling their eyes.
And I just think this speaks to the fact that
it helps to have people in your life
other than your primary partner
that you can just kind of distribute
the weight of your interests in. Yeah, I mean, there are a lot of sort of AI clubs popping up around the country. I've met people
when I've been out doing various events who say, you know, oh, I'm in I'm part of the local AI club.
And this is a thing that I think is starting to emerge over the last couple years. And so maybe if
there's one in Chicago, you could sort of find the local AI club and join that
and find a way to have a weekly discussion
about these things.
What do people do at AI club?
Casey, the first rule of AI club is
you don't talk about AI club.
I walked right into that one.
But I will give my earnest piece of advice here
because I think this is something really important
is that people don't automatically buy that
AI is going to be meaningful to them until they see something that they struggle with
in their life that it is useful at solving.
So I think one thing that you could do if you're in a relationship where one partner
cares a lot about AI and the other person hates it is just to figure out like, what
do they value?
What things do they like doing?
What do they struggle with?
What are some places maybe at their job or in
their personal life where they might be able to use AI?
Don't force it on them,
but just maybe take one of
those problems and just prompt an AI model with it and see if
it solves something or does something interesting for them,
and then show it to the partner.
Maybe try to meet them where they are and make their interest organic rather than just like
pretending that they're into it for your sake. Yeah, but I would also say like take no
for an answer from your partner here and like maybe just cool it on the AI talk for a while
and like see if she brings it up at any point and like maybe then you'll kind of have an entry point
but until then I don't know if it's worth it.
All right, let's go to our last question here.
This comes from a listener who goes by Elle.
And let's take the Elle.
And Elle asks, how do I help people get prepared for AI
without totally freaking them out?
Hi, Kevin and Casey.
I'm Elle, and I live in the deep south.
Here's some context for my question.
I've been getting anxious in conversations where AI comes up.
I'm tech avoidant but interested in tech forecasting because I want to feel prepared for what's ahead.
But most people I know aren't as tuned in as I am,
so I'm hesitant to share my realistic slash grim take on the potential of AI.
I don't want to plant scary seeds in their brains.
As a result, I'm feeling mentally
and existentially isolated.
I'm doing better at making the best of my time
in case it's running out faster than I'd hoped,
which is how we should live anyway, so that's positive.
On the other hand, I'm feeling distant from my peers
and loved ones in a way that is hard to articulate.
What should I say to my loved ones if it comes up?
I want them to be mentally prepared, but not super sad.
Living in the moment, but worried about frightening others.
L.
Now, Ed, I'm curious to hear your take on this
because it sounds like you may have a version
of this yourself.
You mentioned earlier that you are worried
about the singularity and you're sort of terrified
that it might be coming soon.
So what should L do about trying to live in the moment,
take advantage of the time that we have,
but also avoid freaking out her friends and family?
It's hard not to talk about something that is scaring you
or that you're obsessing over.
And this may sound glib,
but I think she could
benefit from a therapist.
Someone that she really can explore these feelings
with and help her process them and then also give her
an outlet where she's not burdening friends and
family with that. Because the other thing is, you
know, none of us really know what's coming. And so to be,
and there are a lot of AI optimists out there, and maybe they're right. So to be kind of chicken
little and gloom and doom, as much as it's not a choice sometimes, feels a little premature.
And again, finding an outlet, a therapist perhaps to explore these
feelings could be good. I'm glad you said that. I had the same thought, this really
could benefit from therapy. And, you know, I guarantee you, you will not be the
only person talking about your fears about an AI future in therapy. I think
it's actually quite common here in the Bay Area for folks to talk to their
therapist about that. You know, while I totally understand you're hesitant to bring everybody down with your fears about AI,
I do think that part of living in a democracy is bringing up the things that you're concerned about, and
there's no reason why you couldn't take, maybe, you know,
break it into small chunks and talk to your friends about things that you see out there that worry you.
So if you see that all of a sudden the chatbots have gotten super sycophantic, and you're reading
stories about people having like mystical experiences with chat bots and convincing
themselves that they're the Messiah and you worry about that technology like being used by young
people, let's say, there's no reason why you shouldn't talk about that. In fact, I think you
should talk about it. And I think one reason why we started this show, Kevin, was we like talking
about these issues and we want to get other people talking about them.
So I totally hear you, El, on not wanting to be a bummer,
but I think everyone's allowed to be a bummer
at least 10 or 20% of the time, don't you, Kevin?
Yeah, although I should say,
I spend a lot of time talking with people
about AI in my life.
And I have found that when I'm in my sort of like
gloomiest mood about it, when I'm feeling like my P-Doom is quite high,
I tend to not have good conversations with people because they sort of come away feeling like
we're all screwed and there's nothing we can do about it. It sort of strips agency away from them
when I talk about it like that.
Whereas when I'm feeling more optimistic,
I tend to have conversations that are just like,
sort of rooted in like wanting to help people
understand things or like make them excited
or give them some hope for the future.
And I think that when you give people a sense
that they are not just like inexorably marching
toward this future that they have no control over.
I think it just goes a lot better.
And so my advice for these like, you know,
sort of AI doom crowd is always to like,
not frame things in terms of like,
what will happen no matter what,
but like to sort of sketch out paths for people and say,
well, if we make good decisions,
it could go in a really good way.
And if we don't, it could go in a really bad way.
But like, it's very important, I think, to not make
people think that they just have no choice in the matter.
What do you think, Ed?
I don't know, I'm getting anxious talking about it,
honestly, but in a, what I think is a good, healthy way.
I do really like what you said, Casey, that like,
we're allowed to be a bummer sometimes, and it is
important to kind of be honest about where we're at
with things in our, with our communities and the people who care about us and love us.
And also, if it's really feeling like an excessive burden to seek help and assistance with it,
whatever form that takes.
Absolutely. Well, before we go, Ed, we want to give you the opportunity, do you have a
hard question you'd like to ask Kevin and I?
Anything going on with technology, any dilemma you've confronted recently that we can offer you our expert assistance with?
Let's see.
I give my mom, who's 85, a lot of tech support.
And it can be quite frustrating.
And I would love some advice on sort of moving through
that with grace and also to what extent it's necessary.
There are times where it feels like maybe this is just not
something to get figured out and is that okay?
Kevin, what are your thoughts about that?
Yeah, so I have shifted my views on this recently.
My mom also requires some tech support from time to time,
and I'm not physically there, like in the same place as her.
So I often end up doing it over the phone.
And so what I have found is useful
is to just fix things for her.
Like do not try to walk her through it.
Just like the next time I am in the same place as her,
just take her phone
and fix all the things she doesn't like.
And do it very quickly and make it very hard
for her to undo those fixes.
And so basically I think that there is a point
at which people, they just do not want to learn
the entire process of like changing some settings
on something.
So if you can just sort of set it up for them,
they are eternally
grateful and you save yourself and them a lot of grief.
I think that is a perfect answer. The only thing that I would add to it is to the extent
that you feel like your mom may have any curiosity about technology, I do think it's fun to nurture
it a little bit. Like, yes, I think you're almost always going to be better off just
like fixing it. That's just an act of love that you can give your parents is fixing
things for them. But you can also see if in the process of fixing that you might share
a little bit about how it works or what you think is interesting about it. See if that
sparks anything for them. Maybe they'll go off and learn a little something themselves.
I have to say I talked to my mom this week and she told me that she had just used Claude to pick out some songs to put on a playlist for her 50th
wedding anniversary party that is like coming up in a couple of months. And I mean, I was
beaming with pride because she had had a good experience. She did think that it was too
sycophantic. It actually worried her. She sent me a screenshot. She was like, this thing
is being way too nice to me. But-
What did it say? It was like, you can't be having a 50th wedding anniversary. You're only 40 years old.
It was, yeah, something in that neighborhood, you know.
But what I loved about it was, in the process of me talking to her all the time about AI,
she was sort of like, you know what?
Let me, like, investigate and see if this thing could do anything for me.
And I think that is a really nice gift
we can give our parents too.
Amen.
I do think that it is a great expression of love.
Yes, tech support.
It's the least we can do for our moms.
After what we put them through.
Parental figures, exactly.
You better believe it.
All right, Ed Helms, thank you so much for joining us.
You can buy Ed's book now.
It's called, Snafu, The Definitive Guide
to History's Greatest Screw-Ups.
Ed, this was great.
Thanks so much for having me, guys.
I feel like we made it out of this without a single snafu,
and that was important to me.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited this week by Matt Collette.
We're fact-checked by Aina Alvarado.
Today's show was engineered by Katie McMurray.
Original music by Marion Lozano and Dan Powell.
Our executive producer is Jen Poyant.
Video production by Sawyer Roquet,
Pat Gunther and Chris Schott.
You can watch this full episode
on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
As always, you can email us at hardfork@nytimes.com.