Fresh Air - Best Of: Education & A.I. / Having A Child In The Digital Age
Episode Date: May 24, 2025
Professors and educators are now turning to A.I. to prepare lessons, teach, and even grade students' work. We talk with New York Times tech reporter Kashmir Hill about the ongoing debate in higher ed about A.I. TV critic David Bianculli reviews One to One, a new documentary about John Lennon and Yoko Ono. Also, writer Amanda Hess talks about motherhood in the digital age, navigating a world where apps, surveillance tech, and a relentless stream of algorithmic advice have become part of pregnancy and parenting. Her book is Second Life.
Transcript
These days, there's a lot of news. It can be hard to keep up with what it means for you,
your family, and your community. Consider This from NPR is a podcast that helps you make sense
of the news. Six days a week, we bring you a deep dive on a story and provide the context,
backstory, and analysis you need to understand our rapidly changing world.
Listen to the Consider This podcast from NPR.
From WHYY in Philadelphia, this is Fresh Air Weekend.
I'm Tonya Mosley.
Today, New York Times tech reporter Kashmir Hill returns to the show to talk about a surprising
twist in who is using generative AI.
Colleges and universities have been trying to fight against students using tools like
ChatGPT to do class assignments and communicate. Well, Hill's latest article reveals how
professors and educators are now turning to AI to prepare lessons, teach, and even
grade students' work. Also, New York Times writer Amanda Hess talks about
motherhood in the digital age, navigating a world where apps, surveillance tech,
and a relentless stream of algorithmic advice
have become part of pregnancy and parenting.
Plus, David Bianculli reviews One to One,
the new documentary about John Lennon and Yoko Ono.
That's coming up on Fresh Air Weekend.
I'm Tonya Mosley, co-host of Fresh Air.
At a time of sound bites and short attention spans, our show is all about the deep dive.
We do long-form interviews with people behind the best in film, books, TV, music, and journalism.
Hear our guests open up about their process and their lives in ways you've never heard
before.
Listen to the Fresh Air podcast from NPR and WHYY.
On The Indicator from Planet Money podcast, we're here to help you make sense of the economic news, from Trump's tariffs...
It's called, in game theory, a trigger strategy, or sometimes called grim trigger, which sort of has a cowboy sheen to it.
...to what exactly a sovereign wealth fund is. For insight every weekday, listen to NPR's The Indicator from Planet Money.
Keeping up with the news can feel like a 24-hour job
Luckily, it is our job. Every hour on the NPR News Now podcast, we take the latest, most important stories happening and we package them into five-minute episodes.
So you can easily squeeze them in between meetings and on your way to that thing. Listen
to the NPR News Now podcast now.
This is Fresh Air Weekend. I'm Tonya Mosley. We are living in the age of AI, and for a while now, chatbots have been helping students
take notes during class, put together study guides, make outlines, and summarize novels
and textbooks.
But what happens when we start handing over even bigger tasks, like writing entire essays
and work assignments, and asking AI to help us figure
out what to eat and how to reply to emails?
While professors say more and more students are using generative AI to write essays and
complete homework assignments, one survey by Pew Research found that about a third of
teens say they use it regularly to help with schoolwork. But it's not just students.
Professors are also using generative AI to write quizzes,
lesson plans, and even soften their feedback.
One academic called ChatGPT a calculator on steroids.
And universities are working to establish guidelines
and using software to track AI use.
But some students are now pushing back on that, saying that many of these detection
tools are inaccurate.
Well, today we're joined by New York Times tech reporter Kashmir Hill, who has been tracking how AI is reshaping daily life and the ethical gray zones it creates.
Last fall, Hill actually used AI to run her life for a week, choosing what to wear, eat,
and do each day to see what the outcome would be.
Hill is also the author of Your Face Belongs to Us: A Secretive Startup's Quest to End
Privacy as We Know It, which investigates the rise of facial recognition tech and its
disturbing implications for civil liberties.
Kashmir Hill, welcome back to Fresh Air.
Hi, Tonya. It's so nice to be here.
You know, I was talking with a professor friend recently who said he really is in the middle
of an existential crisis over AI. He teaches a writing intensive course, and he actually
worries that with these tools, his job might
not even exist in a few years. And so I wanted to know from you, can you give us a sense
of just how widespread the use of this generative AI is, how it's become kind of commonplace on college campuses and in schools?
Yeah. I mean, this has been going on for a few years now, basically ever since OpenAI launched ChatGPT.
You know, students are using ChatGPT a lot to ask it questions, to answer problems, to
help write essays.
And I talked to professors, and they told me, you know, they're very sick of reading ChatGPT-ese, because individuals think when they use this tool, it makes them so smart, it helps them, you know, get such great insights. But for the professors that are reading this material, it all starts to sound the same.
That's because there are words and phrases that are used so commonly that then they become
part of the generative AI and it's spit back out?
Yeah, exactly. There are certain words that it uses. It's also just the formatting. They
said it has a certain way of doing paragraphs where it will have one sentence that's, you
know, short and then one that's long and one that's short. It really does feel like there's
a model for how it writes and they're seeing that model coming from all of these students
instead of hearing their, you know, their distinct voices and their distinct way of
thinking. And yeah, they are doing a lot to try to encourage
students to think for themselves, to maybe use the AI tools but not turn over
everything to the tools.
You know, this isn't surprising to me because people, especially students,
always are trying to find a shortcut. Plagiarism has always been an issue in academia, but the stories we are hearing are kind of astounding.
Yeah, I mean, one of the greatest pieces I've read on this is by New York magazine. It came out this month, and it was called Everyone Is Cheating Their Way Through College. And you know, they had all these interviews with students where they're saying, you know, I'm not totally dependent on ChatGPT, but I do use it to figure out what I'm going to write, how I'm going to structure it, maybe write the lead of the paper for me.
It sounded to me almost like a Mad Libs version of college where you're just kind of filling
in the blanks a little bit and thinking around what ChatGPT is doing.
Your latest piece kind of turns the tables because you took a look at how professors
are using generative AI to teach, and what did you find?
Yeah, this story started for me.
I got an email from a senior at Northeastern University who said that her professor was
misusing AI, and she sent me some materials from the class. She was
reading lecture notes that he had posted online and found in the middle of them this kind
of query, this back and forth between her professor and ChatGPT. The professor was asking ChatGPT to provide more examples, be more specific. And as a result, she had looked at PowerPoint
slides that he had posted, and she found that
those had all these telltale signs of AI, kind of extraneous body parts on office workers.
This was a business class.
Like extra fingers on an image, stuff like that.
An extra arm, you know, distorted text, because these systems aren't very good at kind of
rendering pictures of text, kind of egregious misspellings. And so she was upset. She said, I'm paying a lot for this class.
The tuition for that class was around $8,000.
And she said, I expect kind of human work from my professor.
I don't think it should be AI.
And she had filed a complaint with Northeastern
and asked for her tuition for the class back.
And, you know, first I wondered,
is this a one-off or is this something that's happening on other campuses? So I started
looking at places where students review their professors. The big site is Rate My Professors.
And I noticed that in the last year there had been this real spike in students complaining
that their professors were overly reliant on AI, using it to, you know, make materials
for class, make quizzes that didn't make sense, give
assignments that didn't have actual answers because they
were broken because these systems are not always perfect,
and using it to grade their work and give them feedback.
And the students were really upset. They felt like it was
hypocritical because they had been told not to use AI in many cases.
And yeah, they also felt shortchanged,
like they are paying for this human education
and then they were getting AI instead.
One of the complaints on Rate My Professors was, it feels like class is being taught by an outdated robot.
Wow.
You know, where is the learning in this?
And I'm just wondering what professors are actually saying.
I mean, I guess a big part of it,
as you write in this article, seems to be a resource issue.
Some professors are overworked.
Others have multiple jobs.
They might be an adjunct professor.
But what are some of the things that they're
sharing with you about why they're doing this?
Yeah, I reached out to many of the professors whose students had mentioned their AI use,
and they're very candid about it.
They said, yes, you know, I do use AI, and they told me about the different ways that
they're using it to create course materials sometimes, that it saves them a lot of time,
and that they use that time to spend with students.
Like one business professor told me that it now takes him hours to prepare lessons,
and it used to take him days.
And so he's now been able to have more office hours for students.
Some did say that they used it as a guide in grading because they have so many assignments to grade.
Some of these professors, they're adjunct professors, which means that they're not kind of tenured or full-time with the university. So they may be teaching at several different universities.
Their classes may have 50 students, 100 students, so they have hundreds of students. And they
just said it's an overwhelming workload and that AI can be helpful. You know, they've
read these papers, they've been teaching these classes for years, and they said these
papers aren't very different from one another, and ChatGPT can help me
with this.
They also said that, you know, students need to learn how to use AI.
So some of them were trying to incorporate AI into their class in order to teach students
how to use it because they will likely use it in their future careers.
They also were kind of using AI because, you know, there's a
generational divide between professors and students, and they
felt like it kind of made them hipper, or it made their class
materials fresher, and they were hoping it would be more appealing
to students.
Okay, that's interesting.
Yeah. But in some cases, that was, yeah, backfiring because the
students, they feel skeptical
of the technology.
There's also kind of a disconnect between what the professors were doing and what the
students were perceiving.
So the professors told me, at least, they weren't, you know, completely saying, okay, ChatGPT, like, come up with the lesson plan for this class. They said they were uploading documents that they had to ChatGPT and saying, kind of, convert this into a lesson plan or make a cool PowerPoint
slide for this. It was really nuanced and more complicated than I expected when I first
set out to figure out what was going on.
Okay, I'm just curious. It just depends on the subject, I would guess, but is AI good at grading?
So, I reached out to dozens of professors, and there was no real through line on this
with the professors. Some said, it's terrible at grading, and others said it was really
helpful. So, I don't know, and I don't think there's somebody who's really done a study
on this yet. What kind of surprised me is that all the professors I talked to, they're just kind of
navigating this on their own.
I did talk to one student who had figured out or suspected that his professor was using
AI to grade.
So he put in a secret prompt, you know, in an invisible font, that said basically give me
a great grade on this paper.
So it really is
this kind of cat and mouse game right now.
I actually even noticed that you asked professors in the comments section of this latest article
to share what their universities are doing. But did you find any that are putting in effective
guidelines, any institutions?
I spent a lot of time talking to faculty at Ohio University in Athens, Ohio, and they
have a bunch of generative AI faculty fellows who are really trying to figure out what is
the best way to incorporate AI into teaching and learning where it enhances the educational
experience and doesn't detract.
And I asked, kind of, like, what are the rules there? And Paul Shovlin, who is kind of the person who ended up featured in the article, said
they don't do rules because it's too hard to do hard and fast rules. It really
depends on the subject. So instead they have principles. And, you know, the
principles are kind of saying, you know, this is a new technology. We should be
flexible with it. But one of the principles was that there is no one-size-fits-all approach to AI. It really is flexible from class to
class. I would say two things that I heard were that professors should be transparent
with students about how they're using AI, and they really need to review anything that
comes out of the AI system to make sure that it's accurate, that it makes
sense, that they should be bringing their expertise to the output, not just relying
on the system.
And from what I was seeing, that was not always happening, and that's where things were going
wrong.
You know, one of the things that I keep hearing about is how hit or miss these detection tools
are as a way to combat this.
And one of your colleagues at the Times actually just wrote an article about how sometimes these detection tools get it
wrong. There was a student in Houston who received a zero after a plagiarism
detection tool identified her work as AI-generated, but she actually could
prove that she wrote it herself. I was wondering how common is this?
According to some studies, the AI detection services get it wrong anywhere from 6 percent of the time or more.
I have certainly heard many stories of students saying that it says that they used AI when they didn't.
I actually heard this from professors as well that I talked to.
People who are more sophisticated about the use of AI said they don't trust these
detection systems.
One professor told me, you know, she had uploaded her own writing to it and it said
that her writing was AI generated when she knew it wasn't.
So, there does seem to be some skepticism about these tools and some universities no
longer use them.
And instead, professors told me that when they think that something is written
by AI, they'll often talk to that student one-on-one about it. But yeah, the systems,
as I understand it, tend to be a little discriminatory, you know, for students for whom English is
a second language. They often detect that writing as AI generated when it's not. And
there's some other ways it's kind of misjudging the writing of some types of students
as being AI-generated.
Let's take a short break.
If you're just joining us, we are talking to Kashmir Hill.
She's a tech reporter for The New York Times.
And we're talking about the growing use of artificial intelligence in our daily lives,
from the classroom to the workplace to our homes.
And the deeper consequences that come with it. We'll continue our conversation after a short break.
I'm Tonya Mosley, and this is Fresh Air Weekend.
We've all been there, running around a city,
looking for a bathroom, but unable to find one.
Hello.
Do you have a restroom we could use?
A very simple free market solution
is that we could just pay to use a bathroom, but we
can't.
On the Planet Money podcast, the story of how we once had thousands of pay toilets and why they got banned. From Planet Money on NPR, wherever you get your podcasts.
Imagine if you will, a show from NPR that's not like NPR, a show that focuses not on the
important but the stupid,
which features stories about people smuggling animals in their pants, incompetent criminals, and ridiculous science studies.
And call it Wait, Wait, Don't Tell Me because the good names were taken.
Listen to NPR's Wait, Wait, Don't Tell Me.
Yes, that is what it is called, wherever you get your podcasts.
Let's get into the experiment that you did on your own life back in November.
So you allowed your life for a week to be controlled by generative AI and you had it
decide just about everything.
Your meals for the day, your schedule, your shopping list, what to wear.
Also you uploaded your voice for the tool to clone, your likeness to create videos of
you.
And what was so interesting about this experiment to me, in addition to what you did, is that each of these AI tools you revealed has its own personality.
And I'm putting that in air quotes.
But how did those personalities show up when you input your requests?
Yeah, I was trying all the chatbots. ChatGPT is the most popular.
But I tried, you know, Google's Gemini, which I found to be very kind of sterile,
just business-like. I was using Microsoft's Copilot, which I found to be a little over-eager.
Every time I interacted with it, it would ask me questions at the end of every interaction,
like it wanted to keep going. I used Anthropic's Claude, which I found to be very moralistic. You know, I
told all the chatbots I'm a journalist, I'm doing this experiment of turning my life over
to generative AI for the week and having it make all my decisions. And all the chatbots
were down to help me except for Claude, which said it thought that the experiment was a bad idea and it didn't want to help me with it because I shouldn't be outsourcing all my decision-making to AI
because it can make mistakes, it's inaccurate, the question of free will. So I kind of thought of
Claude as Hermione Granger, who is kind of upstanding.
I mean, what makes Claude special then? Because if it's saying no to that prompt but all of the others are saying yes, what makes it stand apart in this field?
It's a result of training. So I talked to Amanda Askell, who is a philosopher who works for Anthropic, and her job...
Oh, it's interesting they have a philosopher. Yes.
Yes, yes. There's a lot of new jobs in AI these days, which are quite interesting. But yeah, her job is to kind of fine-tune Claude's personality.
And so this is one of the things that she's tried to build into the system,
is high-mindedness and honesty.
And she did want the system to push back a little; she was trying to counter-program the sycophancy
that's kind of embedded in
these systems. And it was one of the only systems that would kind of tell me when it
thought something I was doing was a bad idea. And it refused to make decisions for me. So,
I was getting my hair cut, for example. And I went to ChatGPT and I said, hey, I'm going to get my hair cut. I want it to be easy. And it's like, get a bob, which
kind of speaks to why I felt so mediocre by the end of the week. That's a very average
haircut. And Claude said, I can't make that decision for you, but here are some factors
that you could think about. You know, how much time do you want to spend on your hair,
et cetera.
Did that feel like a benefit?
I did really like that about Claude. I think that's important that these systems don't act too sycophantic.
I think it's good if they're pushing back a little bit.
I still think it's important for these systems to periodically remind people that they are,
you know, word-generating machines and not human entities or independent thinking machines.
But yes, I liked Claude.
And a lot of the experts I
talked to who used generative AI a lot in their work said they really like Claude. It's
their favorite chatbot. And they especially liked it for writing. They said they thought
it was the best writer of the group. But yeah, it was interesting. But ChatGPT is the one
I ended up using the most that week, in part because it was game to make all my decisions
for me.
You had it plan out meals.
I'll tell you when I read that, I actually perked up like,
oh, wait a minute, you know, because we have to choose
what's for dinner every single day, seven days a week
till we die.
I mean, it's just something we always have to do.
How did that feel to let it plan out your meals
and grocery lists, and did it do a good job?
Yeah, so at the beginning of the week, you know, it unfortunately can't go out to the grocery store for us yet, but it made the list.
I said organize it by section.
And you know, my husband and I usually go back and forth throughout the week making
this list.
And ChatGPT just did it in seconds, which was wonderful.
We went to the store, we bought everything.
But as we're picking up the items, I'm just realizing
ChatGPT wants me to be a very healthy person.
It picked out very healthy meals.
It actually wanted me to make breakfast, lunch, and dinner every day,
which is laughable.
Like, I work for the New York Times.
Yeah.
Like, I'm busy.
I'm lucky if I have, like, toast or cereal for breakfast and like a bowl of chips for lunch.
So it had these unrealistic expectations about how much time I had.
And I told it, hey, we need some snacks. Like I can't just be eating like a healthy, well-rounded meal morning, afternoon, and night.
And so its snacks for us were almonds and dark chocolate, like no salt and vinegar chips,
no ice cream.
And so it was interesting to me that embedded in these systems was, you know, be very healthy.
It was like kind of an aspirational way of eating.
And I did wonder if that has something to do with the scraping of information from the
internet, that people kind of project their best selves on the internet.
Like maybe it had mostly scraped wellness influencers' ways of eating as opposed to real people's.
Were there any tools that you felt like, oh I could keep this in my life and it would improve my life?
So it did make me feel boring overall. Kind of made me feel like a mediocre version of myself.
But I did like that it freed me of decision paralysis. Sometimes I'm bad at making decisions. So at one point I had it choose the paint
color for my office and I am very happy with my paint color. Though when I told
the person in charge of model behavior at OpenAI that I used it to choose my
paint color, she was kind of horrified.
And it chose what color for you?
The color name is Secluded Woods, and the actual color was brisk olive.
But I did like it.
My husband also agreed that it was the best of the five colors that ChatGPT had recommended
and that it ultimately chose.
But she said, man, that's just like asking a random person on the street.
But what I really like it for around my house is taking a photo of a problem.
Like I had discolored grout in the shower, and I took a photo and uploaded it to ChatGPT
and I'm like, can you tell me what's going wrong here? And it's very good at, I think, diagnosing those problems,
at least when I do further research online. And so that has been kind of my main use case for it since. You know, Kashmir, you're deep
into this world because of your job. You've done these experiments, you've
talked to so many experts. After that article came out with your experiment
back in the fall, you asked yourself if you want to live in a world where we are
using AI to make all of our decisions all the time. It almost feels like that's not even a question really because we are seeing it in real time. But I'm just
wondering what did you come to?
I personally don't want to live in a world where everybody is filtering their kind of
every decision through AI or turning to AI for every message they write. I do worry about how much
we use technology and how isolating it can be and how it might disconnect us from one another.
So, you know, I write about technology a lot. I see a lot of benefits to its use, but I do hope that we can learn to maybe de-escalate our
technologies a bit, be together more in person, talk to each other by voice. If people worry about
AI replacing us or taking our jobs, I worry more about it coming between us and just its fraying of societal fabric,
kind of this world in which if all of us are talking to an AI chatbot all the time,
it is super personalized to us.
It's telling us what we want to hear.
It's very flattering.
I worry about that world in terms of filter bubbles and how we would increasingly kind of be alone in how we're seeing the world.
And how it will distort our ability to interact with each other.
Yes, distort our shared sense of reality and our ability to be with each other,
connect with each other, communicate with each other.
So I just hope we don't go that way with AI.
Kashmir Hill, thank you so much for your reporting and this conversation.
Oh, thank you so much for this conversation. It was wonderful.
Kashmir Hill is a tech reporter for the New York Times.
In 1971, the year after the Beatles broke up,
John Lennon and Yoko Ono moved from London to New York.
They spent the next 18 months living in a small Greenwich Village apartment before moving uptown to the Dakota, a more lavish and secluded
building. During that time, they held a benefit concert for the Children of Willowbrook, a
state-run Staten Island facility housing the disabled in horrifying conditions. It was
the only full-length concert Lennon gave after The Beatles,
and a new film by Kevin Macdonald documents both the concert and that
period in Lennon's life. It's called One to One: John and Yoko, now streaming on demand. Our TV critic David Bianculli has this review.
Kevin Macdonald and editor and co-director Sam Rice-Edwards framed their movie about John Lennon and Yoko Ono in the early '70s
by looking through the lens of television.
In this case, it's a perfect framing device.
As Lennon arrived in this country, being more politically outspoken than he was as a Beatle,
he and his wife, Yoko Ono, eagerly went on TV talk shows to rally support for their causes,
showing up everywhere from Dick Cavett to a week co-hosting The Mike Douglas Show.
And even more eagerly, John Lennon devoured television.
In their small Greenwich Village apartment, which is recreated for the documentary, John
and Yoko installed a TV at the foot of their bed so they could lounge around watching.
And both the variety and sheer volume of what was available delighted them.
We're very comfortable here, especially like having TV, you know, 24 hours a day or something.
Suits me fine. Suits me fine.
What are your favorite programs?
I just like TV, you know. To me it replaced the fireplace when I was a child.
And if you want to know what 20 million Americans are talking about on Saturday night, it's
what they saw on Friday night on TV.
It's a window on the world.
Whatever it is, that's that image of ourselves that we're portraying.
They consumed it all, from The Waltons to Watergate coverage and lots and lots of news
about Richard Nixon and Vietnam
and George Wallace and Attica.
They also watched American football games
and beauty pageants.
And in one of Lennon's first local radio appearances
after arriving, he responded to a phone-in caller
by demonstrating his familiarity
with televised beauty pageants.
Yes?
I'd like to ask John a question.
Sure.
John?
Yeah?
I can't believe I'm speaking to a myth.
A myth?
Yeah.
Myth world or myth universe?
For Lennon, it was a time of reinvention, both musically and in terms of his political
involvement.
He fell in with activists like Jerry Rubin and appeared and performed at a rally protesting the ten-year sentence of another activist, John Sinclair, for minor drug possession.
But after agreeing to headline a series of national protest tour dates leading up to
the 1972 national political conventions, Lennon backed off because he sensed the leaders of
that movement were advocating violence.
Even so, Lennon's activities got
him singled out by the Nixon administration, which threatened to deport him and installed
listening devices on his phone. And just as President Nixon ended up secretly taping his
own White House conversations, John Lennon ended up taping his own phone calls, too.
From heated talks with his then manager
to casual chats with friends,
they provide some of the best moments in this documentary.
In this call, which is loaded with suspicious static,
a reporter asks about the wiretap rumors.
People say their phones are bugged.
First of all, I thought it was paranoia
given reading all these, you know, conspiracy theory books.
You can hear things going on on the phone every time you pick it up, people clicking
in and out.
There's a lot of repairs going on downstairs to the phones every few days down in the basement.
I started taping my own phone calls too, so, I don't know why, but at least I'll have a copy of whatever they're going to try and say I'm
talking about.
Eventually, John and Yoko find yet another cause by watching TV.
After seeing a news report by ABC correspondent Geraldo Rivera exposing the terrible treatment
of young disabled patients at Willowbrook State Development Center, John and Yoko decide
to hold a benefit
concert at Madison Square Garden, just as fellow Beatle George Harrison had done the
year before with his Concert for Bangladesh.
They called theirs the One to One Concert, and this film plays many songs from that show
full length.
Imagine, Instant Karma, and Mother, a searingly emotional song about John feeling abandoned
by his parents, a father who left and a mother who died.
And even a Beatles song, to which Lennon adds an overt message of opposition to the Vietnam
War to the audience's obvious delight.
He roller-coaster, he got early warning
He got muddy water, he one mojo filter
He say one and one and one is three
Got to be good-looking 'cause he's so hard to see
Come together
Right now
Stop the war
Sean Ono Lennon is one of this documentary's executive producers,
which may explain why some of the more unflattering details from the period
are omitted or downplayed.
But Yoko gets her due here, as she should,
as an artist in her own right,
and as the victim of some awful treatment by Beatles fans and the press. And by using TV to
tell their story, One to One: John and Yoko retells the story of that time as well. Incendiary times.
Inspirational artists. Amazing music. David Bianculli is professor of television studies at Rowan University. He reviewed One
to One: John and Yoko, now streaming on demand.
Coming up, journalist Amanda Hess talks about how technology is changing motherhood. I'm
Tonya Mosley, and this is Fresh Air Weekend.
My next guest is Amanda Hess.
She's a journalist, cultural critic, and now author of a new memoir titled Second Life:
Having a Child in the Digital Age.
The book starts with a moment every expecting parent dreads, a routine ultrasound that is
suddenly not routine.
When Hess was 29 weeks pregnant, doctors spotted something that indicated
her baby could have a rare genetic condition.
What followed was a spiral of MRIs, genetic testing,
consultations with specialists,
and like many of us would do,
a late night dive into the internet for answers.
That search led her down a rabbit hole
and to fertility tech, AI-powered embryo
screening, conspiracy theories, YouTube birth vlogs, the performance of motherhood on Instagram,
and threaded through it all, an unsettling eugenic undercurrent suggesting which children
are worth having.
Known for her commentary on Internet culture and gender at the New York Times, Hess turns her critique inward, asking herself, what does it mean to become a parent while plugged
into an algorithmic machine that sorts, scores, and sells versions of perfection and what's
considered normal?
Amanda Hess, welcome to Fresh Air.
Thank you so much for having me.
You opened this book with a moment that I mentioned soon-to-be parents fear: a routine ultrasound that shows a potential abnormality. And at the time, you were seven months pregnant. What did the doctor share with you?
He told me that he saw something that he didn't like, and that phrase has really stuck with me.
But what he saw was something that when I saw it, I thought was cute, which is that
my son was sticking out his tongue.
And that's abnormal if the baby is like not just bringing the tongue back into the mouth.
Although of course I didn't know that at the time.
After several weeks of tests when I was about eight months pregnant, we learned that my
son has Beckwith-Wiedemann syndrome, which is an overgrowth disorder that, among other
things, can cause a child to have a very enlarged tongue.
One of the things you do in your writing that's really powerful is you integrate the ways
that technology really infiltrates every waking moment of our lives, including this particular
moment when the doctor looked at your ultrasound.
And I'd like for you to read about this moment just before you receive that news from the
doctor.
You're on the sonogram table.
You're waiting for the doctor to arrive.
And as you're lying there with that goo that they put on your stomach to allow for the ultrasound wand to
glide over your pregnant belly, your mind begins to race.
Can I have you read that passage?
Sure.
The errors I made during my pregnancy knocked at the door of my mind.
I drank a glass and a half of wine on Mark's birthday before I knew I was pregnant.
I swallowed a tablet of Ativan for acute anxiety after I knew.
I took a long hot bath that crinkled my fingertips.
I got sick with a fever and fell asleep without thinking about it.
I waited until I was almost 35 years old to get pregnant.
I wanted to solve the question of myself before bringing another person into the world, but
the answer had not come. Now my pregnancy was, in the language of obstetrics, geriatric.
For seven months we'd all acted like a baby was going to come out of my body like a rabbit yanked from a hat.
The same body that ordered mozzarella sticks from the late-night menu and stared into a computer like it had a soul.
The body that had, just a few years prior, snorted a key of cocaine supplied by the party bus driver hired to transport it to Medieval Times.
This body was now working very seriously
to generate a new human. I had posed the body for Instagram, clutching my bump with two
hands as if it might bounce away. I had bought a noise machine with a womb setting and thrown
away the box. Now I lay on the table as the doctor stood in his chamber, rewinding the tape of my life.
My phone sat on an empty chair six feet away.
Smothered beneath my smug maternity dress,
it blinked silently with text messages from Mark.
If I had the phone, I could hold it close to the exam table
and Google my way out.
I could pour my fears into its portal and process them into answers.
I could consult the pregnant women who came before me, dust off their old message board
posts and read of long ago ultrasounds that found weird ears and stuck out tongues.
They had dropped their baby's fates into the internet like coins
into a fountain, and I would scrounge through them all, looking for the lucky penny. For
the woman who returned to say, it turned out to be nothing. Trick of light.
Thank you so much for reading that, Amanda. I think that every soon-to-be mother, every
mother can really identify with that.
And I think just in life, like, we've come to this place with our relationship with technology
that we can kind of Google our way out of tough moments.
You write about receiving that first alarming warning of this abnormal pregnancy and how
even before getting a second or third opinion that clarified this diagnosis, your mind didn't jump to something you did,
but to something that you were.
And that moment seemed to crystallize
kind of this deeper fear about your body and how it's surveilled and judged, especially in pregnancy.
Can you talk just a little bit about how technology also kind of fed into your judgment about yourself?
Yeah. You know, I started to think about writing a book about technology before I became pregnant,
not sort of planning to focus it on this time in my life. And then instantly once I became
pregnant, my relationship with technology became so much more intense.
And I really felt myself being influenced by what it was telling me.
I'm someone who, you know, I understand that reproduction is a normal event, but it really
came as a shock to me when there was a person growing inside of me and I felt like I really
didn't know what to do.
And so I also, you know, early in my pregnancy didn't wanna talk to any
people about it.
So I turned to the Internet, I turned to apps.
Later when my child was born, I turned to gadgets.
And it was only later that I really began to understand that these technologies
work as narrative devices and
they were working in my life to tell me a certain story about my role as a parent and
the expectations for my child.
Is there something inherently different about an app, and us being able to hold these technologies, you know, in the palm of our hand and constantly have access to them? You know, I'm thinking about when I was a pregnant person and I just had all the books, like What to Expect When You're Expecting, and other types of text. Is there something
inherently different about our relationship when it is presented to us
in the form of technology that has a different effect on us?
I think so.
I had books too, and the first difference I noticed is that I wasn't carrying this
like big pregnancy book everywhere I went.
But my phone was always there.
And so even if I did not intend to bring my pregnancy app with
me, it was there constantly. And so I found myself looking at it again and again. I think
I was looking for reassurance that I was doing okay. And so even if I wasn't doing exactly
what this app had said, I wasn't missing something major. And there was someone, it really felt like, along with me who was keeping track.
And so there became this real intimacy to our pseudo relationship that I didn't have
with like an informational pregnancy book.
That sense of reassurance, too. I want to talk a little bit about like the privilege
in that because on the face of it, it's like the ability to know and understand that all
seems positive.
I'm thinking about, like, some of the big technologies that are coming to fruition now or already there, like OpenAI's Sam Altman funding Genomic Prediction, which is supposedly going to offer embryo tests predicting everything from diabetes risk to the potential IQ of a baby.
But you actually point this out in the book that there is a growing divide because on
one side there are these affluent parents who have access to this kind of screening, and then on the other, many parents can't even get basic access to prenatal care.
How did your experience kind of help you reflect on those extremes?
You know, I think, after the particular circumstances of my pregnancy, I became really interested in prenatal testing and how it was advancing. And interested in the fact that it seemed like such an exciting
category for all of the male tech leaders that we know so much about now. And, you know,
it was only through, like, reading about them a little bit that I came to understand this new ascendant technology that offers what they call polygenic analysis of embryos.
So different outlets promise to find different characteristics, but they're offering everything from screening that predicts an increase in IQ points to screening for hereditary cancers. All of this stuff is something that you can only use if you're going to go through IVF. And so after paying for this embryo screening,
which is a few thousand dollars,
you're also choosing to go through in vitro fertilization,
which is not only just a really difficult experience
for many people, but extremely expensive
and out of reach for most people.
And as I was reading one story about this, I was really struck by a woman who founded
one of these companies who told one of her investors that instead of going through IVF
herself, she should simply hire a surrogate and have her do it for her. And that to me really crystallized this idea
of like a reproductive technology gap.
I think the thing that worries me the most
about these technologies is again,
there seems to be so much interest and investment
in understanding what certain children will be like and trying
to prevent children with certain differences and very little investment in the care for
those children, research that could help these children and adults.
And so I really found myself on both sides of this divide, where I had access to what was at the time, you know, some advanced prenatal testing, but was also able to see after my child's birth that, you know, he's being born into a world that is not innovating in the space of accommodating disabilities in the way that it is innovating in the space of trying to prevent them.
I want to talk a little bit about this idea of surveillance.
So your work as a cultural critic, you often touch on surveillance, both state and personal.
And in this book, you describe how new parents also surround themselves with surveillance
tech, so baby monitors and nursery cameras that are constantly watching.
And of course, in our daily life, we're all under so many forms of surveillance.
How do you think this surveillance culture is affecting us?
Or how did it affect you in those early days as a mother, when you've got that baby monitor in your baby's room?
Like are we habituating our children to be watched 24-7?
I think we are.
I mean, I had this experience, during pregnancy, of habituating myself to some external authority watching my pregnancy.
And then after my child was born, I became the authority who was watching him and
surveilling him.
And I think there's this way that surveillance can become
confused with care and attention and love.
And I had this experience with my kids where I'd installed this
fancy baby monitor that I was testing out for the
book and the video was uploaded to some cloud server so I could watch it from anywhere.
I could watch them if they were taking a nap in their crib, but I was at the coffee shop
down the street or whatever and somebody else was there with them.
And it could make it seem as if I were close to them,
because I would see my adorable children
and have this experience of being able to just watch them
sleep peacefully, which is so different from the experience
of dealing with them most of the time.
But it wasn't until one night when the camera was set up
and I laid down with my son in his bed and I sensed this presence
in the corner of the room, these like four red glowing eyes.
You could see it from his perspective, right?
Yes, that I could really see it from his perspective, and like, he's not seeing, you know, this beautiful, smiling image of me watching him. Like, he's seeing four mechanical eyes.
And I spoke with my friend who had used a camera with her kid, who eventually asked for it to be taken out when she was three years old or something and could articulate this, because she didn't want the eye, as she called it, to be watching her in her bedroom.
And I think, you know, so many times these technologies
are purchased by parents before their kids are even born.
And they want to do what's right, and they're scared, you know.
And they want to make sure that they have everything they need,
like, before the child arrives.
And so, we're not even giving ourselves a chance
to really understand what it is we're getting
and whether we actually need it.
Right.
I mean, this goes back to like your ability to control the situation.
I remember there was a time when I think our baby monitor went out in the middle of the
night.
So I woke up like from a deep sleep.
It's eight o'clock. I'm like, wow, we slept for like eight, nine hours. And I realized that the baby monitor had died.
I was completely freaked out, like, what if I missed like a catastrophe that happened?
But then when you think back, it's like, okay, if that were the situation, I would have heard it. I mean, I have my senses.
Do you feel like these technologies in many instances kind of take us outside of ourselves, where we're, like, giving control over to the technology?
Yeah, I had this experience with my son
where I heard about a robotic crib called the SNOO
before he was born.
I got this secondhand version off of a parental listserv and set it up before he was born. So I was just sitting there waiting for him to come sleep in it, and
the SNOO, you know, promises that SNOO babies tend to sleep one to two hours more than other babies,
which is such a tantalizing promise to a new parent. Like, one to two hours is so many hours for a parent of a newborn.
And my son just really didn't take to the SNOO. And I spent such a long time, like, trying to troubleshoot the SNOO to try to get it to work for my baby
until eventually I found that I was really troubleshooting my child.
And he had become so entwined with the technology that I really didn't know where the workings
of the machine ended and where
my son's, you know, sleep patterns began.
And so this technology that's often sold as a tool to help us better understand our kids
and get like data insights into them, in this case for me, it actually made it more difficult
for me to understand what was going
on with him and how he really wanted to sleep.
Well, Amanda, I really appreciate you writing this book and thank you so much for taking
the time to talk with us about it.
Thank you so much.
Amanda Hess is a journalist, cultural critic, and author of the memoir Second Life: Having a Child
in the Digital Age.
Fresh Air Weekend is produced by Teresa Madden.
Fresh Air's executive producer is Danny Miller.
Our managing producer is Sam Brigger.
Our technical director and engineer is Audrey Bentham.
Our digital media producer is Molly Seavy-Nesper.
Our consulting video producer is Hope Wilson.
With Terry Gross, I'm Tonya Mosley.