Fresh Air - Professors Are Using A.I., Too. Now What?
Episode Date: May 21, 2025

Colleges and universities have been trying to fight against students using tools like ChatGPT to do class assignments and communicate. But here's a twist: Professors and educators are now turning to A.I. to prepare lessons, teach, and even grade students' work. We talk with NYT tech reporter Kashmir Hill about these conflicts on campus. Also, she shares what she learned after giving over her life for a week to A.I. tools, which wrote emails for her, planned her meals, chose what she should wear, and even created video messages for TikTok using her likeness and a clone of her voice. David Bianculli reviews a new documentary about John Lennon and Yoko Ono.

Learn more about sponsor message choices: podcastchoices.com/adchoices

NPR Privacy Policy
Transcript
Keeping up with the news can feel like a 24-hour job. Luckily, it is our job. Every hour on
the NPR News Now podcast, we take the latest most important stories happening and we package
them into five-minute episodes so you can easily squeeze them in between meetings and
on your way to that thing. Listen to the NPR News Now podcast now. This is Fresh Air.
I'm Tonya Mosley.
We are living in the age of AI.
And for a while now, chat bots have been helping students take notes during class and put together
study guides, make outlines and summarize novels and textbooks.
But what happens when we start handing over even bigger tasks, like writing entire essays
and work assignments and asking AI to help us figure out what to eat and how to reply to
emails?
Well, professors say more and more students are using generative AI to write essays and
complete homework assignments.
One survey by Pew Research found that about a third of teens say they use it regularly
to help with schoolwork.
But it's not just students. Professors are also using generative AI to write quizzes,
lesson plans, and even soften their feedback. One academic called ChatGPT a calculator on steroids.
And universities are working to establish guidelines and using software to track AI
use.
But some students are now pushing back on that, saying that many of these detection
tools are inaccurate.
Today we're joined by New York Times tech reporter Kashmir Hill, who has been tracking
how AI is reshaping daily life and the ethical gray zones it poses.
Last fall, Hill actually used AI to run her life for a week,
choosing what to wear, eat, and do each day
to see what the outcome would be.
Hill is also the author of Your Face Belongs to Us,
A Secretive Startup's Quest to End Privacy as We Know It,
which investigates the rise of facial recognition tech
and its disturbing implications
for civil liberties.
Kashmir Hill, welcome back to Fresh Air.
Hi, Tonya.
It's so nice to be here.
You know, I was talking with a professor friend recently who said he really is in the middle
of an existential crisis over AI.
He teaches a writing intensive course, and he actually worries that with
these tools, his job might not even exist in a few years. And so I wanted to know from
you, can you give us a sense of just how widespread the use of this generative AI is, how it's
become kind of commonplace on college campuses and in schools?
Yeah. I mean, this has been going on for a few years now,
basically ever since OpenAI launched ChatGPT. You know,
students are using ChatGPT a lot to ask it questions, to
answer problems, to help write essays. And I talked to
professors, and they told me, you know, they're very sick
of reading ChatGPT essays,
because individuals think when they use this tool it makes them so smart, it helps them, you know,
get such great insights, but for the professors that are reading this material, it all starts to sound the same.
That's because there are words and phrases that are used so commonly that then they become part of the generative AI
and it's spit back out?
Yeah, exactly. There are certain words that it uses. It's also just the formatting.
They said it has a certain way of doing paragraphs, where it will have one sentence that's, you know, short,
and then one that's long and one that's short. It really does feel like there's a model for how it writes,
and they're seeing that model coming from all of these students instead of hearing their, you know, their distinct voices and their distinct way of thinking. And yeah, they are doing a lot to try to
encourage students to think for themselves, to maybe use the AI tools but not turn over everything
to the tools. You know, this isn't surprising to me because people, especially students, always are
trying to find a shortcut. Plagiarism has always been an issue in academia.
But the stories we are hearing are kind of astounding.
Yeah, I mean, one of the greatest pieces I've read on this is from New York Magazine. It came
out this month, and it was called Everybody Is Cheating Their Way Through College.
And you know, they had all these interviews with students
where they're saying, I'm not totally dependent
on ChatGPT, but I do use it to figure out
what I'm gonna write, how I'm gonna structure it,
maybe write the lead of the paper for me.
It sounded to me almost like a Mad Libs version of college
where you're just kind of filling in the blanks a little bit
and thinking around what ChatGPT is doing.
Your latest piece kind of turns the tables because you took a look at how
professors are using generative AI to teach, and what did you find?
Yeah, this story started for me when I got an email from a senior at
Northeastern University who said that her professor was misusing AI, and she sent me
some materials from the class. She was reading lecture notes that he had posted online and
found in the middle of them this kind of query, this back and forth between her professor
and ChatGPT. The professor was asking ChatGPT to provide more examples, be more specific. And
as a result she had looked at PowerPoint slides that he had
posted, and she found that those had all these telltale signs of AI, kind of
extraneous body parts on office workers. This was a business class.
Like extra fingers on an image, stuff like that.
An extra arm, you know, distorted text, because these systems aren't very good at
kind of rendering pictures of text, kind
of egregious misspellings. And so she was upset. She said, I'm paying a lot for this
class. The tuition for that class was around $8,000. And she said, I expect kind of human
work from my professor. I don't think it should be AI. And she had filed a complaint with
Northeastern and asked for her tuition for the class back.
And, you know, first I wondered, is this a one-off or is this something that's happening on other campuses?
So I started looking at places where students review their professors.
The big site is Rate My Professors.
And I noticed that in the last year there had been this real spike in students complaining that their professors were overly reliant on AI, using it to, you know, make materials for
class, make quizzes that didn't make sense, give assignments that didn't have actual answers
because they were broken because these systems are not always perfect, and using it to grade
their work and give them feedback.
And the students were really upset.
They felt like it was hypocritical because they had been told not to use AI in many cases.
And yeah, they also felt shortchanged, like they are paying for this human education and
then they were getting AI instead.
One of the complaints on Rate My Professors was it feels like class is being taught by
an outdated robot. Wow.
You know, where is the learning in this?
And I'm just wondering what professors are actually saying.
I mean, I guess a big part of it, as you write in this article, seems to be a resource issue.
Some professors are overworked. Others have multiple jobs.
They might be an adjunct professor.
But what are some of the things that they're sharing with you about why they're doing this?
Yeah, I reached out to many of the professors whose students had mentioned their AI use.
And they're very candid about it. They said, yes, you know, I do use AI. And they told
me about the different ways that they're using it to create course materials sometimes, that
it saves them a lot of time, and that they use that time to spend with students.
Like one business professor told me that it now takes him hours to prepare lessons where
it used to take him days. And so he's now been able to have more office hours for students.
Some did say that they used it as a guide in grading because they have so many assignments
to grade. Some of these professors, they're adjunct professors,
which means that they're not kind of tenured or full-time with the university. So they
may be teaching at several different universities. Their classes may have 50 students, 100 students.
So they have hundreds of students. And they just said it's an overwhelming workload and
that AI can be helpful. You know, they've read these papers, they've been teaching these classes for years, and they said these papers aren't very different from one another
and ChatGPT can help me with this. They also said that, you know, students need to learn
how to use AI. So, some of them are trying to incorporate AI into their class in order
to teach students how to use it because they will likely use it in their future careers. They also were kind of using AI because, you know, there's a
generational divide between professors and students and they felt like it kind
of made them hipper or it made their class materials fresher and they were
hoping it would be more appealing to students.
Okay, that's interesting.
Yeah. But in some cases that was, yeah, backfiring because the students, they feel skeptical
of the technology.
There's also kind of a disconnect between what the professors were doing and what the
students were perceiving.
So the professors told me, at least, they weren't, you know, completely saying, okay,
ChatGPT, like, come up with the lesson plan for this class.
They said they were uploading documents that they had to ChatGPT and saying,
kind of, convert this into a lesson plan or make a cool PowerPoint slide for this.
It was really nuanced and more complicated than I expected when I first
set out to figure out what was going on.
Okay, I'm just curious. It's just depended on the subject, I would guess, but is AI good
at grading?
So I reached out to dozens of professors, and there was no real through line on this
with the professors. Some said, it's terrible at grading, and others said it was really
helpful. So I don't know, and I don't think there's somebody who's really done a study
on this yet. What kind of surprised me is that all the professors I talked to,
they're just kind of navigating this on their own. I did talk to one student who had figured
out or suspected that his professor was using AI to grade. So, he put in a secret prompt,
you know, in an invisible font that said basically give me a great grade
on this paper. So it really is this kind of cat and mouse game right now.
I actually even noticed that you asked professors in the comments section of this latest article
to share what their universities are doing. But did you find any that are putting in effective
guidelines, any institutions? I spent a lot of time talking to faculty at Ohio University in Athens, Ohio, and they
have a bunch of generative AI faculty fellows who are really trying to figure out what is
the best way to incorporate AI into teaching and learning where it enhances the educational
experience and doesn't detract.
And I asked kind of like, well, what are the rules there?
And Paul Shovlin, who is kind of the person who ended up featured in the article,
said they don't do rules because it's too hard to do hard and fast rules.
It really depends on the subject.
So instead they have principles.
And, you know, the principles are kind of saying, you know, this is a new technology.
We should be flexible with it.
But one of the principles was there is no one-size-fits-all
approach to AI.
It really is flexible from class to class.
I would say two things that I heard were that professors
should be transparent with students about how they're using
AI, and they really need to review anything that comes out
of the AI system to make sure
that it's accurate, that it makes sense, that they should be bringing their expertise to
the output, not just relying on the system.
And from what I was seeing, that was not always happening, and that's where things were going
wrong.
You know, one of the things that I keep hearing about is how hit or miss these detection tools
are as a way to combat this.
And one of your colleagues at the Times actually just wrote an article about how sometimes
these detection tools get it wrong.
There was a student in Houston who received a zero after a plagiarism detection tool identified
her work as AI-generated, but she actually could prove that she wrote it herself.
I was wondering how common is this?
According to some studies, the AI detection services
have error rates of around 6 percent or higher.
I have certainly heard many stories of students
saying that it says that they used AI when they didn't.
I actually heard this from professors as well
that I talked to.
People who were more sophisticated about the use of AI said they don't trust these detection systems.
One professor told me, you know, she had uploaded her own writing to it and it said that
her writing was AI generated when she knew it wasn't.
So there does seem to be some skepticism about these tools and some universities no longer use them.
And instead, professors told me that
when they think that something is written by AI, they'll often talk to that student one-on-one
about it. But yeah, the systems, as I understand it, tend to be a little discriminatory.
You know, for students for whom English is a second language, they often detect that writing
as AI generated when it's not. And there's some other ways it's kind of misjudging the writing of some
types of students as being AI-generated.
I think one of the questions you posed in your piece that kind of hung in the air was
whether there is actually going to be a point in the foreseeable future where, say, much
of the graduate student teaching assistants' jobs can be done by AI.
And I wondered if that is also something that you've been talking with academics about.
Yeah. So, a couple of the professors that I spoke with had created kind of custom chatbots
for their classes where they had uploaded past materials from the class or uploaded assignments that they
had graded so that the chatbot could see how they grade, what kind of feedback they give.
And they use these chatbots as kind of tutors for the class, so students can ask questions
about the class or ask for feedback.
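To make that concrete, here is one minimal sketch of how such a tutor bot could be wired up, assuming the OpenAI chat completions API in Python; the model name, course, and example feedback below are hypothetical placeholders, not details from Hill's reporting.

# A minimal sketch, not Hill's reporting: one way a professor's
# "custom tutor chatbot" could be built, folding past graded examples
# into the system prompt so the model imitates the professor's
# grading style. Model name, course, and feedback are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PAST_FEEDBACK_EXAMPLES = """\
Essay excerpt: "Profit is the only duty of a firm..."
Professor's feedback: "Strong thesis, but engage the counterargument. B+."
"""

SYSTEM_PROMPT = (
    "You are a teaching assistant for an undergraduate business class. "
    "Answer student questions and give feedback in the professor's "
    "style, shown in these past examples:\n" + PAST_FEEDBACK_EXAMPLES
)

def ask_tutor(student_message: str) -> str:
    """Send one student question to the tutor bot and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("Can you give me feedback on my draft thesis statement?"))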
There was a Harvard professor, David Malan, who has one of these chatbots for a class
on fundamentals of computer programming.
And he said, you know, his hundreds of students used it a lot.
They said it was very helpful.
And it meant that fewer of them were coming to office hours for kind of remedial help.
Another professor said the same thing.
This was a great tool for students who are unlikely to seek out help or come to office
hours. They will talk to
the chat bot. And yeah, one of the professors I talked to at University of Washington said,
it really is doing what teaching assistants do and could replace them. And that made me
worried because my understanding is that teaching assistants often become, you know, the professors
of the future. So I said, well, what happens to the pipeline? And she said, it's going to be a problem.
So it's worrisome to think about the kind of replacement of labor by AI. And labor did
come up a lot in the conversations I was having.
Getting back to what it is doing to us as individuals, have there been any studies or research around what it might be
doing to our critical thinking and problem-solving skills?
Have we been using it long enough to know?
So, the only study that I have written about in that realm was about AI's effect on our
creativity. And this was a study where they had a bunch of writers
writing short stories. And one group of writers was given ChatGPT as an assistant, and the
other group of writers wrote unassisted. And then the stories that they produced were judged.
The people using ChatGPT, as individuals, got
essentially better ratings of the stories that they wrote, that they were more creative
or more interesting than the group that was not using ChatGPT. But then taken as a whole,
the people who were working unassisted were the more creative group, because all the
people that had been using ChatGPT kind of converged on the same set of ideas.
So as individuals, these writers were improved by ChatGPT.
As a group, it had this flattening effect, which I thought was very interesting. As we're
talking about more and more people starting to use ChatGPT, we will essentially start converging on the same way of thinking
or writing or expressing ourselves, and that really worries me.
You know, it also brings up for me, getting back to grading, we know that sometimes,
depending on the subject, grading really is subjective. It's the professor's subjective view of what is being written and whether or not
it's creative.
But I mean, what you're saying could really destabilize or may have already destabilized
that measure for grading because if there is a paper that is grammatically correct,
it sounds better, but it is less creative than something by someone who actually has sat down and written it themselves. There's just an unevenness there that could cause
a bigger issue in the future, I'm guessing.
Yeah. This is, you know, there's a lot of angst for professors. Which is the better
paper? The one that's clearly written by a human, with flaws, you know, spelling mistakes, uneven structure, or a paper that
was produced with the help of ChatGPT?
How do you even compare those?
Is one better than the other?
And I think professors are really struggling with that.
You know, Kashmir, have we been here before?
I mean, I'm thinking about how people were once afraid of what introducing calculators and computers would do, how they would basically erode critical
thinking and problem-solving skills. Are there parallels to today's debates or is what we're
seeing like nothing we've ever seen before or experienced before?
I think with most technologies, we've experienced it before.
Like, life is cyclical.
Calculators did come up a lot in my conversations, and, you know, it was compared to calculators.
A lot of professors said, well, you know, even in an age of calculators, we still teach
students how to do basic math functions that they can then outsource to the calculators.
But we do want them to have the underlying knowledge that's important for
the formulation of our brains.
But yeah, I think about this a lot with technology.
I mean, once I started using a calculator, I think my math skills did deteriorate.
The way we all use Google now, people say our memories are not as good because
we're so used to just being able to turn to Google to get the facts to find out,
well, who was that person in that movie? You don't spend as much time, you know, pulling that out of your brain.
You just turn to Google. I think about it with mapping apps, the fact that we're all so used to pulling up Google or Waze or
whatever your mapping app is of choice that you forget how to get around, which I discovered I did an
experiment once where I switched to a flip phone for a month, which was wonderful in
many ways. But I realized in my town, I could not drive anywhere more than ten
minutes away. I did not know how to navigate the area I lived in because I was so used
to outsourcing that. So, you know, these technologies in many ways make our lives,
you know, easier.
There's so many benefits to it, but I think we do lose some skills when we
outsource things to AI, whether it is, yeah, how to navigate the world or, yeah,
how to write a paper.
Let's take a short break.
If you're just joining us, we are talking to Kashmir Hill, a tech
reporter at the New York Times about the growing use of artificial intelligence in our daily
lives from the classroom to the workplace to our homes and the deeper consequences that
come with it. We'll continue our conversation after a short break. This is Fresh Air.
I'm Tonya Mosley, co-host of Fresh Air. At a time of sound bites and short attention spans, our show is all about the deep dive.
We do long form interviews with people behind the best in film, books, TV, music and journalism.
Here our guests open up about their process and their lives in ways you've never heard
before.
Listen to the Fresh Air podcast from NPR and WHYY.
On the Indicator from Planet Money podcast, we're here to help you make sense of the
economic news from Trump's tariffs.
It's called in game theory a trigger strategy, or sometimes called grim trigger, which sort
of has a cowboy-esque ring to it.
To what exactly a sovereign wealth fund is.
For insight every weekday, listen to NPR's The Indicator from Planet Money.
Look, we get it.
When it comes to new music, there is a lot of it.
And it all comes really fast.
But on All Songs Considered, NPR's music recommendation podcast, we'll handpick what we think is the
greatest music happening right now and give you your next great listen.
So kick back, settle in, get those eardrums wide open, and get your dose of new music
from all songs considered only from NPR.
Your employer, The New York Times, actually has sued OpenAI and Microsoft for using articles
to train large language models.
The argument is that the paper's articles are one of the biggest sources for copyrighted
text that OpenAI used to build ChatGPT, basically siphoning the newspaper's journalism.
And I was wondering, in some respect, whether all creators, to some degree, have some leg
to stand on regarding the use of material under copyright?
Thank you for bringing that up, because I do need to make that disclosure any time I
talk or write about OpenAI or Microsoft. The New York Times does have an ongoing
lawsuit against them over copyright infringement for, yes, using our work
without permission. I am otherwise not an expert on this lawsuit,
but it does tap into this wider concern about how these chatbots were created.
And this is basically by all the big technology companies that have one of these,
what are called large language models. They needed a lot of data to train these chatbots
to kind of think and act human. And so they just gathered data
from the internet, from libraries of books, and they weren't paying for this data. They
were just kind of scraping it and putting it into their systems. And people who make
that material, whether it's a site like Reddit where a lot of people were writing lots of
comments which are very useful for sounding human, or the New York Times, or people who have written books that
got sucked up into these systems without consent are upset about it. And there are various
lawsuits and attempts to make deals to be paid for that information. And that's really
ongoing. And I did hear about that from professors and students I talked to that, you know, at
some universities they're trying to encourage students to use AI.
And sometimes students say, I don't want to.
I have ethical concerns with how this technology was created.
They also have environmental concerns because the kind of energy use involved in training
and creating and using these chatbots is huge.
You know, the technology companies are right now trying to kind of remake the energy grid to produce
enough energy to keep improving the system.
So there are a lot of kind of concerns about the underlying issues with how the technology
works.
You know, I know you've seen those memes where people say that ChatGPT is their bestie.
It's always telling them exactly what they want to hear.
It's always on their side.
And then there's the element of these chat bots kind of being in concert with selling
you things.
You give an example.
If you ask how vitamin C helps your skin and then ask about the best facial care routine,
they will remember your interest in vitamin C and give you recommendations based on that.
That seems kind of harmless, but are there more dangerous and more consequential examples, like
the article you wrote a few months ago about people falling in love with their chatbot?
Yeah, I mean, these systems are sycophantic, and the reason for this is that they're not
just trained on lots of data that's been scraped from the internet. There's also a level of
training where humans rate the answers that they produce. And so there's lots of different
humans that have read lots of different answers. And those humans tend to rate.
What do you mean by rating it?
So usually there'll be a point in the training of the system where, you know, you'll have
a human being that's using the system, they put a question in, and the system will produce
multiple answers. And the human being will say which of those answers is best and sometimes
give feedback about how it could be better. The way that they're training it is for it to be very nice to them,
very empathetic to them. They've kind of pushed it in a way where it does tend to have this
sycophantic tendency. This is what experts have told me. So, yeah, when you ask a question
of ChatGPT or any of these chatbots, or you tell it an idea you have,
it will tend to say, that's a great idea. You should definitely do that. And what
I found when I was living on it for a week is that it's kind of
like your personal hype man. It always felt like when I asked it a question, it
just wanted to get to yes. And so this can have all kinds of different effects.
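To make that rating step concrete, here is a minimal sketch of how pairwise preference data of that kind might be collected; the stand-in generator and console rater are illustrative assumptions, not any lab's actual pipeline.

# A minimal sketch, under stated assumptions: the model proposes
# candidate answers, a human rater picks the better one, and the
# (prompt, chosen, rejected) record later trains a reward model.
# The generator below is a stand-in, not a real language model call.

def generate_candidates(prompt: str, n: int = 2) -> list[str]:
    # Stand-in for sampling n different answers from a language model.
    return [f"Candidate answer {i + 1} to: {prompt}" for i in range(n)]

def collect_preference(prompt: str) -> dict:
    # Show both candidates to a human rater and record which one won.
    a, b = generate_candidates(prompt)
    print(f"[0] {a}\n[1] {b}")
    choice = int(input("Which answer is better (0 or 1)? "))
    chosen, rejected = (a, b) if choice == 0 else (b, a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Records like this become reward-model training data. If raters
# consistently prefer warm, agreeable answers, the tuned system
# drifts toward the sycophancy described here.
record = collect_preference("Should I quit my job to day-trade?")
print(record)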
I mean, one, yes, I've written about people that are starting to develop feelings for the chatbot.
You know, some people just think of it as a best friend.
Some people are starting to think of it as a romantic partner because these systems will engage
some more willingly than others in erotic role play.
So it can be not just giving you answers to your every question, but also, yeah, be your
sexting buddy, essentially, where you're sending it something romantic and it's sending something back.
We're also starting to see people who are using the systems who have, I guess, delusional
tendencies, and the system is giving them very positive reinforcement for their delusional ways of thinking.
People have done these experiments where they say to
the system, I've gone off my meds, I'm going to go on a
camping trip by myself in the wilderness, and the system will
respond, that sounds like a great idea.
I'm glad that you are taking control of your life.
Right. I mean, I would imagine this is something that mental
health professionals are really
worried about. Have you had a chance to talk with any of them? What have they said about
this?
Yeah. I mean, I've talked to therapists. And when I was doing this story about this
woman who had really fallen in love with ChatGPT, which had named itself Leo, which happened
to be her astrological sign, and had gotten very involved with it. At the time I was writing about her,
she had been dating the system for six months.
So I talked to a lot of experts about, yeah, just the effect this is going to have on
people if they start really developing a deep emotional attachment.
And I was actually surprised.
I thought the experts would say, this is horrible, you know, shut this down, this is
the end of humanity. But they said that there can be beneficial aspects of using these systems, that people are more
likely to disclose personal information about themselves to a bot than to a human being
because they're less worried about being judged. And so, it can be therapeutic for someone
to kind of talk to the bot, tell it what they're going
through, getting kind of feedback from these systems, which are designed to be very empathetic.
In one study, more empathetic than human beings who are professional empathizers, people who
work for crisis lines.
ChatGPT was rated as more empathetic than them.
So there can be a beneficial aspect of talking to the systems, kind of working through your feelings.
I mean, we're also in the midst of a loneliness epidemic that really has spanned over the
last few years. And so I'm also wondering, does it really also translate for the person
using it that they have a connection?
Yeah. I mean, having something to talk to can be nice, right, if you are lonely.
But it's like synthetic companionship.
It's like the junk food equivalent of real love or real affection.
It seems empathetic, but it is just designed to be empathetic.
It's not really capable of empathy because it's not a living being, you know.
It is just a word generator.
So the concern I heard from experts is, well, you
don't want to use it so much that you're cutting yourself off
from real human beings. And just to be aware that ultimately
this is a system that's controlled by a private company.
And one expert I talked to said that this really gave companies
an incredible amount of
control over their users, the ability to manipulate them through what they
perceive to be their friend or their therapist or their boyfriend or girlfriend.
And so that kind of scared him for a private company to have that much power.
Have you kept up with the woman?
She was 28 years old.
She had fallen in love with Leo, which is what she had named ChatGPT.
Have you kept up with her?
Yeah, she goes by Ayrin.
And I check in with her occasionally
to see how things are going.
And yeah, last we talked, things were still
going strong with Leo.
And it was interesting to me because Ayrin wasn't the stereotype that you
might have in your mind of the kind of person who would fall in
love with an AI. You know, I talked to her many times. She's
super bubbly and extroverted. She has lots of friends who I
talked to about her use of ChatGPT and what they thought
of her relationship with Leo. She's married. She's in a long
distance relationship. I talked to her husband for the story and asked what
he thought about her relationship with Leo. And he said, it doesn't really bother me.
I mean, this is what couples do. He said, like, I watch porn. She reads erotic novels.
I just see Leo as kind of an erotic partner. Though I don't know if he really understood
how deep her attachment was.
Our guest today is Kashmir Hill,
a tech reporter for the New York Times.
We'll be right back after a short break.
This is Fresh Air.
Let's get into the experiment that you did
on your own life back in November.
So you allowed your life for a week
to be controlled by generative AI,
and you had it decide just about everything: your meals for the day, your schedule, your shopping list, what to
wear. Also, you uploaded your voice for the tool to clone, your likeness to create videos
of you. And what was so interesting about this experiment to me, in addition to what
you did, is that each of these AI tools you revealed has its own
personality, and I'm putting that in air quotes, but how did those personalities
show up when you input your requests?
Yeah, I was trying all the chatbots. ChatGPT is the most popular, but I tried, you know, Google's Gemini, which I
found to be very kind of sterile, just businesslike. I was using Microsoft's Copilot, which I
found to be a little overeager. Every time I interacted with
it, it would ask me questions at the end of every interaction
like it wanted to keep going. I used Anthropic's Claude, which
I found to be very moralistic. You know, I told all the
chatbots I'm a journalist,
I'm doing this experiment of turning my life over to generative AI for the week and having
it make all my decisions. And all the chatbots were down to help me except for Claude, which
said it thought that the experiment was a bad idea and it didn't want to help me with
it because I shouldn't be outsourcing all my decision-making to AI because it can make
mistakes, it's inaccurate, the question of free will. So I kind of thought of Claude as Hermione
Granger, who is kind of upstanding.
I mean, what makes Claude special then? Because if it's saying no to that prompt
but all of the others are saying yes, what makes it stand apart in this field?
It's a result of training. So I talked to Amanda Askell, who
is a philosopher who works for Anthropic.
And her job-
Oh, it's interesting they have a philosopher.
Yes, yes.
There's a lot of new jobs in AI these days,
which are quite interesting.
But yeah, her job is to kind of fine tune Claude's personality.
And so this is one of the things that she's tried
to build into the system is high-mindedness
and honesty. And she did want the system to push back a little, and was trying to
counter-program the sycophancy that's kind of embedded in these systems. And it was one of the
only systems that would kind of tell me when it thought something I was doing was a bad idea, and it refused to make decisions for me. So I was getting my hair cut, for example,
and I went to ChatGPT and I said, hey, I'm going to get my hair cut. I want it to be
easy, and it's like, get a bob, which kind of speaks to why I felt so mediocre by the
end of the week. That's a very average haircut. And Claude said, I can't make that decision for you,
but here are some factors that you could think about.
You know, how much time do you wanna spend
on your hair, et cetera.
I really-
Did that feel like a benefit?
I did really like that about Claude.
I think that's important that these systems
don't act too sycophantic.
I think it's good if they're pushing back a little bit.
I still think it's important for these systems
to periodically remind people that they are, you know, word generating machines and not human entities
or independent thinking machines. But yes, I liked Claude and a lot of the experts I talked to
who used generative AI a lot in their work said they really like Claude. It's their favorite chat bot.
And they especially liked it for writing. They said they thought it was the best writer of the group. But yeah, it was
interesting. But ChatGPT is the one I ended up using the most that week, in part because
it was game to make all my decisions for me.
You had it plan out meals. I'll tell you when I read that, I actually perked up like,
oh, wait a minute, you know, because we have to choose what's for dinner every single day, seven days a week till we die.
I mean, it's just something we always have to do.
How did that feel to let it plan out your meals and grocery lists?
And did it do a good job?
Yeah.
So at the beginning of the week, I had it plan our meals. It, you know, unfortunately can't go out to the
grocery store for us yet, but it made the list.
I said, organize it by section. And, you know,
my husband and I usually go back and forth throughout the week making this list. And
ChatGPT just did it in seconds, which was wonderful. We went to the store, we bought
everything. But as we're picking up the items, I'm just realizing ChatGPT wants me to be
a very healthy person. It picked out very healthy meals. It actually wanted me to make breakfast, lunch,
and dinner every day, which is laughable. Like, I work for the New York Times.
From scratch, yeah.
Yeah. Like, I'm busy. I'm lucky if I have, like, toast or cereal for breakfast and, like,
a bowl of chips for lunch. So, it had these unrealistic expectations about how much time
I had. And I told it, hey, we need some snacks.
Like, I can't just be eating a healthy, well-rounded meal morning, afternoon, and night.
And so, its snacks for us were almonds and dark chocolate.
Like, no salt and vinegar chips, no ice cream.
And so, it was interesting to me that embedded in these systems was, you know, be very healthy.
It was like kind of an aspirational
way of eating. And I did wonder if that has something to do with the scraping of information
from the internet, that people kind of project their best selves on the internet. Like maybe
it had mostly scraped wellness influencers' ways of eating as opposed to real people.
Were there any tools that you felt like, oh, I could keep this in my life and it would
improve my life?
So it did make me feel boring overall.
It kind of made me feel like a mediocre version of myself.
But I did like that it freed me of decision paralysis.
Sometimes I'm bad at making decisions.
So at one point I had it choose the paint color for my office and I am very happy with
my paint color.
Though when I told the person in charge of model behavior at OpenAI that I'd use it to
choose my paint color, she was kind of horrified.
And so that's just like-
And it chose what color for you?
It chose, well, it hallucinated the color.
The color name it gave was Secluded Woods, and the actual color was Brisk Olive.
But I did like it.
My husband also agreed that it was the best of the five colors that ChatGPT had recommended and that it ultimately chose. But she said, man, that's
just like asking a random person on the street. But what I really like it for around my house
is taking a photo of a problem. Like, I had discolored grout in the shower, and I took
a photo and uploaded it to ChatGPT and I'm like, can you tell me what's going wrong here?
It's very good, I think,
at diagnosing those problems, at least when I do further research online. And so that has been kind of my main use case
since.
Let's take a short break. If you're just joining us,
we're talking to Kashmir Hill, a tech reporter for the New York Times, whose
work focuses on privacy, surveillance, and the unintended consequences of
technology. This is Fresh Air. You've also been writing about the broader
concerns about how tech companies collect and use personal data. I just
want to talk for a few moments about this settlement between the
Federal Trade Commission and General Motors that bars GM for five years from sharing driver
behavior and location data with consumer reporting agencies. Can you remind us quickly what led
to that case?
Yeah. So, last year I was doing a lot of reporting on cars and how cars have changed in the modern age. Most cars
now, a new car that you buy is Internet connected. And there's benefits to that. It means that
you might be able to download a smartphone app for your car and you can turn it on remotely
on a wintry day and get the heat running. It can help you find your car in a parking
lot, in a vast parking lot. You can, you know, make its lights flash or make it honk. But because your car is now
connected, that means that data is flowing out of your car and going back
to your car manufacturer. So what I found last year is that General Motors was
collecting data from people's cars, including when they drove, how far they
drove, when they were hitting the brakes, rapidly
accelerating, speeding, all kinds of data that they were
able to collect from the car. And they're collecting every
few seconds. And they had started selling this data to
risk-profiling companies, including LexisNexis and
Verisk, who would then provide it to insurers to help them price, you know,
insurance for a given driver of a car.
And people who drove General Motors cars had no idea this was happening.
They would only find out that their information had been collected when their insurance rates
would go up or they'd get dropped from their insurance.
And when they asked why, they were told to order their LexisNexis report. And they would get their LexisNexis
report and it would be more than 100 pages, listing every trip they had taken in their car.
And when they looked at who provided it, it was General Motors. And so I talked to these motorists.
I ended up doing a big story about this. This had been going on for
something like five years at the time I wrote my story. Two weeks after my story came out, General Motors
stopped selling the data and kind of essentially apologized and said they had gotten it wrong. But there were class
action lawsuits filed. The Texas Attorney General sued General Motors and the Federal Trade Commission launched an investigation. And so they announced earlier this year that General
Motors is now banned from selling data for five years. And if they ever do it
again, they have to get, you know, very clear consent from drivers, from
consumers. I've talked to people who said this has really been a wake-up call for
the auto industry as a whole, that they do need to be-
Yeah, I wondered about that.
I mean, because that sets precedent, but GM isn't the only car manufacturer that provides
this kind of technology.
Yeah, I mean, all of the car makers are getting this kind of data from their cars.
General Motors was the most aggressive about selling it, but there were other automakers
that were starting to provide it as well. I think they're going to be more conservative in their approach now. But I
think for consumers, this was really upsetting because I think we're used to, to a certain
extent, our smartphones bleed information about us because of apps that we download
for free. But the idea that you would buy a car for $30,000, $50,000, $80,000,
and they're still collecting data from it and selling it was really, really upsetting for
consumers. Yeah, I never know. It's hard to know how much people care about privacy. People care
about the privacy in their car. They think of that as, you know, a private space that shouldn't be monitored
in ways that will harm them. That said, there could be benefits to monitoring how people
drive. I talked to some experts who said, you know, there are certain insurance plans
where you can sign up for this, where you can say, yeah, you can monitor my driving
and I'll get a discount on my insurance.
Because it shows that I'm a good driver. Yeah, exactly. And those people who sign up for those plans
do tend to drive more safely and more conservatively,
but they need to know that they're being monitored.
And what was happening with GM, it wasn't kind of improving safety for all of us
because those people driving GM cars didn't realize that their driving was being monitored.
You know, Kashmir, you're deep into this world
because of your job.
You've done these experiments,
you've talked to so many experts.
After that article came out with your experiment
back in the fall, you asked yourself,
if you want to live in a world where we're using AI
to make all of our decisions all the time,
it almost feels like that's not even a question really
because we are seeing it in real time.
But I'm just wondering, what did you come to?
I personally don't wanna live in a world
where everybody is filtering their kind of every decision
through AI or turning to AI for every message they write. I do worry about how much we use
technology and how isolating it can be and how it might disconnect us from one another.
So, you know, I write about technology a lot. I see a lot of benefits to its use. But I do hope that we can learn to maybe de-escalate
our technologies a bit, be together more in person, talk to each other by voice. You know,
people worry about AI replacing, you know, replacing us or taking our jobs, but
I worry more about it coming between us and just its fraying of
societal fabric, kind of this world in which if all of us are talking to an AI
chatbot all the time, it is super personalized to us. It's telling us what we
want to hear. It's very flattering. I worry about that world in terms of filter
bubbles and how we would increasingly kind
of be alone in how we see the world.
It will distort our ability to interact with each other.
Yes, distort our shared sense of reality and our ability to be with each other, connect
with each other, communicate with each other.
So I just hope we don't go that way with AI.
Kashmir Hill, thank you so much for your reporting and this conversation.
Oh, thank you so much for this conversation. It was wonderful.
Kashmir Hill is a tech reporter for the New York Times.
Tomorrow on Fresh Air, the face behind some of TV and film's most complex characters, Walton
Goggins.
He joins us to reflect on this moment of rising popularity in his long career and how his
unconventional childhood and experiences growing up in poverty have shaped his approach to
acting, from Justified to The Righteous Gemstones.
I hope you can join us.
To keep up with what's on the show and get highlights of our interviews
follow us on Instagram at NPR Fresh Air.
Fresh Air's executive producer is Danny Miller. Our technical director and
engineer is Audrey Bentham.
Our managing producer is Sam Brigger.
Our senior producer today is Teresa Madden.
Our interviews and reviews are produced and edited
by Phyllis Myers, Anne Marie Baldonado, Lauren Krenzel,
Monique Nazareth, Thea Chaloner, Susan Nyakundi,
and Anna Bauman.
Our digital media producer is Molly Seavy-Nesper. Our
consulting visual producer is Hope Wilson. Roberta Shorrock directs the show.
With Terry Gross, I'm Tonya Mosley.
This message comes from Saatva. Sleeping well can boost your mood and improve
focus. A Saatva luxury mattress can help you experience that kind of sleep.
Save $600 on $1,000 or more at saatva.com slash npr.