Up First from NPR - Higher Education’s AI Problem
Episode Date: November 23, 2025

Across the country, colleges and universities are struggling to figure out how to incorporate AI into the classroom. ChatGPT debuted almost exactly three years ago. And very quickly, students began to see its potential as a study buddy, an immense research tool and, for some, a way to cheat the system.

This week on The Sunday Story we look at the rapid growth of AI in higher ed and consider what it means for the future of teaching and learning.

Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Transcript
I'm Aisha Roscoe, and this is The Sunday Story, where we go beyond the news to bring you one big story.
It's been three years since ChatGPT, the artificial intelligence chatbot, was released to the public.
Since then, generative AI has infiltrated so many aspects of our lives, including higher education.
If you're not using AI to make your college experience easier, you are falling behind.
I told you guys not to be using ChatGPT, and now we have students,
literally getting kicked out of university for getting caught.
Today, we're going to explore how AI is changing the college experience
and what that means for the professors and students trying to navigate it all.
We'll be right back.
This message comes from Wise, the app for using money around the globe.
When you manage your money with Wise, you'll always get the mid-market exchange rate with no hidden fees.
Join millions of customers and visit Wise.
T's and C's apply.
Support for this podcast and the following message come from Dignity Memorial.
When you think about the people you love, it's not the big things you miss the most.
It's the details.
What memories will your loved ones cherish when you're gone?
At Dignity Memorial, the details aren't just little things, they're everything.
They help families create meaningful celebrations of life with professionalism and compassion.
To find a provider near you, visit DignityMemorial.com.
Our Common Nature is a musical journey with Yo-Yo Ma and me, Anna Gonzalez, through this complicated country.
We go into caves, onto boats, and up mountain trails to meet people, hear their stories, and of course, play some music, all to reconnect to nature.
Listen to Our Common Nature from WNYC, wherever you get podcasts.
We're back with The Sunday Story. Joining me is education reporter Lee Gaines, who covers
AI and higher ed. Lee, welcome to the show. Hi, Aisha. Thank you so much for having me.
So ChatGPT is everywhere these days. How do we get to this point where it is kind of like
so integrated in people's lives? Yeah, so I want to start today by taking us back in time. So we're
going to go way back to the spring of 2023 when ChatGPT was still a brand new technology. So I mean,
That's not long ago. I mean, it's kind of hard to wrap your head around, like, how it's become such a big deal in such a short amount of time.
Yeah, I know. It's crazy how much it's already changed the world and the economy. But, you know, this technology is a huge deal.
People have compared it to the invention of the internet or even as big a change as the industrial revolution.
So anyway, back in 2023, there's this student, Max Moundis.
He's a senior at Vanderbilt University in Nashville, Tennessee, and he's a computer science major.
So after ChatGPT was released, Moundis starts freaking out about the power of this new invention, generative AI.
I couldn't focus because I was experimenting with ChatGPT and a lot of these tools, and I could watch them generate code in any coding language at the time, and generate it
extremely quickly, like nearly instantaneously.
Moundis says the AI-generated code wasn't perfect, but neither was the code he could write at the time,
and he knew this technology was only going to get better.
And I started spiraling, and I was just thinking, you know, did everything I just learn
over the past four years, is that now obsolete?
You know, is the four years of my time plus the tuition that I paid, you know,
should I put that towards something else?
And I was basically having perpetual panic attacks and catastrophizing the situation.
Yeah, I mean, it's understandable, right?
Like how that could be really stressful.
It does seem like computer science and, like, coding.
That might not be the best major right now.
Yeah, totally.
And Moundis was feeling that so much.
And he was so stressed out, actually, that he sought advice from one of his professors.
I remember approaching him and kind of trying to express to him that I just felt sort of defeated by this.
The professor he talked to at Vanderbilt was Dan Arena.
Arena told me a lot of his students were deeply worried about AI and what it would mean for their careers.
Most of my students were getting ready to enter the job market.
So a lot of them were starting to panic because they're like, well, this is all new technology.
We never learned anything about this while we were in undergrad.
Now we're going out there.
now do we have to worry about our jobs? Are we going to still get jobs? And at that time,
I felt like my role specifically was to try to say, like, let's just try to look at this
accurately and just really see what's happening. So Arena gets this kind of quirky idea. He decides
he's going to give the same final exam he's giving students to ChatGPT. So this way Arena could get
a sense of whether the chatbot could actually replace an entry-level worker in the computer
science field. So I created a fictitious student named Glenn Peter Thompson, which was GPT, right,
the initials. Okay. So how did Glenn Peter Thompson do? Pretty bad, actually. Much to my surprise,
ChatGPT actually did the lowest in one of my sections. And then the other two sections,
it did marginally better, but it was still well below the mean. So I shared that with my students.
And I just said, like, hey, this is where we're at right now.
You are significantly more prepared than ChatGPT is to take on a role in computer science and industry.
Well, you know, that's a big thing.
Like, I'm sure that must have made the students feel really good that right now the computers haven't replaced them.
Oh, 100%.
Yeah, that's what Arena told me.
He said that this really calmed down his students, except that that was
almost three years ago, right? And ChatGPT has improved since then. So this past spring,
Arena told me that he repeated his experiment with ChatGPT. And this time I made up a name,
which was Gwen Piper Thompson. I pretended that this was Glenn's younger sister now at Vanderbilt.
And this time around, ChatGPT, or Gwen, scored in the low 80s. So better, but still not great.
And Arena says his students were again relieved.
But Aisha, even in just a couple years, ChatGPT improved a lot on that test.
So it's pretty easy to see where things are headed.
And I asked Arena, what happens when ChatGPT can ace the exam?
So at the point that it really does catch up to what my students need to be able to do,
then I need to go back to the drawing board.
And I need to say like, okay, well, how can I then incorporate this technology to make them even better
and more productive than they were previously without this technology.
So Arena sounds pretty calm about this, right?
But I wouldn't say he represents everyone in academia.
So Arena's kind of an outlier here?
Well, I definitely think it's safe to say that concerns about how AI is going to impact the job market are pretty valid.
I talked to Tanya Tetlow about this.
She's the president of Fordham University, and she's
very concerned about how AI will impact jobs for computer science majors.
We worry very much about the job market they will inherit quite soon.
Coding jobs in computer science, for example, have started to disappear.
And our applications for computer science majors went down by a third last year.
And that hit will start to also apply to things like accountants or junior lawyers in law firms,
that the tasks that are the most technical are quickly being supplanted by technology.
I mean, if the job market is changing this fast, how can or how should universities adapt?
Well, Tetlow told me she believes universities need to make sure students get skills AI can't replicate,
like critical thinking, emotional intelligence, and ethical
judgment. What employers will increasingly need from them is that proficiency in technology,
for sure, but also the most human of skills that won't get replaced by machines.
Okay, but is she saying that there will still be a role for computer science majors as long as
they have these skills? Yes, definitely. So Tetlow thinks there's still going to be a demand for people
with these technical skills, but that alone won't be enough.
They'll need to have these other skills too.
So in other words, they need to understand what the AI is doing and work with humans to oversee it.
So we've heard from the professors and the administrators, but what about the students?
Like, do you have a sense of how many college students in general are using AI?
I mean, the short answer is a lot.
There was this recent survey done of about 1,000 undergrads.
It was conducted by Inside Higher Ed and Generation Lab.
They found that a huge amount of students, 85% said they used generative AI for coursework in the past year.
Okay, 85%.
I mean, that's a huge majority of students.
Yeah, it is.
And about half of those surveyed said they used it in ways that you could argue support their learning, like brainstorming ideas,
having AI ask them questions like a tutor would, and using it to study for tests. Also, 42% said
they're using it like an advanced search engine. Now, a smaller percentage, about a quarter of
students said they used it to complete their assignments with 19% saying they had used AI to write
full essays for them. Oh, I mean, that's not good. That's not good. Because that's what? So that's like a quarter
of students, almost a quarter of students. So close to 20 percent, they're basically cheating,
right? They're basically cheating. Yeah, I think a lot of people would call it that. I think that
some of these students might describe it as working smarter rather than harder, but yeah.
Well, I mean, look, yes, you know, there are ways to work smarter, not harder. You could, like,
take some money out of the register, but it's still stealing. You know what I'm saying?
Oh yeah, 100%. I think professors would agree with that. And I talked to this recent college grad, Aisha Tarana, about this. Tarana interviewed 10 students at her school, the University of Minnesota Twin Cities, for a research project. And she asked these students how they were using AI.
One of the interviewees said something that kind of stuck out to me. She had said something along the lines of, it makes it easier to do better.
and that was kind of the theme that a lot of people were saying to me
was that it just makes it easier to be better and do better
and get to my goals faster.
And I was like, hmm, that's interesting.
I mean, I can see that.
And I have friends and stuff who talk about using AI in this way.
Like, overall, for the students, it sounds like a mixed bag.
Like, you have some students that are using AI as a tool,
as a study buddy, as an editor,
as a brainstorming partner, and then some people who are using it as kind of an end in and of
itself, right? So using it to do all of the work. Yeah, that's right. And so, I mean,
how are professors responding to this? It's complicated. And it seems like the academic response
really depends on who you talk to. So three years into the generative AI
revolution, there is no broad consensus. I've spoken to professors that are banning it outright in
their classes and others who are embracing it. Okay, so tell me about some of these, you know,
perspectives that you're hearing. Yeah, let's start with Leslie Clement. She's a professor at
Johnson C. Smith University, a historically black university in Charlotte, North Carolina.
It's absolutely changed how I teach. It's expanded how I think and how
I learn.
Clement teaches English, Spanish, and African studies.
She told me her goal has always been to foster critical, ethical, and inclusive thinking
in her students.
And at first, she was skeptical about AI, but now she says it's her mission to ensure her
students apply those same skills to how they use AI.
We encourage them to use it because we know they're going to use it, but to use it in a
responsible way.
So what does that look like in practice?
So Clement says she allows her students to use AI to create outlines for papers and to find
sources for their research. She says she also teaches her students to fact-check what AI gives
them because it does make mistakes. And if students use AI to refine or edit their papers,
she asks them to compare the original draft to the AI version and reflect on the changes it made.
That's really interesting. So it's like she's accepting that the AI
exists, but she's also making them kind of interrogate the use of it and what they are
getting from using it. Yeah, again, she's trying to foster those critical thinking skills
alongside the AI. And Clement also co-created a new course with two other professors
called AI and the African Diaspora. Clement says they also introduced students to a large
language model called Latimer.ai, which I had actually never heard of before.
So it's kind of considered the Black ChatGPT. So it's supposed to provide more information
about Black history and Black experiences than ChatGPT does. So it sounds like Leslie Clement
is like really going all in. Like she's embracing the possibilities of AI and using it to kind of
enhance her curriculum and her teaching. But I'm guessing there are a lot of professors who may not
be on the same page as her. You would definitely be correct. And I talked to someone who isn't,
who thinks AI may actually cause serious harm. If we're not careful, the presence of AI can
poison our relationships with our students. That's coming up. Stay with us.
This message comes from TED Talks Daily, a podcast from TED, bringing you new ideas every day through TED Talks and conversations.
Learn about the ideas shaping humanity, from connecting with your inner monologue to finding out if aliens exist.
Listen to TED Talks Daily.
Do you have a question you just don't feel like Googling?
We're Ian and Mike, hosts of How to Do Everything.
We can help answer all of your most pressing questions, like, can I cook lasagna in my dishwasher?
Where do you park your blimp?
Or the timeless classic, what's that smell?
I, ooh.
Listen to the How to Do Everything podcast on the NPR app or wherever you get your podcasts.
Wildcard is where big name interviews feel like conversations with a friend.
I mean, I can't believe how lucky I've been.
You didn't say goodbye the right way, McConaughey.
She told me, I don't think you're Princeton material.
I'm nothing if not open, I guess.
I'm Rachel Martin.
watch or listen to Wildcard on the NPR app, YouTube, or wherever you get your podcasts.
We're back with The Sunday Story, talking to journalist Lee Gaines about AI and higher ed.
So, Lee, you told me about someone who is skeptical about the benefits of AI.
So who was that?
Yeah, now I want to introduce you to Dan Cryer.
He's an English professor at Johnson County Community College in Kansas.
I think that on a scale of zero to 10, 10 being AI tools are extremely beneficial to humanity's education, and zero being not only are they not beneficial, but they're actively harmful.
I'm probably in like a one or a two.
So Cryer's biggest concern is that AI tools are going to act as a shortcut that cheat students out of the education they signed up for.
He told me that part of the problem is students sometimes think the goal
of education is the final paper or the grade or the degree. I try to convince students that the product
is not where it's at. Like, we don't need more research papers written by college students.
What we need is students to go through the process of writing research papers so that they can
become better thinkers so that they can put together a cogent argument so they can differentiate
between a good source and a bad source, so they can write a strong paragraph. Yeah,
those critical thinking skills that we talked about earlier.
Exactly.
And Cryer thinks that by using AI, students are robbing themselves of the benefits of that process.
He says it's like if we went to the gym and we thought that the point was for the weights
to move up and down rather than to build muscle.
I mean, that analogy really makes sense to me because you have to like practice actually doing
the uncomfortable thing of
sitting down and looking at a blank screen and making something come out of nothing.
Totally. And I mean, I think you and I really understand that as journalists, like it took a
lot of work to get to a point where we feel confident with our writing. In Cryer's mind,
using AI is basically like bringing a forklift to the gym. And to add to that, many colleges,
including Cryer's, often provide AI tools like Microsoft's Copilot to students for free.
And Cryer thinks this puts students in a tough spot because it becomes super easy for them to use it to do all their work for them.
And then it further becomes their responsibility to not cross that line, even as the tool is kind of beckoning them over it.
I mean, yeah, that would definitely be pretty irresistible.
But do we know what the downstream impacts of that are?
Like, is there research that actually shows that AI harms critical thinking skills?
Well, I would say it's still too early to know what the long-term impacts might be,
but there is some evidence that increased reliance on AI tools does impact critical thinking
skills.
So there's a study from MIT that recorded the brain activity of people using AI to write essays,
while another group used Google Search and a third group used nothing but their own
brains. And they found that of the three groups, the people who used AI had lower neural
connectivity and engagement. I mean, that don't sound good. That doesn't sound like a good sign for
what AI may be doing to our brains. Yeah, it really doesn't. And it's something that students are
also concerned about. So remember Aisha Tarana, who interviewed 10 students for a research project at
the University of Minnesota? Yeah, yeah. Well, she told me this fear of AI came up a lot during
her interviews. The biggest concern that I found was the conversation around it hindering
critical thinking because we also live in a very, like, technology surrounding world. Like,
we're constantly on Instagram, Snapchat, or like TikTok or some form of social media.
There's always noise in our brains that sometimes, like, you don't have room
to think, I feel like sometimes.
So even when you want to critically think, well, here's another outlet for you to not critically
think. Boom.
It's convenient.
And now Tarana agrees with Dan Cryer in Kansas.
She doesn't think AI has much value for higher education and, in fact, thinks it poses more
potential harm than benefit.
And how are professors dealing with students who are using AI tools to do their work
for them. It's a huge issue. And from my reporting, I would say that it's creating a real
crisis of trust in classrooms. So part of the problem is the technology that's supposed to help
professors catch AI use, known as AI detection tools, are unreliable. They sometimes label
work written by humans as being AI-generated and vice versa. Also, there are now tools people can
use to humanize their writing to bypass AI detection software. It's a mess, honestly. But to be
clear, some students are using AI in ways that a lot of people would probably describe as unethical.
So just a quick search on TikTok, and I found a lot of examples of people using AI or talking about
using AI to do all the work for them and other people giving advice for how to do that.
I'm a senior at Stanford, and every single one of my essays has been written by AI.
Friendly reminder that colleges have literally no way of knowing if you use AI to write your
college essays.
If you're not using AI in college, you're cooked.
Yes, it doesn't matter.
You could be a 4.0 GPA student or a 1.6 GPA student.
All my smart friends use it.
All my friends who aren't smart use it too.
So you're talking about the cheaters.
Yeah, that is who I'm talking about.
And here's Leslie Clement again, the professor at Johnson C.
Smith University in North Carolina.
We have students who continue to turn in full papers where they just put the criteria for
the paper into ChatGPT, and they give us those exact papers.
So to deal with this, Clement is trying to change how she teaches.
She assigns fewer papers and more in-class collaborative projects.
She also assigns way less at-home reading now.
I have students read in class because I know they're just going to go and ask for a summary.
So we actually read, you know, different excerpts, and then maybe we'll discuss it in class.
And then we come up with ideas together.
This is really, like, requiring, like, a different way of teaching, right?
100%.
Dan Cryer, the community college professor, has also changed the way he teaches in response to AI.
He told me he's drastically reduced the amount of online teaching he does,
and he has students write in class more often now.
He says that this is emblematic of a bigger issue.
AI has created more work for educators.
And part of that work is not letting suspicions around AI use poison relationships with students.
If you are always thinking of your students' work from the point of policing them and keeping them from using these tools, then that trust relationship that is so key between students
and teachers has really broken down.
So it seems like kind of stepping back for a moment,
like there are all of these seemingly valid criticisms of this technology
and how policing of this technology could erode the trust
between professors and students and really hurt the relationship.
But then on the other hand, we also heard from Tanya Tetlow,
the president of Fordham University,
And she said earlier that universities are facing a real urgency here because the job market is changing.
And AI is going to be needed to deal with the economy of tomorrow.
So how do universities deal with all of those contradictions?
That is like the billion dollar question.
But I will say that Tetlow says doing nothing isn't an option.
She also doesn't think universities should just embrace AI uncritically.
She says higher ed's role is to model the difference between responsible and irresponsible use.
I think that where we use AI as a tool to do important and good work better, it is responsible,
where we cede our judgment and responsibilities to technology without
constant monitoring and checking of its accuracy, we have violated our own duties
and responsibilities. But we need to have it as a tool always within our control, not to give
over our most important functions to a soulless machine that has no conscience. So Tetlow is saying
AI as a tool under human control, responsible. AI as a replacement
for human judgment, irresponsible. But as we've heard, there's disagreement within higher
ed about whether professors should embrace or reject this technology. So where does this leave
students? Well, I want to talk about what happened to Max Moundis. He's the computer science
major from Vanderbilt, who was terrified by ChatGPT when it was first released. So yeah, where
is he now? He's actually working as an AI research engineer for Vanderbilt
University. So he's doing AI research and building AI tools for the university.
Okay. So he basically like became one with the machines. Or I mean, he embraced the technology
that was worrying him. I love that. Become one with the machine. A hundred percent. I would say
he just leaned right into it. I had this moment where everything just clicked. And I realized that,
that my ability to see the capability of this technology and the different ways that it could
augment traditional work, that perspective itself was valuable and that my computer science
knowledge wasn't obsolete. It was actually what enabled me to understand how to leverage
this technology effectively. I guess my last question to you then is with all of these
different viewpoints, where does that leave us as a society?
I mean, to me, this looks like one massive experiment on higher ed that no one consented to.
You know, ChatGPT didn't come with a guidebook when it was released.
It was just put into the world.
And now it's everyone's challenge to deal with.
And the reality is AI isn't going anywhere.
Higher education has to adapt to it.
But we really don't understand the full risks or the benefits to students yet.
And that research is actually happening in real time on an entire generation
of students. And if higher ed gets this right, maybe universities and colleges can supercharge
learning using AI and train students for jobs where they'll be overseeing the use of this
technology. And if we get it wrong, maybe students won't develop critical thinking skills,
and they'll be in this unforgiving job market eroded by AI. Maybe higher ed as a whole will be
devalued. The stakes really couldn't be higher. This really does seem
like it will be a key question for our time.
Lee, thank you so much for all of this reporting.
Thank you so much, Aisha, for having me.
That was education reporter Lee Gaines.
This reporting was supported by a grant from the Tarbell Center for AI Journalism.
And if you want to hear more about AI and education,
our friends over at the TED Radio Hour podcast have a series called
Are the Kids All Right?
That looks at the use of AI in elementary school classrooms.
This episode of The Sunday Story was produced by Andrew Mambo.
The editor was Jenny Schmidt.
It was engineered by Robert Rodriguez.
Fact-checking by Sassil Davis Vasquez.
The rest of the Sunday Story team includes Justine Yan and Leanna Simstrom.
Irene Noguchi is our executive producer.
In this episode, you heard social media clips from TikTok users @advicewithshiv, @studyfetch_alex, @collegeguy2, @ivy_roadmap, and @studying.with.car.
I'm Aisha Roscoe. Up First is back tomorrow with all the news you need to start your week. Until then, have a great rest of your weekend.
The StoryCorps podcast is celebrating the 10th anniversary of our annual tradition,
the Great Thanksgiving Listen, where young people interview an important elder in their lives.
I'm speaking with my grandmother.
My dad, my tutu, which means grandma in Hawaiian.
Hear from the students at one high school outside Chicago on this episode of the StoryCorps podcast from NPR.
I'm Jesse Thorne.
Adam Scott grew up in Santa Cruz, California, one of the biggest
surf cities in the country.
But when Adam Scott hits the beach, he's got other plans.
Skimboarding, it's like surfing for people that are afraid of the ocean.
It's a very special live Bullseye with Adam Scott, Boots Riley, and more.
Find us in the NPR app, at maximumfun.org, or wherever you get your podcasts.
Making time for the news is important, but when you need a break, we've got you covered on
all songs considered.
NPR's music podcast.
Think of it like a music discovery show,
a well-deserved escape with friends,
and, yeah, some serious music insight.
I'm gonna keep it real.
I have no idea what the story is about.
Hear new episodes of All Songs
Considered every Tuesday,
wherever you get podcasts.
