Front Burner - AI cheating runs wild on campus
Episode Date: May 30, 2025
The use of generative AI has become rampant on college and university campuses across North America. KPMG, who surveyed over 400 Canadian students about this in the fall, found that around 60 per cent use AI models like ChatGPT in their assignments. James Walsh recently wrote a piece in New York Magazine called Everyone Is Cheating Their Way Through College, where he spoke to dozens of students, professors, and administrators about the AI cheating surge, and how it's ratcheting up a debate about the future of the higher education system in North America.
For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
We're all looking for great places to visit in Canada.
One of my favorites is the Stratford Festival.
The theatre is truly of the highest caliber and there's so much selection.
They have 11 large-scale shows on stage and trust me, whatever is on when you're there will
be exceptional.
People always think Shakespeare when they think of Stratford, but it's so much more.
Broadway musicals, family shows, classic comedy and drama.
Whether it's Robert Lepage's Macbeth or Donna Feore's Annie, you will be blown away.
It's the perfect Canadian getaway.
To quote William Shatner, who got his start in Stratford, every Canadian should make the
pilgrimage to Stratford.
Start your next adventure at StratfordFestival.ca.
This is a CBC Podcast.
Hi, I'm Elaine Chau, filling in for Jamie Poisson.
We've had a lot of conversations on the show about the growth of the AI industry in the
last few years, from whether it's a bubble that's about to burst to how hard it is to
govern.
The technology is evolving so quickly and unpredictably that it's upended a wide array of sectors,
from entertainment to manufacturing
and to higher education.
Today, I'm talking to James Walsh about that.
He wrote a piece in New York Magazine this month
called Everyone Is Cheating Their Way Through College.
The use of generative AI has become rampant on college and university campuses across North America.
According to KPMG, who surveyed over 400 Canadian students
about this in the fall,
around 60% use AI chatbots like ChatGPT
for their schoolwork.
A recent poll from study.com in the US
found that 90% of the 1,000 students who were surveyed
used it on the majority of their assignments.
Today, how students are using AI to cheat, why professors and institutions are struggling
to keep it in check, and how this is ratcheting up a debate about the value and future of
higher education in North America.
Hi, James.
Hi, Elaine.
You spoke to dozens of students while reporting on this story. But there was one in particular who really stood out to me.
His name is Roy Lee, and he goes to Columbia.
And he told you that AI wrote about 80% of every essay
he turned in, which to me was pretty astounding.
And tell me a little bit more about Roy
and how he was using AI in his schoolwork.
Right, Roy was fascinating to me. He worked really hard to get to an Ivy League school.
He had first gotten into Harvard and had that admission rescinded because of
a disciplinary issue when he was in high school.
And then he spent a year, sort of gap year, and then a year at
community college and eventually found his way to Columbia, where he proceeded to use AI to kind of
cheat his way through every course. And it wasn't because he'd, you know, had any sort of
academic deficiencies or anything, it was simply because he saw homework assignments as hackable and therefore he felt it was like
permission to use AI because he believed, hey, I'm going to have this tool in my life
forever.
I might as well use it.
Right.
He even created software that would allow other students to cheat on their coding interviews. Was that right?
Right.
So he has this ambition to be a kind of a classic startup Silicon Valley founder.
And so he went to school with this two-pronged mission.
He said school is good for finding a co-founder and finding a wife. Yeah, yeah.
Eventually, he did find a co-founder.
And they landed on this idea of using
AI to build this platform which allowed coders to cheat
their way through remote job interviews.
Coders applying to jobs at really big tech companies often have to do these
kind of riddles, these puzzles that Roy kind of saw the same way he saw, you know, homework
assignments that they were stupid and hackable and, you know, not really a reflection of
the work a coder would be doing in the workplace. And so he built this platform and advertised it by hacking his way through an Amazon job interview.
Hey guys, I'm about to join my interview
for my final round Amazon,
oh wait, Amazon internship,
and I'm using interview coder.
So here's how you can do it.
Just ended the meeting.
I don't know if you could tell,
but I was pretty much copy-pasting exactly what Interview Coder gave me, down to the comments. I was, like, writing down exactly what Interview Coder gave me.
So that feels like an offer.
So I'll be back if I get an offer.
That's how you can use InterviewCoder to get a real Amazon internship.
He got the job, which he turned down and posted a video online to promote this platform.
He's clearly enterprising.
Certainly.
And that was a very enterprising use of AI.
But for many of the students that you spoke to, AI was being used also in some very mundane
ways.
Like, I want ChatGPT to help me write an essay for my first-year English class. How would that process work?
Sure, so essay writing was a big part of this.
You know, of course, students are using it
for their coding assignments,
or they're using it for math or data analysis in sciences.
But essay writing was really a big part of my reporting.
And for example, you know, I spoke to a student
at a top university in New York City,
and she really walked me through her process
for writing essays.
This is somebody who told me she enjoys writing.
She enjoyed writing essays on her own,
kind of cold turkey when she was in high school.
But then she got to college and sort of took the same view as Roy, that essays were now hackable because of AI.
So she would sit down with the assignment and she would prompt ChatGPT. She would first give it some
information about herself. You know, I'm a first year student. This is the class I'm taking. And
we've been reading, you know, these different authors. And then here is the essay prompt that my
professor has given me. My conversation with her was really insightful because she started
off our conversation saying, I am totally against cheating. She knew that plagiarism
was against the student handbook. And so I'm against, quote, copy and pasting from ChatGPT. But what she did instead of copy and pasting, you know, an essay from ChatGPT into a document was she asked ChatGPT
to generate an outline and topic sentences for each paragraph of an essay.
So essentially, this chatbot was coming up with the idea, the topic sentences,
and she kind of had to do a kind of paint by numbers
for her essay.
And so she saw that as the work.
Right.
Yeah.
You know, she wrote her essay in the sense that she literally wrote it.
She was very proud of how quickly she can do this.
You know, she said she started the assignment the morning it was due, woke up early and spent two hours on it, and filed
the assignment.
When normally it would have taken her four or five days of writing, she could just kind
of bang it out.
I feel like a through line in a bunch of your interviews was kind of this as a time saver,
right?
Like, you spoke to a student, you call her Sarah in the piece, from Wilfrid Laurier University in Ontario, and I think she said something to the effect of, I have to spend a lot of time on TikTok, so I don't have much time to organize my schoolwork.
Sure. Sarah was a first-year student at Wilfrid Laurier, and she told me that she was fully addicted to using ChatGPT. In fact, she had taken to Reddit to confess her addiction and earnestly ask for help, because she recognized herself coming to depend on AI in a way that kind of freaked her out.
But you're right, you know, she told me that addiction, it was sort of a cyclical thing
where she would spend hours and hours on TikTok until her eyes hurt, she said. And that meant that
using ChatGPT to write an essay just kind of saved her time. I think we've all had the experience
of being on social media and time flies by and you think what did I accomplish? But that's
probably increasingly a challenge for college students who will get lost in social media
and then suddenly look up and think, oh, I have a deadline. And so it's easier for me,
the time saver, to write an essay using ChatGPT and write
an essay that, you know, it may not be a perfect essay, but it's safe enough that, you know,
I'll get a good enough grade.
Like, I've heard some students even describe it as part of their learning style.
Well, certainly, I think that probably comes from all of the positive or creative use cases.
You know, there are plenty of positive use cases
for AI and education.
And we are delighted to finally announce
the grand opening of the Carnegie Learning Office.
The main service that's being provided in this office
is the development of our K6 math software product
called Mathia Adventure.
It's actually producing greater hints and suggestions
for students to improve their mathematical abilities,
but it's not giving them the answer
in any way, shape, or form.
And so we're using it in a very subtle
and intentional way to help students
improve their math outcomes.
AI can be a good study buddy.
It can help students understand concepts
that maybe they didn't really have a good grasp on during
class or they can produce study guides that are really useful. So I do think there is this kind of
sense among a lot of college students that it's like this, you know, Socratic method of learning
and they're having a conversation with a chat bot.
And so I think a lot of college students sort of see that as their study method.
That's how they learn now.
So we've talked a bit about the student side of things.
I want to ask you about how teachers are dealing with all of this.
And from the folks that you spoke to, you know, what have they been noticing in assignments since AI became so prevalent on campus?
Yeah, I think right away, certainly professors who taught writing classes
began to notice students handed in papers that sounded robotic or suddenly had
perfect grammar or one professor pointed out,
it was strange how counterpoints and counterarguments were suddenly presented just as rigorously
as the central argument. So these papers didn't really sound like college freshmen or sophomores
if they even sounded human at all. That started very soon after ChatGPT launched
and really has only gotten worse.
And I spoke to quite a number of professors
who kind of felt like they were in this state of despair, for a number of reasons.
Number one, there's really no good way
to catch and prove somebody is using AI.
There is software, these AI detectors, and their actual effectiveness varies widely between platforms.
And so the professors I spoke to just did not feel comfortable relying on those detection platforms to sort of bring a case against students.
Right.
Yeah. And it also just makes this sort of cat-and-mouse student-teacher dynamic. It makes professors kind of have to build this Perry Mason case against students. And this is something that is so prevalent that it would
require quite a bit of a professor's time
to kind of catch every single AI plagiarist.
And students also know this.
Students have said, if you're caught, just deny, deny, deny,
because they can't actually prove that you're using AI.
So it really has been extremely frustrating for professors.
Secondly, the student can write their own paper,
and it cannot be detected as AI generated.
But if you really dig into it, did that student come up with their own topic for the paper? Have they rewritten AI output?
It's quite easy to kind of
game the system. Other students told me about laundering AI essays through other AIs so that
detection software won't pick up on it. You can insert typos, you can have AI insert typos,
there are AI platforms that now, you know, advertise themselves as having kind of more authentic language.
You can upload past essays so that it will write in your own voice, assuming that at
this point you have past essays that are your own voice.
So there are any number of ways to kind of fool professors and it's just led to this
kind of intractable situation where professors don't really know what to do.
I found the details of this one study about AI detection software so stark: researchers used fake student profiles to hand in work that was 100% AI-generated.
And after professors graded this work, they only caught about 3% of the AI-generated work.
I imagine that this has made it all quite challenging to come up with policy around dealing with this.
You mentioned a bit that some instructors would put Trojan horse white text into their assignments, and there seems to be just a wide array of policies being implemented to deal with this.
Yeah, the Trojan horse to me was a really good example of kind of the desperation. These are professors, and a number of professors do this, who will insert, you know, between paragraphs of their prompt.
Let's say they assign an essay and the assignment has two paragraphs, and in between those paragraphs
they'll insert in little white text something that is a total non sequitur.
It says mention Finland, mention Dua Lipa, you know, mention broccoli.
And the idea is that if a student were to blindly just copy and paste that
prompt, throw it into ChatGPT, ChatGPT or whatever platform produces an essay and they copy and paste
that essay and put it back into a document and hand it in, you know, the essay will have some strange
off-ramp about broccoli or Finland or Dua Lipa. And so it's just kind of fascinating
because these are students who not only didn't write
their own essay, but they also didn't read their own essay
before handing it in.
And that's sort of what it's come to,
that professors are resorting to these Trojan horses.
And so it's honestly one of the more depressing anecdotes
that I learned about while writing this story.
As much as professors are frustrated with students using AI, at the same time, some are using AI in their own work.
A US survey of 1,800 higher education instructors this year showed that about 40% use AI for their courses. There was also a recent piece in the New York Times
about how some teachers were using
ChatGPT to create lecture notes, organize presentations,
even provide feedback on students' essays.
And how do you think that kind of complicates
the debate around all of this and how to deal
with AI in higher education?
You know, I don't know if it complicates it. I think it's a reflection. This story really,
this is about the early adopters of AI. You know, today's college students are tomorrow's
entry level people in the workplace. And so the idea that professors are using it, just
as other people in other jobs are using
it is not all that surprising.
Of course, to the extent that it's violating kind of the student-professor compact, that's
something that of course we have to be concerned about.
If a professor or a teacher is using AI to leave comments and feedback on a paper, that
would certainly feel to me like I was being shortchanged if
I'm paying tens of thousands of dollars for my education.
And it also opens the possibility, which a number of teachers and professors pointed out to me, that if an essay is AI-generated and a professor is leaving feedback that's AI-generated...
Right. You have two robots creating each other.
Yeah. Or maybe just even one robot. Of course, it raises all kinds of concerns. I also spoke with professors
who were using AI in ways they really felt was beneficial to the teaching process. I spoke with
a professor who had used AI to generate an entire textbook for a comparative literature class. And she told me that the textbook had freed up time with her TAs.
And this was the best class she taught in her career
because she felt like it was such a good textbook and the students were so engaged.
So there are these positive use cases.
Right. And I also read that there are high schools in Miami
that have introduced AI chatbots
to allow students to almost speak to historical figures.
You could talk to Abraham Lincoln to learn about his life and so forth.
So there's that side of things as well.
What strikes me is that there is, even in talking about the use of generative AI in education, there is such a spectrum of its use, right?
Which complicates, you know, when we're talking about the ethics of something, that, you know,
using it to generate ideas for curriculum is different than using it to grade something,
right?
So it just strikes me that there is such nuance in looking at all of this.
Yeah, I also just don't think it's apples to apples in terms of overworked professors
relying on it for help versus students who, you know, do not have fully formed brains
and have never known a college without AI and the sort of downstream consequences of
that might be a little different.
We're all looking for great places to visit in Canada. One of my favorites is the Stratford
Festival. The theater is truly of the highest caliber and there's so much selection. They
have 11 large-scale shows on stage and trust me, whatever is on when you're there will
be exceptional.
People always think Shakespeare when they think of Stratford,
but it's so much more.
Broadway musicals, family shows, classic comedy and drama.
Whether it's Robert Lepage's Macbeth or Donna Feore's Annie,
you will be blown away.
It's the perfect Canadian getaway.
To quote William Shatner, who got his start in Stratford,
every Canadian should make the pilgrimage to Stratford.
Start your next adventure at StratfordFestival.ca.
At Desjardins Insurance,
we know that when you own a nail salon,
everything needs to be perfect, from tip to toe.
That's why our agents go the extra mile
to understand your business
and provide tailored solutions for all its unique needs.
You put your heart into your company, so
we put our heart into making sure it's protected.
Get insurance that's really big on care.
Find an agent today at Desjardins.com slash business coverage.
Another layer to all of this is that many of these universities
where students are using ChatGPT on their assignments, cheating with ChatGPT, many of these universities actually have partnerships, financial partnerships with ChatGPT's parent company, OpenAI, right? Which I would imagine makes banning it as a way of dealing with the situation quite unlikely, right?
There is money and profits at stake here as well.
Certainly.
I mean, AI platforms are generated by people and nonprofits or companies or companies within
nonprofits in the case of OpenAI.
A lot of these schools have partnerships with OpenAI.
Whether or not that plays into the question about banning, I don't know,
because even if they were to ban it on campus,
I don't know how that would even be enforceable.
So I think that would be very, very difficult.
But OpenAI, all the platforms, they offer products like ChatGPT Edu, which provides some guardrails and is tuned sort of more to the learning experience, they say. But they also provide ChatGPT Plus, a platform that costs $20 a month, for free during finals.
Right.
This is like targeted marketing to students, right?
Certainly, certainly. You know, these platforms are in a race to capture users, and younger users,
teens and college students are really, you know, an important demographic for them. I had kind of
the surreal experience as we were finishing this story. I got a push notification that Google had
just sent out an email to parents saying that they were
about to offer a chat bot to, I believe, children 13 and under.
So it's certainly, like any company, they are trying to capture these young users.
James, what do you think are the wider implications of all of this on the higher education system
more broadly here?
I think we're long past the time when anybody kind of thought about college in this sort
of ideal way as a place for students to go off to expand their minds for a few years.
I think Roy, that Columbia student,
was really instructive,
not just because of his views on AI and cheating,
but also because of his understanding of
college as this transactional place,
a place where he would go simply to meet a co-founder,
not because he wanted to engage with the core curriculum,
which is supposed to be, at Columbia, you know, intellectually expansive and personally transformative, as the school advertises or describes it.
And he was just there kind of as a stepping stone to Silicon Valley.
I do think that there's going to be a very fundamental reframing in how we do almost
every bit of knowledge work in the future.
Essays, writing is not going to be the same, tests are not going to be conducted the same,
memorization will not need to be happening.
We're headed towards a future where almost all of our cognitive load is offshored to LLMs,
and I think people need to get with the program.
I think this kind of introduction of AI has forced a lot of educators and then administrators
to kind of rethink what they are assigning
and what they value in terms of education.
Is it this transactional place where students are there
to get a degree that can increase their earning potential
or to get a certain skill
that will increase their earning potential? Or is it a place where students go to, like I said, ideally
expand their mind and develop critical thinking skills so that they can, you know, lead more
fulfilling or enriching lives?
James, did reporting on the story change the way that you reflect back on your own education?
You know, I admit, as I was reporting the story, I did, like, think about the handful of times when I went on SparkNotes, you know, to kind of get the reading like that.
Really revealing your age here.
Cheating, you know, cheating is nothing new.
You know, cheating has been around forever.
But as one student put it to me, the floor of cheating is still there, it's just that
the ceiling has been blown off and the very definition of cheating is starting to change.
I shiver to think if I had this perfect cheating tool, I shiver to think about how my education
may have been different.
And in that sense, it made me think like,
this is not some moral crisis, right?
Like, it's not about this younger generation
having these ethical lapses,
it's that they have this tool that's available to them.
And we as a society haven't figured out this tool's place
and how it should be regulated in the classroom
much less across society.
James, thank you so much for your time today.
Oh, thanks for having me.
That's all for today. Front Burner was produced this week by Matthew Amha,
Joytha Sengupta, Lauren Donnelly, Mackenzie Cameron, and Marco Luciano.
Our intern is Katie Teeling.
Our video producer is Evan Agard, and our YouTube producer is John Lee.
Our music is by Joseph Shabason. Our senior producer is me, Elaine Chau. I've also been filling in for Jamie Poisson, along with Jonathan Montpetit. Our executive producer is Nick McCabe-Lokos.
Thanks for listening to Front Burner. Jamie's back Monday.