Front Burner - ChatGPT in university: useful tool or cheating hack?
Episode Date: August 31, 2023
The ChatGPT hype cycle has died down a bit lately. There are fewer breathless headlines about generative AI's potential and its risks. But in a recent American survey, one in five post-secondary students said they had used AI to complete school work. Today, a closer look at what this means for the academic experience with Simon Lewsen, journalist and the author of a recent piece in Toronto Life titled 'CheatGPT.' We discuss if AI's use really constitutes an epidemic of cheating, or if it's simply a new technological tool for students to take advantage of. Plus, how post-secondary institutions might adapt, and what might be lost along the way. Looking for a transcript of the show? They're available here daily: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National
Angel Capital Organization, empowering Canada's entrepreneurs through angel
investment and industry connections. This is a CBC Podcast.
Hi, I'm Tamara Kandaker.
So lately, the hype around ChatGPT has died down a bit.
There are fewer breathless headlines around the potential and risks of generative AI.
But the technology itself, it's probably not going away. In a recent American survey, one in five post-secondary students said that they had used AI to complete schoolwork. So today, a closer look at what this means for the academic experience: whether AI's use really constitutes an epidemic of cheating, how colleges and universities
might adapt, and what could be lost along the way. I'm talking to Simon Lewsen. He's a journalist and
teacher at the University of Toronto who recently did a deep dive on all this, titled CheatGPT, for
Toronto Life magazine.
Hey, Simon, thanks for being here.
Hey, Tamara.
Thanks so much for having me.
So you're a journalist and you teach at the University of Toronto, and you got to see firsthand the arrival of ChatGPT in the post-secondary setting. And maybe you could
just start by telling me about how you felt when you realized, oh, students are probably going to
make use of this technology.
Yeah, like so many people who teach in higher education, my first
response was panic. There was this sense that this might be an extinction level event for
post-secondary education. When you think about
what university students are asked to do, the essay is sort of the centerpiece. What do university
students do? They read widely, they synthesize thoughts in their head, and then they write those
thoughts in essays. And if you can now outsource that work to robots, what's left for students to
do and what's left for teachers to teach? So I had this sense of, oh my God, is this the end times?
Yeah. And for people who haven't played around with ChatGPT,
fundamentally, what actually is the technology? And in simple terms, how can students make use
of it in university?
The best way to describe the technology is that it's a machine for guessing the likely next word in a sentence.
I give the example of, you know, everybody knows that the phrase "I love" frequently comes
before the word "you," or that "please make it" frequently precedes the word "stop."
And ChatGPT knows this too.
But ChatGPT has studied this massive, massive corpus of written text, including the entire
internet.
So it can make all kinds of similar predictions about all kinds of similar sentences. So it is a writing machine. Effectively, it can
figure out the likely next word in a sentence, but it's not thinking. It's not synthesizing ideas.
It's just coming up with words that sound plausible, but it can write like a pretty decent
academic essay, not a stellar essay, but the kind of essay that would in the past get, you know, a B or a B plus grade.
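(As a rough illustration of the next-word-guessing idea Lewsen describes, here is a minimal sketch using a toy bigram model built from a made-up corpus. Real systems like ChatGPT use large neural networks trained on enormously more text, but the core task, predicting a likely next word from observed patterns, is the same.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "massive, massive corpus" a real model studies.
corpus = (
    "i love you . i love coffee . i love you . "
    "please make it stop . please make it stop . please make it snappy ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most likely next word and how often it followed."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, round(n / sum(counts.values()), 2)

print(guess_next("love"))  # ('you', 0.67): "I love" usually precedes "you"
print(guess_next("it"))    # ('stop', 0.67): "please make it" usually precedes "stop"
```

(Nothing here is thinking or synthesizing ideas; it is only choosing the word that sounds most plausible given what came before, which is Lewsen's point.)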
So this is obviously concerning for a lot of educators, including it sounds like yourself at first when it arrived on campus.
But tell me a bit about those concerns. Why are some profs so up in arms about students using ChatGPT?
Yeah, there's the sense that if students are no
longer writing essays, what is it that students are doing? We've built our universities around
the academic essay. You synthesize ideas and you write them in an essay. And one of the beauties
of the academic essay is it forces you to order your thoughts and it forces you to think slowly.
Unlike an exam, you actually have time to develop your ideas and to be the best version
of yourself.
You can think through your ideas slowly over a period of ideally days or weeks and come
up with something that's genuinely novel and genuinely sophisticated.
What a lot of professors have been doing since the arrival of ChatGPT is they've been
replacing essays with in-class exams.
But in-class exams, they're good at testing knowledge.
They're good at testing skills on the fly. They don't value the kind of slow thinking that I think is
also the highest quality thinking. And I think there's this real sense that if essays can just
be outsourced to robots, and if we're going to rely on exams as a way of assessing students,
or we're going to allow students to cheat on their essays, then there's no more space for
the kind of slow, careful thinking that is
the hallmark of good thinking. And I think that's why so many of us were so panicked about it.
Right. So it's seen as this kind of assault on intellectualism, which is what university is
supposed to be about. And then it's also seen as this new, really sophisticated way of cheating.
So if the computer writes this essay for you and you hand it in and say,
I wrote this myself, that sounds like cheating. But do you think it's correct to say that all
of its uses are cheating? I'm wondering what nuances you found there.
Yeah. And this is what
makes the situation so complicated. There's so many different ways of using the technology. I
think you're absolutely right: most people who teach in higher education would say that putting an essay prompt into ChatGPT, having
it spit out an essay and then submitting that essay as your own, that is cheating. Then there's
the other end of the spectrum. There are things that people do that I think are actually probably
worth their while. There are students who use it as a kind of sparring partner. You use ChatGPT to
get the creative juices flowing, to generate ideas, to critique your ideas, and then that in turn forces you to think better. I think that's quite
a good usage of ChatGPT, and I wouldn't want to see that usage banned. And then there's a whole
middle ground there. There's this massive middle ground, ways of using ChatGPT that are neither
egregious cheating nor maybe totally okay. For example, can you use ChatGPT to write your works cited list
for you? That seems okay. I mean, writing a works cited list isn't particularly sophisticated work.
Okay, but what about transition sentences, a sentence that takes you from one paragraph to
another? Is that okay? What about having it write your concluding paragraph or your intro paragraph
or your abstract? There's so many different ways of using it. I think this is part of the crisis
we're in right now: even if you wanted to create a set of rules around ChatGPT usage, it's not
clear what those rules should be and where you should draw the lines.
Yeah, we were talking yesterday about how we wished we had ChatGPT in university to write our citations, because
it was so annoying. But some of the students that you talked to had developed their own sort of personal codes
of conduct around this, right? Can you tell me a bit about that?
Yeah, what I found out is that
there's a range of different ways that students are using ChatGPT. And I think there are students
who are just egregiously cheating, who are entering assignment prompts into ChatGPT
and having the robot spit out an answer. But there are a lot of students who are working with
the technology in much more nuanced or sophisticated ways. And I think what they're
doing is they're trying to balance incentives here. On the one hand, they know that other
people are using ChatGPT. And if they don't use ChatGPT, they could fall behind. So they have to
compete. On the other hand, they're aware that if they simply rely on ChatGPT all the time,
they could really cheat themselves out of a good university experience. They could cheat
themselves out of learning. So they're developing these very personalized sets of rules. One student
I talked to, he uses it only as a sparring partner. It's there just to get him thinking,
to critique his ideas, to get the creative juices flowing, and that's it. He then does all the
intellectual legwork himself. Once he's got an idea from ChatGPT, he writes the paper independently.
Another student I spoke to was a combined major in commerce and art history, and he really hated commerce. He was
doing commerce because his mom wanted him to do it and because she was paying for his education,
but his passion was art history. And so he would get ChatGPT to do his commerce work to free up
time for the art history work, and then he would work doubly hard on art history. And you might
say that he's cheating, and I think a lot of professors would say that he's cheating. He would say that
he's optimizing his intellectual experience, getting ChatGPT to do the work he doesn't
want to do so that he can do the work that he finds truly enriching. And we see a lot of that,
students figuring out their own hyper-personalized codes of conduct. I think what we also see is a
lot of students who basically don't want to use ChatGPT, but then get themselves into a tight spot and find themselves using it. That's common as well.
So we've been talking about personal intuitions about cheating, where we draw the line and say, that feels wrong. But universities also have codes of conduct, specific rules that define cheating. And if you break those rules, there are, in theory,
consequences. So how are schools defining the use of AI like ChatGPT and delineating between
what's cheating and what's not?
Right now, it's pretty muddled.
And if you go on the website of most schools
to find their ChatGPT policies,
what you'll find is these strange sentences that say
ChatGPT usage could be considered cheating by some profs,
but other profs might be okay with it.
And I think if you could summarize
the way a lot of schools have responded,
what the responses sound like is a version of:
we think this is bad, but we don't know how bad,
so you're on your own, sorry. And I get why schools are doing that. I don't know how you
come up with a coherent ChatGPT policy. I would be in the same position if I were an administrator
at a university. But the result is these very muddled policies, and it's not clear to anybody,
professors or students, what is allowed and what is verboten.
I know there are some schools that have taken
pretty hard line stances, like the prestigious French university, Sciences Po, has taken a
really clear line here, right?
It has. France in general has taken a harder line, Sciences Po
leading the charge. And their rule is that if you use content that's generated by ChatGPT and you
don't cite it, that's plagiarism. And if you get caught plagiarizing, you could be banned not only from the school, but from the French higher education
system. North American universities have been really reticent to take such a hard line.
But like you said, even if universities and colleges do have clear rules about AI use and
when it's considered cheating, it seems like it would be really hard to enforce them. You tell
the story in your piece that I thought was quite
funny about a poetry TA who had strong suspicions that a student had used AI to write an essay for
the course. Tell me about that. What tipped him off and what happened in that case?
I love the story because it's such a classic example of what every TA has experienced.
So you get a paper and you look at it and it feels
robotic. It feels in some ways really sophisticated, like the student has cited a lot of sources and
they've really gone the extra mile. And yet the writing is so meandering and so repetitive. And
you think, what kind of a person wrote this? And then you think, probably it wasn't a person who
wrote this, probably a robot wrote this. But then what do you do? And one thing you could do is you could run the paper through one of the detection software
programs that exist out there, something like GPTZero, which will scan a paper and generate
a probabilistic verdict as to whether it might have been AI generated.
There are a couple of problems.
First of all, these applications are notoriously unreliable.
I put the Canadian Charter of Rights and Freedoms into one of those applications, and it told
me that it thought parts of the Charter of Rights and Freedoms were likely AI-generated,
which is strange given that the Charter of Rights and Freedoms predates the arrival of
generative AI by about four decades.
The other problem with these programs is that they're probabilistic.
They tell you what they think is true, not what's for sure true.
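(To make the probabilistic-verdict point concrete, here is a toy sketch of the kind of heuristic such detectors lean on. This is not GPTZero's actual method, and the feature and threshold below are invented for illustration. One commonly cited signal is "burstiness," the variation in sentence length, which human prose tends to show more of than model output.)

```python
import re
from statistics import mean, pstdev

def burstiness_verdict(text, threshold=0.45):
    """Toy AI-text detector. Returns only a hedged, probabilistic
    verdict -- never a definitive answer, which is exactly why
    enforcement is so hard."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 2:
        return "not enough text to judge"
    lengths = [len(s.split()) for s in sentences]
    # Coefficient of variation: spread of sentence lengths relative to mean.
    burstiness = pstdev(lengths) / mean(lengths)
    label = "possibly AI-generated" if burstiness < threshold else "possibly human-written"
    return f"{label} (burstiness={burstiness:.2f})"

# Uniform, repetitive sentences score as suspiciously "un-bursty" --
# but so can human-written legal or bureaucratic prose, which suggests
# how a document like the Charter could get falsely flagged.
print(burstiness_verdict(
    "The essay is coherent. The essay is repetitive. The essay is meandering."
))
```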
So you have an unreliable application telling you that a piece may or may not have been generated
by AI. You really can't do a lot with that. This particular TA in my story didn't do that. Instead,
what he did is he called the student in for a meeting. And he and the professor said,
we think that you used ChatGPT on this paper. And the student denied it. And the student had
cited a pretty famous poem, Kubla Khan by Samuel
Taylor Coleridge. And I think the professor said, is Kubla Khan a poem or an essay? And the student
said she didn't know, which is funny because she had written about this poem, but she continued to
deny it. And so what do you do? I mean, it's your word against theirs.
Right. So then if post-secondary
students are using AI in all these different ways and definitively
proving its use is so difficult, what can really be done here?
I understand the desire to enforce, to have harsh policies and to enforce those policies.
I felt that way myself, but I think it's impossible.
It's impossible to come up with a set of coherent rules.
And even if we could come up with a set of coherent rules, the rules are impossible to
enforce.
So I think we have to accept a policy akin to decriminalization
where we effectively allow students to experiment with the technology. And I think at the same time
that we do that, we have to raise the bar for what it is that we expect students to do. In the past,
a sort of bland essay that repeats itself a lot and rehearses very familiar talking points, that bland essay would probably get a B or even a B+.
We've now reached a point in time where a robot can write that bland essay.
So what's the value in that essay?
What's the value in writing an essay that a machine could do in 30 seconds?
I don't think there is much value.
So I think what we need to do going forward is we need to reward students for the extent
to which they exceed what a robot can do.
A robot has set the bar high.
Students need to go higher.
And that is what we reward students for doing.
So I think we need to allow ChatGPT, but we also need to raise the bar considerably
from where it's at.
But we're also talking about students who, for the last little while, have been in school
while academic standards have been kind of in free fall, right?
The bar has really been lowered.
How does that factor in here?
Yeah, you've said the thing that everyone who teaches sort of thinks and not everyone wants to say out loud, but it's absolutely true.
Academic standards have been falling at the same time that ChatGPT came along. It's sort of a strange moment where the bar is getting lowered in terms of what we expect from students. And then the bar for what
a robot achieves has gotten higher. And now there's been that crossover. Robots can quite
competently do what we expect university students to do. And I think the positive reading on this
is maybe this is the wake-up call that we all need. Maybe this is the moment that forces us to
reassess what we're doing and what we're asking students to do, and to start raising the bar again and changing direction on things.
Yeah, but it's not just students who might have to up their game. You talked to one prof who said if your students can complete your assignment adequately with ChatGPT, it's not a problem with the student. It's a problem with your assignment. What did he mean by that?
Yeah, I love that quote. It's from Joshua Gans, a professor at the Rotman School of Management at the University of Toronto.
And what he means is that we have to start setting assignments that really push students to do original work.
Professors can get quite fond of really banal assignments, you know, explain why the Treaty of Versailles failed. Explain the controversy surrounding gene editing technology.
Analyze gender in Shakespeare's Twelfth Night.
These sort of general questions for which a robot can come up with a competent, if not
exactly stellar, answer.
What we need to do now is we need to ask the kind of questions that require original thought
to do well.
It's one thing to ask a student to explain John Locke's theory of equality under the
law.
That's something a robot can do easily. It would be another thing to ask a student to explain John Locke's theory
of equality under the law and then apply that theory to the recent spate of indictments against
Donald Trump. The connections there aren't obvious. You have to really read and think in order to make
those connections. To do this competently, you need to do a lot of original human-level thought,
which is something that a robot can't do very well.
So we need to set the kind of assignments that force people to think originally, that force people to do that human level work.
Yeah, that also just sounds like a way more fun and interesting assignment to do.
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel Capital Organization.
Empowering Canada's entrepreneurs through angel investment and industry connections.
Hi, it's Ramit Sethi here. You may have seen my money show on Netflix. I've been talking about
money for 20 years. I've talked to millions of people and I have some startling numbers to share
with you. Did you know that of the people I speak to, 50% of them do not know their own household
income? That's not a typo. 50%. That's because money is confusing. In my new book
and podcast, Money for Couples, I help you and your partner create a financial vision together.
To listen to this podcast, just search for Money for Couples.
So you open your piece with a student named Abhinash, which is a pseudonym, so he's not going to get in
trouble at school. But he made a comparison that really clicked for me. And it made me think about
this a bit differently. So he compared ChatGPT to the calculator. Can you tell me that story?
So when he was growing up in India, attending high school, calculators were forbidden in math
class. Instead of using calculators as you would in a Canadian high school, you would have to write all of your equations out in longhand. And it was this
tedious arithmetic labor that didn't feel like it was particularly useful to him as a student of
math. He felt like if I'm understanding the concepts, why do I have to do all the legwork,
all the busy work myself? Then he arrives in Toronto for his undergraduate degree. Calculators are allowed.
He brings a calculator to an exam for the first time. And he feels the whole experience feels
really off. He feels like he's committing this sort of mortal sin. And then eventually gets over
his fear. He starts using the calculator in his exam. The world doesn't explode. And he starts
realizing that this ban on calculators was a little silly. The calculators are just a tool.
Then a few years go by and ChatGPT comes
on the market and suddenly everybody around him is using it in their schoolwork. And he starts
using it in his schoolwork. When he gets writer's block, he submits his ideas to the
robot, and the robot helps him out of it. And he finds himself thinking, am I doing
something horrible here? And then his next thought is, well, people once said the same thing about
calculators. Maybe ChatGPT is just a tool,
much like a calculator.
Right. And you didn't hear this just from students like him. There were also
profs you talked to who weren't necessarily opposed to the use of ChatGPT in schools,
right? And who saw it as something that students might want to be prepared to use when they leave
school.
One of the most interesting interviews I had was with Benjamin Alarie, who is a professor at the University of Toronto Faculty of Law.
He says that the question for lawyers of the future isn't whether they should use ChatGPT,
it's how they should use it. And if you're a lawyer and you're using ChatGPT to generate a
bunch of phony case law and shoddy legal analysis, then you're a bad lawyer. But if you're using ChatGPT
to suggest avenues for research, to do first drafts of your case factums, which you can then
copy edit later, and you're delivering high quality legal work faster than before, therefore
saving your clients money, well, you're a good lawyer. In fact, you should be ethically obligated
to use ChatGPT because you have an obligation to deliver high quality services as cheaply as possible.
So lawyers need to learn how to use ChatGPT well.
And where are they going to learn that?
Law school, of course.
And so he allows his students to experiment with the technology.
And his feeling is, I don't care whether you use it.
What I care about is the final product.
And if the final product is lousy,
you're going to get a lousy grade.
And if the final product is good, then you're going to get a good grade.
And the details of how you came up with it don't really matter to me.
Interesting. Yeah, we were talking yesterday as a team, and we were remembering how teachers were
so worried about us using Wikipedia when we were in school. But actually, Wikipedia is really useful
if you just learn how to use it right.
Exactly. It's a great comparison. Wikipedia
is problematic if you use it poorly. And if you use it in a sophisticated way, it's wonderful.
Yeah. I think that puts this whole
issue in more of a positive light. But before we go today, I was wondering if you could reflect a
bit on what's lost in this really rapid adoption of this new technology.
I told you earlier on in
the interview that I've made my peace with ChatGPT in schools,
and that's true, but I'm not totally happy about it. I still think something profound is lost. I'm
still a little bit sad about the way things are going. I think what we're losing potentially is
that sense of radical intellectual independence that I think is such an amazing part of the
undergraduate experience. Your undergraduate years, at their best, have something wonderfully
empowering about them. You are reading thought after thought from the past.
You're communing with the great minds of the past.
You're turning over their ideas in your head.
And then you are independently contributing to that conversation.
And that is incredibly empowering.
And I worry that students aren't going to have that anymore.
I worry that as our minds get more and more enmeshed with machines, we're not going to
be as independent as thinkers. And
I think there's something kind of sad there that's lost. So on the one hand,
I feel like we can make our peace with ChatGPT, we can coexist with it, but I'm also grieving
something that I think is lost. And I think what I'm grieving is a sense of intellectual
independence that I really loved during my undergraduate years and that I want for my
students.
Okay, Simon, I really loved your piece. Thank you so much for talking to us about it. I really appreciate it.
Thank you so much. I really love the show. It was great to be on here.
All right, that's all for today. I'm Tamara Kandaker. Thank you so much for listening,
and I will talk to you tomorrow.