The Decibel - Canadian professors on how AI is changing education
Episode Date: May 4, 2026
A big issue hangs over university students and professors: artificial intelligence. There are some rules and guidelines, but professors are largely left on their own to determine how much they want to adopt AI or not, and that's created a wide range of opinions. Today, we hear from five Canadian university professors about how they're thinking about education and students in the world of AI. We speak with Amanda Perry, professor of literature at Champlain College-Saint Lambert and Concordia University; Matt Dinan, associate professor and director of the Great Books program at St. Thomas University in New Brunswick; Sarah Elaine Eaton, professor in the Werklund School of Education at the University of Calgary; Adegboyega Ojo, professor and Canada Research Chair in AI Governance at Carleton University; and Mike Welland, professor of Engineering Physics at McMaster University. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Transcript
The academic year has wrapped up for Canadian universities, and there's been a big issue hanging over classrooms.
It would be an exaggeration to say it's the only thing that we talk about, but we certainly talk about and think about it a lot.
For people in departments like English and French, it's hard to undersell how hugely disruptive this has been.
If you hadn't guessed it, the issue they're talking about is AI.
AI has been a huge deal in education since about January of 2023.
A few months after ChatGPT was released, it's become a pervasive topic across education at all levels.
I study academic integrity and plagiarism, and AI is permeating everything that we do.
It feels like AI has seeped into every corner of our lives now.
And we wanted to hear from professors about how higher education has changed.
If you ask me, the essay is dying a slow death.
It's palliative, and we should let it die with dignity.
So I'm here in the studio with decibel producer, Rachel.
Hey.
So you spoke with five professors with a range of opinions on AI in the classroom.
Yeah, I really wanted to understand how they're dealing with this technology and how it's changing what they do.
And the professors I spoke with were dealing with it in really different ways.
from skeptics who are trying to keep AI out of their classrooms entirely to those who are fully embracing it.
Okay, so Rachel, you're going to guide us through today's episode.
Yeah.
Today, the professors, their process and pedagogy.
I'm Cheryl Sutherland.
And I'm Rachel Levy McLaughlin.
This is The Decibel from The Globe and Mail.
AI use has become almost universal among university students globally.
A study from Brock University found that in the 2022-2023 school year, more than 72% of post-secondary students used AI.
That number hit more than 94% last year.
I should say there are some students who are turning towards AI in ways that are thoroughly acceptable.
This is Amanda Perry.
She's a professor of English literature at Concordia University and Champlain College, St. Lambert,
which is part of CEGEP, Quebec's mandatory educational bridge between high school and university.
I was talking to the student Mario the other day, and he was arguing that, you know, he speaks
three languages.
He writes his drafts himself.
And then he uses AI to do a little bit of a proofread at the end, goes back and forth between
the two, decides what changes he's going to keep.
And if I could trust every student to use AI exactly like that, it'd be fabulous, right?
And I would have no problems.
What really keeps me up at night, though, in terms of thinking about how I need to design my
class is that some students do this because they think it will get them better grades.
And the fear is that sometimes that might be true for those who struggle with their writing.
And of course, it's not genuine.
They're not actually improving.
But I really do worry about perverse incentives in terms of how we do our assessment.
She says AI is particularly challenging in literature classes.
A major part of our job is to evaluate their capacity to read and write. And now we're seeing tons of students who are outsourcing,
not just the writing, but also the reading, to things like ChatGPT, things like Gemini. And so
it's caused this real crisis in terms of how much do things like composition, things like grammar
and spelling still matter. Should we still be evaluating this in the same way? But also,
how can we ensure that there's some level of fairness and that students are doing a minimum of this
work themselves. So this year, in the CEGEP courses she teaches, she did more in-class assignments.
She did assign her students an essay they'd complete at home, but she added an oral defense
component where the students had a 20-minute meeting with her after handing in their essays,
where she could ask questions and give them a grade and feedback. And I saw it mostly as preventative.
The idea was that they would be less likely to lie to my face. I think that it did circumvent some
things. There were things I liked about it in terms of getting to give students feedback immediately.
There were things I hated about it in terms of students who had fabricated quotations and I had
to tell them to their faces. One of the defenses went off the rails and became more confrontational
than it needed to be. And so I feel very ambivalent about that mechanism. In a creative writing
class she taught this year, she allowed some AI use for the first time. She told her students that
they can use it if it's part of the writing process. So obviously, AI can't write the whole assignment,
but it can help with structure, syntax, that kind of thing. But the students had to disclose it.
And what I found in that class is that they barely ever use it. Most of them are like, no, this is a poem.
This is my soul. Why would I want to outsource that process to a machine?
I think that in 2020, we thought that the biggest thing that would happen in our lifetimes for teaching was going to be COVID.
And we were not right about that.
It makes sense to think about the arrival of the widespread use of LLMs as like a rupture within a rupture.
This is Matt Dinan.
He's an associate professor at St. Thomas University in Fredericton, New Brunswick.
He's the director of the school's great books program where his students tackle readings that hit on big ideas, from freedom to justice to friendship.
And he doesn't want AI and large language models or LLMs in his classroom at all.
Zero AI.
To be a bit polemical, I'd say that I do not see a role for LLMs, in particular in liberal arts education.
And of course, being mindful of the need for various accommodations for students.
We also had no technology in the classroom this year,
which is to say that the 50 students in the class and I sat in a circle,
and we had our books and our notebooks out, and we talked.
His theory is that if the students see the value in their education,
they're less likely to use AI.
So he built his course with that in mind.
I changed the content of my first year course to be all about liberal education itself.
I'll just take a quick second here to remind you,
what a liberal arts education is.
It's focusing on critical thinking, communication, and problem solving, rather than specific
skills that will land students a job. It's more about teaching you how to think as opposed to
training you for a job. So we read a series of great texts from the ancient Greeks all the way up
to the present day, really diverse readings which don't agree with each other about the subject
of liberal education itself. He also used his classes to help
close the gap in skills he was seeing with his students.
So I did a lot of assignments where I would, in the process of getting them to think about
the significance of the apology of Socrates, I get them also to work on specific academic
skills. So what is a thesis statement? How do I cite a text? How do I use evidence in making an
argument? We do these things together. I teach it explicitly. And then they get the chance to
practice them. He used something called specifications grading. So if the student meets the criteria,
they'd get full marks. And his students were able to rewrite their assignments. He describes it as
raising the standards, but lowering the stakes. Then he gave his students take home essays.
And he says that other than a couple of students, most of them did the work themselves.
Can you explain really quickly how you know that the students, other than those couple of students, didn't use LLMs for their essays?
I know that they didn't use LLMs for their essays because I've been reading their
handwritten writing all year, so I know what it sounds like.
Also because I scaffolded their questions, I gave them the opportunity to develop their ideas
in class and time.
I met with all 50 of them and discussed their ideas before they wrote their papers.
We made a really strong case in this class together that what we were doing was worthwhile.
And maybe I'm a fool, but I believe them when they tell me that they did their own work.
And he says that his class was having a great time. They were super engaged.
The vibes in the room among these students, the vibes are impeccable, as the kids would say.
Impeccable vibes.
Then you have professors who are leaning into AI in the classroom, understanding that it's probably inevitable.
I use artificial intelligence with my students.
My perspective is that I'm training future professionals, and AI is likely to be part of their future.
So I want them to be attentive not only to how to use the tools, but also to think about the ethical complexities,
including things like privacy, bias, etc., when they're using them.
Sarah Elaine Eaton is a professor in the Werklund School of Education at the University of Calgary.
She's also the director of the Post-Plagiarism Research Lab, which studies how
to think about ethics and integrity in the age of generative AI.
One example of how I use AI in my teaching practice is that I'll work together with an
AI app to help me identify vulnerabilities in my own assignments to AI misuse or cheating.
And after I've asked for an analysis of the assignment, then I work together with AI to help
build an assignment that's more resilient to AI misuse.
The suggestions AI gave her were to build in oral assignments or turn the assignment
into smaller tasks or use local information that the AI wouldn't have access to.
Her students were allowed to use AI to help with assignments, but transparency and responsibility were key.
I think what we want to do is think about how humans remain accountable for their work and
how they can use technology to their advantage while not circumventing their learning
if they're students, right?
So I often say we want students to use AI to supplement their learning, but not to circumvent it.
One of the things Professor Eaton did with her students to ensure they weren't circumventing their learning was meet with them one-on-one.
And this was something I heard from several professors.
Professor Eaton and her students would go over their questions, what their next steps are, and how they'd advance the project.
So it's more iterative and back and forth, an ongoing conversation with the students, so that
by the time they get to the end, I've seen that work a few different times.
For Adegboyega Ojo, this whole issue really aligns with his own research.
He's a professor and Canada Research Chair in AI Governance at the School of Public Policy at Carleton University.
He teaches courses on public policy and researches digital governance.
So it was out of the question for me in terms of not engaging with it.
For me, it's always about how do I, you know, engage with it so that it benefits my teaching
and also improves the learning experience and learning outcomes for my students.
The way he sees it, students are going to use AI, so he better help them do it properly.
And he designs his assignments with AI use in mind.
He gives his students an assignment, create a briefing note, let's say,
and he tells the students to do the first part of it with AI, give the prompt to AI,
and then come to class and talk about what the AI did wrong.
He said his students love this process, and honestly, I can imagine the fun of finding the holes this all-knowing robot left in your assignment.
I think once you provide the opportunity for students to really discuss in a classroom, all right?
Then you really reduce that risk of people just doing things, you know, without thinking.
He also uses AI to help make his classes more engaging, for example, turning his lectures into a podcast or a video or concept notes.
All of a sudden, I'm giving many modalities of just one content.
That's good, all right?
These are additional materials, all right, for students if they do want to take advantage of that.
He said his students really enjoyed the podcast version of his lectures, and it helped keep their attention in class.
And he sees AI as a tool they'll need to use in their futures.
One of the things that is fundamental regarding AI in the workplace is going to be the degree to which the individual workers,
are able to complement what AI is able to do.
One of the fields experiencing major displacements due to AI is coding.
So Mike Welland is thinking a lot about the world his students are heading into.
He's a professor of engineering physics at McMaster University
and teaches numerical methods for engineering and computational multi-physics.
Let me translate that for those of us who didn't study STEM.
The bulk of his courses are about what's happening when you ask a computer to solve a problem,
like modeling airflow over a car, for example.
He is open to students completing assignments with AI.
He casts the AI as the junior employee, gathering the initial facts, data, or code together,
and the student then becomes the manager.
However you get the answer, it's on you to understand why that answer works, if that answer works, and how you would improve it.
That means it's still important for these students to know how these theories work or how to code
so that they can check what the AI creates.
First of all, what AI generates is plausible code, not necessarily good code.
He also does an oral component to the assignments to help ensure the students do understand the material.
And if a student gets a zero on one component, the assignment or the oral part,
they fail the whole thing.
In my previous job, I used to work at Canadian Nuclear Laboratories.
And what you would have to do is you would be tasked with a problem.
You would go away and you would work on that problem.
And never alone, by the way.
So the point of the management review then becomes to scrutinize and evaluate and say,
why did you do it that way?
So I kind of brought that into the classroom and thought,
anybody can hand in the written assignment.
I'm welcoming groups, I'm welcoming AI, welcoming any mode that they want to get that answer,
but then that has to stand up to scrutiny.
Professor Welland gives his students optional design projects that overlap their own interests with what he's teaching.
I had students who were doing tumor detection using electrical signals,
and another one that was doing gravitational lensing in the same course.
It's all enabled by the same tools.
It's all partial differential equations and numerical methods.
But by giving them the tools to do the things that they are interested in, they see the value to coming to that class.
Professor Welland also uses AI to help his teaching.
He created a chatbot for his students that draws on his course material.
So students can ask specific questions and it'll link to the material with the answer.
And it can generate practice questions.
So they'll just go on to it and say, generate multiple-choice questions and identify the answer and the passage until I get sick of it.
And it'll just do it again and again and again.
And it'll just keep going until they've memorized the entire textbook.
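A course-material chatbot like the one Professor Welland describes is typically retrieval-based: a student's question is matched against the instructor's own notes so that every answer can link back to the source material. Here's a minimal sketch of that matching step, assuming the notes are plain-text snippets; the note IDs and snippets are invented for illustration.

```python
# Minimal sketch of retrieval over course notes: score each snippet by
# word overlap with the question and return the best match, so an answer
# can always link back to the source material. Contents are illustrative.

COURSE_NOTES = {
    "lecture-03-finite-differences": (
        "Finite difference methods approximate derivatives by sampling a "
        "function at nearby points on a grid."
    ),
    "lecture-07-pde-solvers": (
        "Partial differential equations such as the heat equation can be "
        "solved numerically by discretizing space and time."
    ),
}

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of bare words."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str) -> tuple[str, str]:
    """Return the (note_id, snippet) pair with the greatest word overlap."""
    q = tokenize(question)
    return max(COURSE_NOTES.items(), key=lambda kv: len(q & tokenize(kv[1])))

note_id, snippet = retrieve("How do I solve the heat equation numerically?")
print(note_id)  # lecture-07-pde-solvers
```

A production version would use embeddings or TF-IDF scoring rather than raw word overlap, and would pass the retrieved snippet to a language model to phrase the answer, but the link-back-to-source behaviour comes from this retrieval step.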
He also created a podcast from his lectures.
Remember, Professor Ojo at Carleton did that as well.
But his students were not into it.
I asked my students if they thought that it was useful and they categorically said no.
They hate it.
It's highly robotic.
It lacks a human soul, quite literally.
I was glad to hear in this that the podcast was kind of awful.
So I was like, oh, my God, they're making podcasts now.
Like, there goes my job.
And I was like, okay, great, they suck.
Yeah.
I can send you the link if you're curious.
I am curious.
And here's a smidge of Professor Welland's podcast.
Today we're plunging into the really fascinating world of numerical methods for engineering.
We're drawing from an advanced engineering.
We're going to uncover the ingenious ways engineers and computers approximate reality, revealing some, well, surprising truth.
Okay, I think my job is still safe.
After the break, the death of the essay.
The end product that I get as an essay is no longer a representation of the student's learning process.
Okay, I did a lot of essays in my undergraduate degree.
I'd get an assignment from my professors, usually something like, write an essay using at least three texts we studied
in class. I'd take my notes from the readings, compile them into a crazy document that I called
my garbage draft, and somehow turn that into an essay that I'd then hand in to my professors.
AI has changed the game for assignments like this, and professors have had to seriously rethink
them. The one thing that almost all of us have had to stop doing, I think, is giving them an essay,
idea, prompt, question, sending them home and then grading what they hand back to us.
This is Amanda Perry, the English literature professor in Quebec.
She says the process now has to be scaffolded, so maybe do some brainstorming in class.
Do a draft in class that they then take home and revise, so the students don't automatically
put the prompt into a chatbot.
What she's found is that she has to be okay with some mistakes;
they show the humanity behind the work.
For me, one of the biggest shifts has been with the way that I relate to polish,
to students being able to produce things that are perfectly grammatically correct
and have elegant turns of phrase. That used to be something that delighted me.
Now, sometimes it's something that makes me suspicious.
And I find a comma splice.
I find some awkward syntax.
And I'm relieved because I think, oh, look, here's someone who has indeed done this work themselves,
who is struggling to figure out how to say something,
and I'm a lot more generous in terms of how I approach that now.
Professor Eaton at the University of Calgary feels something similar.
One of the biggest things I'm struggling with as an educator,
and I don't think I'm alone in this,
is how do I let go of valuing perfection
and instead focus on valuing my students' process,
giving them a chance to make mistakes,
accepting work that may not be grammatically perfect.
I want my students to know that they can be authentic,
but that involves me accepting their imperfections.
The professors on the STEM side of education
are focusing more on elevating students' knowledge,
understanding that AI can do those initial tasks easily,
like the engineering physics professor Mike Welland.
AI can handle some of the lower-level learning constructs.
So in education, there's something called Bloom's taxonomy, and it kind of rates the different scales
of learning and knowledge acquisition. Right at the bottom is memorization and regurgitation.
What AI has done, in its capability of delivering precise answers, is it takes it up more and more
notches up that scale.
And Professor Ojo at Carleton says that the introduction of AI means that learning has to be
bumped up the taxonomy.
We have to design the learning in such a way that we move
the materials from just being able to remember things or describe things or even discuss things
to the point higher up in Bloom's taxonomy where you have to critique and assess.
So you're actually challenging the students more at that point.
One of the big concerns with AI use is around academic integrity.
Are the students actually doing the work? Are they cheating?
These professors have put in place measures around responsible AI use,
but how do they ensure compliance?
For the two literature professors, Amanda Perry and Matt Dinan,
this whole thing is a burden.
Policing for AI use is incredibly emotionally exhausting
because it does put you in this situation of constant suspicion with students.
And it's not the sort of relationship any of us want to have with them.
The question that every educator should be asking is not,
How do I police my students, right?
How do I get better at detection and, like, making sure that someone's, like, not hoodwinking me or getting one over on me by submitting work that they didn't do?
Like, to me, that sort of attitude is fundamentally at odds with the relationship that should exist between a teacher and a student.
I'm not, I don't want to be a cop.
I want to be a teacher.
Professor Eaton in Calgary, remember, she's the director of the Post-Plagiarism
Research Lab, says that the traditional methods of detecting cheating are no longer sufficient.
You can't put an assignment into a program, in my day it was Turnitin.com, to see if it's
been plagiarized, because the chatbots create unique outputs. So we need to acknowledge that our
understandings of plagiarism have changed with new technologies. Artificial intelligence is the next
wave of technology that's challenging our historical understandings of what it means to plagiarize. So
thinking about flipping it around, and instead, how do I ensure that students can demonstrate their learning becomes a bigger question, a more complicated question, but that's actually the crux of the matter, if you ask me.
Professor Welland's theory is that AI hasn't actually changed academic integrity issues.
Cheaters were going to cheat. Cheaters are going to cheat. The people who see value in their education, who are in it for the love of the game, who want to learn, they were not going to cheat, with or without AI. The ones who are there for a passing grade and to move on with their life, they were going to cheat anyway.
Professor Eaton says she's seen no evidence that the rates of academic misconduct are increasing.
And Professor Welland also feels like it'll get harder and harder to enforce rules, even in an in-class setting.
Previously, I thought that if the student doesn't have their computer, then at least, like, whatever they're writing down, they would have thought of that.
But all of a sudden, all the students have smartwatches.
So now everything's coming out on the smartwatch.
Okay, no watches allowed, right?
Now all of a sudden, everybody has smart glasses.
And what if those glasses are prescription glasses?
You can't tell somebody to take off their glasses when they're writing an exam.
A lot of universities do have policies or guidelines around AI use,
but it's mostly about what cannot be done:
wholesale writing of assignments and professors using AI to grade, for example.
But the rest of it, here's Professor Eaton in Calgary again.
Overall, there's a lack of policy direction in Canadian higher education.
And when it comes to designing and redesigning assessments, largely,
educators are left on their own.
And if they take the initiative to figure it out themselves, good on them.
But universities and colleges can do a better job of providing educators with some support right now.
There are a lot of opportunities in terms of how we steer and change our teaching and assessment so that it really prepares our students to be the ones riding that wave, all right, instead of just being scared and being afraid, you know, and displaced, but they know they actually have the skills already, know for different tasks what they need to do, how they need to engage, how they actually add to that.
It's a challenge insofar as it has forced us to rethink, I think, the meaning and purpose of what we do as educators at the university level.
But I don't think that that is bad.
I think that's actually been a wonderful opportunity for us to understand what we're doing
and why.
So what AI has done is kind of it's made it so that we can't pretend like the old ways
of doing assessments are still really valid.
It means that we have to consider different ways to evaluate what the students actually understand.
So there's this question in the universities.
Are students actually learning the material or are they learning to pass the course?
And that's really the whole point of education, students learning and preparing for their futures.
For many students, they show up at university as a sort of default setting after high school.
And they're told, I think, by universities that the reason why they're there is simply to train for a job that they might someday have.
And while I think that educated, liberally educated people especially make excellent employees in every field, that's not the purpose of liberal education.
We don't make employees. We help human beings become free human beings.
I think our job as educators is to prepare our students for the future.
And our students are the stewards of a future that we can't yet imagine.
That was professors Matt Dinan, Sarah Elaine Eaton, Amanda Perry,
Adegboyega Ojo, and Mike Welland.
A big thanks to the professors for speaking with me.
That's it for today.
I'm Rachel Levy McLaughlin.
I produce the show with Madeline White and Michal Stein.
Our host is Cheryl Sutherland, and our editor is David Crosby.
Adrian Chung is our senior producer, and Angela Bichenza is our executive editor.
Thanks so much for listening.
