The Current - Will AI make us better writers? Or kill our critical thinking?
Episode Date: May 14, 2025
If you’ve tried to write an email or opened a blank document recently, some kind of AI assistant has likely offered to polish your words — or even write whole sentences for you. Some advocates argue that generative AI could open up a new frontier in writing, but others warn it’s dulling our creativity and critical thinking for the sake of efficiency.
Transcript
How did the internet go from this?
You could actually find what you were looking for right away,
to this.
I feel like I'm in hell.
Spoiler alert, it was not an accident.
I'm Cory Doctorow, host of Who Broke the Internet
from CBC's Understood.
In this four-part series, I'm going to tell you
why the internet sucks now, whose fault it is,
and my plan to fix it. Find Who Broke
the Internet on whatever terrible app you get your podcasts.
This is a CBC Podcast.
Hello, I'm Matt Galloway and this is The Current Podcast.
Hey AI, write me an opening line for a national radio program segment about AI's impact on writing.
Welcome, listeners, to a thought-provoking discussion on how the burgeoning world of
AI writing is poised to reshape one of our most fundamental human abilities, critical
thinking.
It's a mouthful.
Maybe, maybe our jobs are safe, at least for now.
We'll see.
Writing used to be just you and the blank page, but now you are probably seeing AI writing
prompts everywhere. Open up a blank email or a blank document.
There's often an AI prompt.
It says, help me write.
You might've been tempted by these generate
text buttons that whispered, do you really need
to write that eBay listing yourself or that job
application or that essay?
Does every word really need to be yours?
It is prompting a fierce debate between people who are worried that outsourcing our writing
to AI will erode critical thinking and creativity and those who think generative AI is opening
a new frontier in writing.
In a moment, we will hear from someone with serious concerns about the implications of
writing with artificial intelligence.
But first, I am joined by Jeanne Beatrix Law.
She is a professor of English and coordinator of the graduate certificate in AI writing technologies at Kennesaw State
University in Georgia. Jeanne, good morning.
Good morning, Matt. Thanks for having me.
Thanks for being here. This is such an interesting subject because as I say, it seems like it's
everywhere when you open these documents or applications. In your world, how are students
already using AI to write?
Well, and it's not just in my world, I think.
I think we see this nationwide in the US
and even internationally.
There's a lot of data out there that shows
students are using AI for some deeply nuanced tasks, right?
Like starting to write, brainstorming, editing.
So they're not just using it to get an answer,
let's say for a math quiz or a biology test.
They're really using it in deeper nuanced ways.
They may be doing that too.
I mean, using it just to get the answers.
Yeah, sure.
Do they, is your sense of, I mean, in your experience,
do they admit to using artificial intelligence?
So I would say at my university, we surveyed 1,700 first-year writers, and about 40% of
them said they use AI in their academic writing.
More of them said they use it in social media writing and their workplace writing, but yeah,
I think that number is way higher.
So you have written that this could be more of an opportunity for students than a threat.
Why do you think it is important for students to learn how to write with generative AI?
Well, first of all, what we know is that organizations that predict workforce development, like the
World Economic Forum, they really have gone all in with thinking
about human collaborations with AI, how we can use AI for critical thinking, for analytical
thinking and creative thinking. So I think for me, preparing students to thrive in AI
infused workplaces is important for me as a professor. But what I think is also important is teaching students
how to ethically and responsibly use AI. And I think that starts in a first year writing
course or a general education course at university.
So tell me how you would do this and explain this in a way that somebody who isn't familiar
with ChatGPT or other AI writing would understand this. How do you go about teaching your students
to use AI?
So I talked to them first day, first week, right? Prompt first model is what we call it.
Always making sure, number one, that the human is driving. We're never going to offload our
creativity to generative AI. We're never going to single prompt. I'm never going to, like what you did when you first introduced the show, never just put
an input in and then just take that output and use it. So what you might want to do is
you might want to precisely, rhetorically, effectively use your words to input. And then
you want to evaluate and read that output that you
get. Then you want to revise it, reflect on it, maybe you want to throw it away
and start all over and then you input again with different words. No, you didn't
quite get this right, let's change this, how about this, give me recommendations
for this. Get another output and you continue to converse, I guess would be the
way I would put it or iterate with that machine until that output combined with all of your inputs
that you put in aligns to the vision that you wanted originally to have in the writing.
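What that converse-and-iterate loop looks like in code can be sketched briefly. The example below is a minimal illustration in Python, assuming the OpenAI Python client and an example model name; it is not Law's classroom method itself, just the shape of the loop, with the human reading and steering every turn.

```python
# A minimal sketch of the iterate-and-evaluate loop described above,
# assuming the OpenAI Python client; any chat-style API would work.
# The human stays at the helm: every output is read, then either
# accepted or answered with a refining follow-up prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = []      # the running conversation: all inputs plus outputs

while True:
    prompt = input("Your prompt (or 'done'): ")
    if prompt.lower() == "done":
        break
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice, not prescribed by the method
        messages=messages,
    )
    draft = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    print(draft)  # the human evaluates, then refines with the next prompt
```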
And this is the rhetorical prompting method that you've talked about?
It is, yes.
And we teach that at Kennesaw State.
We have piloted that and we developed it in 2023.
Our research teams continue to test it and iterate on it,
both with our students and with students
on the Coursera learning platform.
And so we really have found that when we develop something
that looks a lot like the writing process already,
which the prompting method does, students remember, they bring that prior knowledge
of how they learned to write or how they were introduced to writing in grade school or primary
school, and they can engage with that process.
Again, the only difference is you're explicitly putting those things into the
chat box.
You said earlier that this is a way to teach students how to use AI ethically, but also
responsibly, and that if you use AI responsibly, this can augment human creativity. What does that mean?
Absolutely.
What does that mean, responsibly? Why do you use that word and how do you define it in this context?
I would define responsibly as a use by a human, right?
The human decides how to use that output, right?
The AI is kind of agnostic to that output.
The human decides how to use it.
So what we teach our students is a four part method
where they have to ask themselves questions,
four questions.
Is this output useful to me?
Is it relevant to others?
Is it accurate?
So we've got to fact check that output.
And then is it harmless?
What is the degree of harmlessness
in regards to prompt engineering, right?
The prompt itself.
An example of this would be if I
am a staff member at a university and I have an agenda for a meeting, let's say,
right? And the meeting, someone asks not to be named at the meeting and I put
the agenda in and ask for minutes from a large language model and then the large
language model gives me those minutes and it names that person. That's the
type of harmlessness we teach our students to think about, not harmlessness
in terms of environmental impact or the larger ecosystem.
We're talking specifically about how they use it for writing.
So if any of those questions that they ask themselves, if they can't say with certainty, yes, I'm sure,
then they go back and continue to converse
and continue to refine and polish
until they get to a point where they can.
And I think that's important.
But I also think, Matt, it's important
to acknowledge the limitations of that method
and the idea that some students might over-rely
on generative AI.
And just use generative AI to create the essay for them.
They're not having a conversation.
They create a prompt.
The prompt leads the service or the technology
to write the essay for them and they're done.
Right.
And I think my response to that is what we've seen
at my university, I'm at a large public university with a fair number of first generation students.
And so we surveyed 1,700 first-year students and asked them about their thoughts on AI
and cheating.
And it was really interesting the answers that we got.
What we saw from 2023 to 2024 was a dramatic increase
in the number of students who didn't think using AI
was cheating, but we also saw that more than 60%
of students surveyed said, oh yes, sometimes it's cheating.
And it's cheating if I don't have my voice in it.
It's cheating if I use it for this purpose, right?
Well, and making sure that their voice in it
is really important, because one of the real concerns
around this is that, A, there are people who believe
it's not writing, but also that you are dulling
critical thinking, and you are dulling,
if you allow AI to write your first draft,
rather than staring at the blank page
and trying to figure out something creatively
that will fill that page, that you reduce critical thinking
and that you circumvent what writing is, which is it's
hard. I mean, it's hard work.
Yeah, it is, for sure.
So what do you say to people who raise those criticisms?
I would have a couple of responses. My first response, I think, would be one of accessibility.
So for many writers who are neurodiverse, I'm one of those writers.
For neurodiverse writers, the tyranny of the blank page
can be part of what stops us, what paralyzes us
from being able to write.
And so using a large language model to give you ideas,
like if I already have my topic
and I'm just looking for ideas to scaffold it
or to organize it or to structure it or give
me recommendations on how best to talk about this idea.
That can be a game changer for folks who are neurodivergent in terms of writing.
The other thing I might say for students learning to use this is learning to use it as part
of the writing process recursively.
So it may be the beginning or it may be the
middle of your writing process, but it can never be the end. You always have to end it.
How do your colleagues react? I mean, as somebody who is an English professor who teaches people to
write, how do your colleagues respond to you embracing AI in that writing process?
I would say there are some who agree and some who disagree vehemently,
and I understand those disagreements. I think some people come at the disagreement from an
environmental impact stance, from other stances. I think we're getting better with some of, I think,
you know, the models are getting better in terms of environmental impact. But some of the colleagues who agree,
agree really because they are thinking about the
responsible use, right? Not one and done,
but actually making it part of the writing process.
Because when students get to workplaces,
they're going to be using AI as part of the writing process. I mean, in the US,
you know, there's some great statistics
that show that 94% of companies expect their employees
to use AI, but only a third of those companies
are training their employees to use AI.
So if they're not getting training,
but they're expected to know how to use generative AI,
maybe we ought to be helping them
on that journey in university.
I have to let you go, but very briefly, what is one piece of advice you would give to somebody
who wants to be a better writer while using AI?
Always make sure that you are the human at the helm. Don't offload your creativity,
use generative AI to amplify it.
All right, Jeanne, we'll leave it there. That's a concise lesson. I appreciate that. Thank you very much.
Thank you, Matt. I enjoyed it.
Jeanne Beatrix Law is a professor of English and coordinator of the graduate certificate in AI writing technologies at Kennesaw State University in Georgia.
Hey there, I'm David Common. If you're like me, there are things you love about living in the GTA
and things that drive you absolutely crazy. Every day on This Is Toronto, we connect you to what matters most about life in the GTA,
the news you gotta know, and the conversations your friends will be talking about.
Whether you listen on a run through your neighbourhood, or while sitting in the parking lot that is the 401,
check out This Is Toronto wherever you get your podcasts.
Emily M. Bender is a professor of linguistics at the University of Washington,
author of the book, The AI Con,
How to Fight Big Tech's Hype and Create the Future We Want.
Emily M. Bender, good morning to you.
Good morning.
We just heard about the tyranny of the page,
how AI can help people confront that tyranny,
but also partially ensuring
that they are still the human at the center of this.
This technology is everywhere.
What do you make of the use of AI
to make us better writers?
Is that possible?
It makes me sad to hear about students being told
that they need to learn this technology.
Your previous guest cited some statistics showing that companies expect people to use this,
but they're not training their workers in using it.
All of that is just part of this current hype bubble.
And I really wish that we could refocus in higher education on nurturing critical thinking
and self-expression.
And all of that is deeply wrapped up in the process of learning to write.
What's wrong with using that technology to be a better writer?
Well, there's a couple of problems. First of all, the technology itself is unethically produced,
so it's simply not possible to use it in an ethical fashion.
What do you mean?
Your previous – well, there's the environmental issues that your previous guest alluded to.
There's the fact that it's all built on appropriated, if not stolen, data that was not consented to, not collected with consent.
And there's an enormous amount of exploitative labor practices in the way
that the data workers are treated.
People in Kenya and Venezuela and other places who have to do really grinding
work looking at terrible outputs so that the users at large don't see those outputs.
So that's the systems that we're talking about. And to be presenting this to students as something
that could be used ethically, I find a problem.
That's sort of the first level.
Yeah.
If you go deeper, do you have a moral issue
with schools teaching kids how to use this
technology in settings that are ostensibly about,
you know, tapping into or cultivating a sense of critical thinking,
a sense of, you know, just base-level creativity.
You have a blank page, you need to fill it,
and you need to kind of put the gut work in
to make that happen.
Yeah, I wouldn't say so much that's a moral issue
as a professional issue.
Our job as educators is to help students learn to think,
and writing and thinking are inextricably
intertwined. And if you say, put your theme in and then out comes some notes, a first draft, and
then keep working with that, that is part of the process. There are some skills that
are being trained there but it sort of gets removed from the person's own thoughts and
the work of taking your own thoughts
and turning them into the page.
I think you can't skip that part
and expect to learn how to do it.
Can you explain just very briefly
for people who don't understand this,
just how something like ChatGPT works
and how that feeds into your concerns
around this technology.
Yeah, so ChatGPT is built on something
called a large language model.
Sometimes people misspeak and call it a language learning model. It's not. All of these words like learning and artificial intelligence are anthropomorphizing terms that sort of muddy
the waters here. But a large language model is a statistical model of literally what letters go
where in a very large collection of text. So it's modeling words, but not what the words mean,
just sort of how they're spelled
and what spellings go next to each other.
And then when you use ChatGPT, what the system is doing
is repeatedly answering the question,
what's a likely word to come next?
And it seems like it's doing so much more
because what comes out is plausible looking text
in languages we speak, and then we make sense of it.
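To make that concrete, the next-word idea can be sketched in a few lines of Python. The toy bigram model below is nothing like ChatGPT in scale or architecture (real systems are neural networks trained over tokens), but it shows the sense in which a language model only records which words tend to follow which.

```python
import random
from collections import defaultdict

# A toy stand-in for the "very large collection of text"
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which: the simplest possible
# "statistical model of what words go where"
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def extrude(start, length=8):
    """Repeatedly answer: what's a likely word to come next?"""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(extrude("the"))  # plausible-looking text, with no meaning behind it
```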
What do you make of the fact that if I were to open
the email application on the phone
that's sitting next to me right now
and go to send you an email,
it would prompt me to use the technology
to write you an email, to write you a polished email,
to write you perhaps a better email
than I could write at this hour.
Yeah, I mean, what's a better email, really?
Right?
So.
But it is everywhere, right?
Like it is everywhere, not just in education settings,
but in our personal lives and professional lives as well.
Oh yeah, absolutely.
And that's because there's enormous amounts of
money behind this.
Huge swaths of the tech sector are bought into this.
And in order to get their return on investment, they need
to convince everybody that it's useful.
If we were able to train AI on data that was not, as
you said, perhaps stolen or at the very least, taken
without people's permission to help build the models.
And if we were somehow able to address the
environmental concerns, would it be possible to
collaborate with AI to make something that is
creative, do you think?
To produce something that is creative and perhaps park some of those ethical concerns?
So collaborate is one of these anthropomorphizing terms.
I would never say that I collaborate with my calculator to solve a math problem, for
example.
I think you could imagine a collective of artists, let's say, or writers who decided to contribute
their writing to a system that would then make
papier-mâché of it and they would sort of collectively
be collaborating with each other through that system.
So, and then assuming that it was done
without environmental impact and so on,
then yes, that would be okay as a kind
of an artist's collective.
The papier-mâché thing, you've talked
about this before, right?
That what people are creating isn't writing, that it's some sort of linguistic papier-mâché.
Yeah, I mean, what people create could be writing, but what comes out of these systems
is linguistic papier-mâché. And that's, I think, particularly important if you're looking at the
output of these things as a source of information, which it absolutely is not. The only information inside of ChatGPT
is information about which words go where
in its training data.
Can I go back to that idea of the tyranny
of the blank page, that there are people who would say
when they're trying to write, it is just too much,
and that if this prompt helps them get started
when they have been stuck,
that there is something beneficial to that.
It doesn't mean that you use it.
It doesn't mean that you use all of it.
And maybe you go back as a human being and revise
it and improve upon it, but that that's the jump-off point.
Is that not better than nothing?
So, you know, it's interesting.
Oftentimes people will say, well, we're using
ChatGPT because it's better than nothing.
And you hear that in all kinds of contexts.
So this context, but also someone who's seeking psychotherapy for mental health issues. It's like, actually,
in that case, it really is worse than nothing and frightening. But the question I always ask is,
okay, so why is the alternative nothing? What else could we create that's actually designed
for this particular purpose, rather than picking up the so-called everything machine that OpenAI
desperately wants us all to use?
And what would a special purpose system look like for getting over the blank page?
So if your blank page is an email that you've opened up, maybe you've got a couple of, effectively,
dice you could roll to get possible greetings and possible, like, opening sentences, for example.
And then the page isn't blank and you can go from there.
That would be a useful tool.
I'm not sure that would be a useful tool, but that would be a tool that you could test
in that context and decide, is it useful for this particular issue?
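Her dice-rolling email opener is simple enough to sketch. The Python below is a hypothetical illustration: the greeting and opener lists are placeholder content, and a real tool would presumably let users supply their own.

```python
import random

# A special-purpose "blank page" tool as described above: no language
# model at all, just dice to roll for a greeting and an opening line.
# These lists are placeholders, not part of any real product.
GREETINGS = ["Hi {name},", "Hello {name},", "Dear {name},"]
OPENERS = [
    "Thanks for your note.",
    "I hope your week is going well.",
    "I'm writing to follow up on our last conversation.",
]

def roll_opening(name: str) -> str:
    """Return a randomly chosen greeting plus opening sentence."""
    return f"{random.choice(GREETINGS).format(name=name)} {random.choice(OPENERS)}"

print(roll_opening("Emily"))  # the page is no longer blank
```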
As a professor, do you know whether any of your students are using AI?
So I am the faculty director of a professional master's program in computational linguistics.
And computational linguistics is the field that language models come out of.
So at the start of each year, we have a conversation about what these things are and how they relate
to our work and why they are not information access systems and why it doesn't make sense
to use them to do your homework. And I'm sure that some students do anyway. But my goal is not to be policing the classroom and
getting to absolute zero usage. But my goal is to help students understand what's going on and make
sure that as many of them as possible take advantage of as many learning opportunities as possible.
We've long worried about the implications for creativity and productivity when it comes to new technologies,
that this is going to corrupt us, this will corrupt us, this will do something. How is
AI different than any previous new technologies when it comes to its impact on our creativity,
do you think?
Yeah, so first of all, I never call this stuff artificial intelligence because that is a
misleading term that muddies the waters. But if we're talking about chat bots-
What do you use instead? What is the phrase that you use instead?
Well, it depends on what I'm talking about. So if I'm talking about things like ChatGPT,
I'll call them chat bots, large language models, synthetic text extruding machines.
Sounds like something that's making noodles rather than words and sentences.
Yeah. And that's kind of the point to sort of make fun
of them a little bit, but also to name specifically,
you know, this is a machine for extruding text,
as opposed to a machine for being creative or
getting information or something.
And also it's a different piece of software from
something that might be doing image processing
in the context of a radiology lab.
I know.
So calling that all artificial
intelligence really muddies the waters. I think here you're interested in the synthetic text
extruding machines and how is that different from other technology? How is it different from
spell check? How is it different from being able to type instead of writing by hand?
I think the difference is that all of the previous technological changes in writing or composing, because written language itself is the first of these technologies, all
of those were about different ways of expressing and sharing our own words.
And with the synthetic text extruding machines or these papier-mâché bots, it becomes a way
of appropriating other people's words,
we don't know whose, and it's being sold as,
this is an okay tool for you to use to make your words,
and that's just not true.
Do you worry that, I mean, the phrase is,
the horse is out of the barn.
Do you worry that you're pushing a heavy stone up a hill here
because this technology is everywhere,
and because, for better or worse, people use it as a means to an end?
Yeah. And also there's a lot of money behind the marketing of it. I know that I am
swimming upstream here, but I think the inevitability argument is also a move on
the part of big tech to convince us to give up our agency. And I refuse.
What do we do about that? If you refuse, and as I said, the title of your book is creating the future that we want.
How do you create the future that we want?
Well, first of all, I can't speak for the whole we, and neither can my co-author Alex
Hannah.
So when we say create the future we want, what we mean is empowering everybody to have
agency in their own lives, in their own communities.
And the vision of the future we have is one where technology is something that is made much more locally and controlled by the people who are using
it and having it hopefully not used on them.
In the meantime, it is everywhere though.
It is, but we can all, you know, opt out. You don't have to click the sparkles button. And you can also,
one of the things that we talk about in our book is
ridicule as praxis. That is when some brand, for example, uses a janky synthetic image in their
advertising, you can laugh at them and you can do it publicly and sort of help normalize valuing
authenticity and seeing the synthetic stuff as the sort of janky, labor-stealing, poor knockoff work that it is.
Before we go, can I just end with where you started, which is that it's sad to hear,
you said it was sad to hear in some ways, how prevalent this is in schools.
What does it do to our sense of education? Because there are professors who are using
this as well, not just to see whether kids are using AI, but they're using it to improve on their own methods of teaching as well.
Where does that leave how we think about education if this tool is there?
Yeah.
I mean, every time I hear about this in education, I know that people are coming from a well-meaning
place, right?
We all care about our students.
We care about them being prepared.
We care about being more effective teachers. And I'm sad that the marketing
from the open AIs of the world has led us to believe that this has to be part of it. And I
really wish that higher ed as a whole could refocus on nurturing critical thinking and nurturing the
relationships that happen in the context of education, where it's person to person and not
person to machine.
I'm glad to have a person-to-person conversation with you, Emily. Thank you very much.
Likewise. Thank you.
You've been listening to The Current Podcast. My name is Matt Galloway.
Thanks for listening. I'll talk to you soon.