Today, Explained - ClassGPT
Episode Date: August 12, 2024. Students are returning to college campuses this month armed with generative AI tools. One professor who has banned them and one who has embraced them explain why. This episode was produced by Peter Balonon-Rosen, edited by Amina Al-Sadi, fact-checked by Laura Bullard, engineered by Patrick Boyd and Andrea Kristinsdottir, and hosted by Sean Rameswaram. Transcript at vox.com/today-explained-podcast. Support Today, Explained by becoming a Vox Member today: http://www.vox.com/members Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
It's officially mid-August, which in my mind is peak summer, but for college students across the country, it's time to start thinking about returning to campus.
And for college professors, both me and others, it's high time to figure out what the official university policy is, what the department policy is, and what the individual classroom policy is around the use of ChatGPT to do student academic work.
Sounds kind of stressful, to be honest. Does not feel very summery.
And so going into this summer teaching my own class, I decided to write my own AI policy prohibiting AI in my classroom.
How professors are readying themselves for the robots this semester on Today Explained.
You're listening to Today Explains.
Is it Today Explain or Today Explains?
Explain-da.
Explain-da.
Olivia Stowell is a PhD candidate and an instructor at the University of Michigan.
Where I study media studies, specifically reality television.
She also teaches a course on reality TV, which sounds like a lot of fun,
but she's banning AI in her classrooms, which sounds a little less fun.
We asked her when this became an issue.
Yeah, it was fall of last year, so fall 2023.
I was the TA for a TV class with my advisor. And they had an assignment where students had to write about the social media response to a TV show of their choice.
How Vanderpump Rules rules social media.
Modern Family and the use of modern media to make modern points in a modern way.
Why Breaking Bad broke big bad social media norms badly.
I noticed sort of the repeated use of phrases and repeated kind of sentence structures.
Repeated phrases.
Phrases.
Phrases.
I was like, this does not feel like student writing to me.
I was pretty certain that a student had used ChatGPT and they ended up admitting to it.
So, confirmed case.
Here is something to get you started.
Good enough.
At that point in the fall semester of 2023, did you have a policy?
Did your class have a policy?
Did your institution have a policy?
There was no institution-wide policy and there still isn't that I know of. So professors are kind of setting their own.
And this is kind of a common thing at various universities where, you know, the university
will present a range of possible policies or suggestions for possible policies.
ChatGPT is the new kid on campus, and some professors are banning it from their classes.
Others are encouraging their students to test the limits of how AI can help.
Duke University is doing this, where they have kind of four levels of policy: totally prohibit, use with permission, use with citation, and totally allow are the four tiers. University of Delaware also has that same kind of scaffold available. And so at that point, in that class that I was in, the use of AI was prohibited, but
I didn't write the policy.
I was the assistant in that class.
Okay, so you, seeing this sort of gap in a policy, decide to fill that space with one
of your own.
Yes, yeah.
Tell us about it.
How did you write it?
What did you consult?
What did you do with it once you wrote it?
Obviously, as someone who is like terminally online and like also studies media, I was like aware of ChatGPT, right? And aware of the possibilities. But when it first came out, and like when DALL-E and all the other ones kind of first came out, I was like,
and I still kind of feel this way. My instinct was like, it will kind of be a flash in the pan
and then sort of be residual or recede kind of from the front stage of public conversation.
But when it became clear that that didn't seem to be the case for students, I felt like, okay,
I have my gut reaction, but I need to also be informed. You know, I spent a lot of time reading
a lot of academic papers, journalistic articles, media reporting, things like that.
And so I came to, after all of that research, my policy, which is that I prohibit AI.
And I sort of present students with five reasons that I think also represent kind of the five central ethical questions around AI for the university context.
A dramatic reading of Olivia Stowell's AI policy.
Reason one, this class is designed to improve your writing skills.
If you are not writing, you are not improving.
Really, the undergirding question there is
what's the purpose of college, right?
What's the purpose of being in the classroom?
To me, the job of the student is learning.
Rather than achieving perfection,
the goal is to learn process.
And so to me, if you're outsourcing that process,
there is a mismatch with the goal of what my classroom space is and what the student is doing.
Like if your task is to learn how to write an outline and you have something else generate an outline for you, there's kind of a problem.
So that's the first one, really. Like, are you learning, actually, if you have ChatGPT or other AI forms produce your work for you?
Reason two, using AI opens up academic honesty issues.
A lot of people didn't consent to have their work be used to train AI. Taylor & Francis, which is like a big publishing company, just recently signed like a $10 million deal with Microsoft to allow Microsoft to train its AI on Taylor & Francis publications.
I think our company signed one of those kinds of deals too.
OpenAI said it has inked licensing agreements with The Atlantic and Vox Media, the latest in a flurry of deals the startup has made with publishers to support the development of its artificial intelligence products. As part of the deals, OpenAI will be able to display news from The Atlantic and Vox Media, which owns...
Yes, yeah, exactly. Right. And like, I have an article in a Taylor and Francis
journal, and I didn't get to say yes or no. Like, I didn't get to opt out on having my academic work,
which is the product of years of research and study. You know, I didn't even have the chance to
say no. And beyond that, I don't see any of that 10 million, obviously, you know.
And so there's all of these kinds of questions about compensation, about consent.
And beyond those kinds of larger questions of plagiarism or theft or, you know, access and training,
there's also the issue of like if a student misrepresents ChatGPT's work as their own,
that's kind of a different kind of plagiarism and theft.
Reason three, using AI does not produce reliably accurate results.
When like ChatGPT generates like a fake paper that doesn't exist
and cites it as a source in like a student essay that it generates,
this is a problem.
And some people call this like an AI hallucination.
But what I actually prefer, there's a good paper by Michael Hicks and James Humphries and Joe Slater that says that it's bullshit.
Their definition of bullshit is a claim without regard for the truth.
That ChatGPT mainly works to just produce writing that seems human-like, and it has no regard for the truth or accuracy of those claims.
And all three of those, I think those first three reasons are really kind of about the
space of the classroom itself in some ways.
But I also have two reasons that are more about sort of the social context of the world
in which we live.
Reason four, ChatGPT has serious negative environmental impacts.
Microsoft's water usage went up like over 30% from 2021 to 2022.
And journalists and researchers think that that's because of the development of AI,
because a lot of water is needed to cool the AI systems, especially in summer.
So then there's also a global warming intersection that as the planet gets warmer,
the systems are more inclined to overheat, which means you need more water to cool them.
So it's kind of a circular sort of issue.
Reason five, OpenAI and other AI companies have exploited workers.
Time did a study about workers in Kenya.
Washington Post did a study about workers in the Philippines. There's lots of others about the really underpaid
and exploited workers often concentrated in the global South, but also in refugee camps and
prisons around the world elsewhere, where these are the people who are doing the image sorting
very often, and often for very, very little money, below minimum wage, certainly below U.S. minimum wage, but even sometimes below the minimum wage in their own countries.
Washington Post, if I'm remembering right, called it digital sweatshops.
And so there's all of these people in the global south upon whom AI depends.
And so I think there's an illusion that AI is this human-less machine. But actually, the human-less machine runs upon a ton of invisibilized humans who are, you know, struggling to make ends meet. So that, you know, university students in the global north can what? Not write papers? It just seems wrong to me.
What did your students say when you finally introduced it to them? Did you do that this summer, or are you waiting until the fall semester?
Yeah, I did this for my summer class.
So I'm currently teaching a class this summer
that has about 25 students in it.
And so on the first day of class,
I was going over like all the other classroom policies,
like, you know, here's what the assignments look like.
Here's how participation is graded.
And here's why we're not going to use AI this summer.
And they were really, really receptive, actually. Yes, they were. And I
was really sort of excited by that. I have one student who's a computer science major. And,
you know, they were like, I don't even know what I think yet, but I like talking about why.
I'm curious, you went for this outright ban. You're taking this sort of hard line approach.
Yeah. AI tools completely prohibited in your class.
You know, I'm a professional.
I fraternize with fellow professionals.
And a lot of professionals I know use these tools in their work to write emails, to synthesize information, to look for potential people to talk to for a project they're working on, whatever it might be. Did any students want to make the argument that by making this sort of ban outright,
you were ill-preparing them for the work they might have to do one day in a professional space?
No students made that argument, but I think that's an argument worth considering.
I think that for me, and also,
I, you know, I want to be open to revising my position, but I think that part of the difference
there is college is the space to learn. This is the space where you want to learn how to write
an email. You know, this is the space where you want to learn how to set up a meeting. This is
the space where you want to learn how to find sources. Part of the difference there is like,
you know, presumably a professional who uses, you know, some kind of AI technology to
write an email actually does know how to write an email. Whereas some students come into, you know,
college not having ever learned how to do that. And so those are sometimes things we talk about
in writing classes. And, you know, I'm not naive. I don't think my students are all going to never use AI again or something after my class.
But I hope that in their use of it, they are more informed and not, you know, just using things uncritically.
Do you think there's a day in the near distant future where AI feels more ethical and you might be more willing to let students use it in the
classroom? I think that's possible. I think right now for me, part of it is that I feel like I've
yet to encounter a benefit that seems to outweigh the costs. And so that's part of it for me right
now is that there's a lot of downsides and very few upsides. I'm hopeful that we do build a better world and that that better
world might include the tech we have now. And so, like, I think my policy has a hardline stance,
but I don't think I have a hardline stance that's set in stone for all time.
That was Olivia Stowell. It's hopefully not too late to register for her class this fall at the University of Michigan in Ann Arbor. When we're back on Today Explained,
we're going to hear from a professor who wants to give the bots a chance in his classroom.
Today Explained is back with Dr. Antonio Byrd.
He's an English professor at the University of Missouri, Kansas City.
But he's also been thinking a lot about generative AI tools in the
classroom because he's on a joint task force of college professors from across the country
who are trying to develop standards around writing with artificial intelligence.
My own approach is really to give students a little bit more agency in how they want to use language models. I believe that students,
they have a right to learn about different types of writing technologies. If it's like a podcast,
for example, they should know how to be able to make a podcast because that is a type of writing
that happens there. But I think it's also really important that students kind of understand the risks and
rewards that do come with using language models. It's kind of like with using the internet.
Like these tools are pretty much everywhere. They are present. Right now they're present
even within Google Chrome. If you right-click and try to copy and paste, you'll see an option called Help Me Write, which is their version of Gemini being able to produce text for you, or, as it says, help you write.
We are taking the next step in Gmail with Help Me Write.
Let's say you got this email that your flight was canceled.
You could reply and use Help Me Write.
Just type in the prompt of what you want,
an email to ask for a full refund,
hit Create, and a full draft appears.
So because it feels a little inevitable
that students will come to use these tools,
I think you're going to see a
generation of students who are aware of generative artificial intelligence, and they're kind of
looking to faculty for some real guidance. How can I use this so I can be competitive
on the job market so that way I know that I am up to date on what some of these employers are
really looking for?
I think some people in our audience might be surprised to hear that you, an English professor, are on board with AI in the classroom.
Could you tell us, I mean, what that looks like in your class?
Have you integrated AI into your courses, I don't know,
this past summer, this past spring, this past fall?
So one of the things that I do is that at the very start of the class, I have a video that I
created where I explain to students the very basics of generative artificial intelligence.
So you should have a right to know what generative AI is. You should have a right to know also how it is actually really dangerous.
And that's something that I have done here in this video.
Then throughout my classes, I give students the option to use it.
Not copying and pasting the language that's generated from AI, but using it to think about your creativity,
to think about your critical thinking, to produce your own words. So that way you can try to be more
effective as a thinker on the page. So there's this online class that I teach for first year
students. It is the second course in first-year writing, where they learn research methods. And so I tell students,
you could go to the library database and you can do keyword search, very old school, traditional
way of looking for sources for your literature review. Or here is this tool called Research
Rabbit. You can go to Research Rabbit and you can put in just one PDF article
that you found, and it's going to generate other articles that have cited that article or are related to it. And it shows up as a map on the screen. So you can kind of visualize the different types
of articles that might be useful for your own research.
But there is the caveat.
You actually have to read those articles.
You can't just pull out the citations and start putting them into your paper. You need to know if they really fit with the argument of what you're actually doing.
And this Research Rabbit is not actually a rabbit. It's AI?
No, it's not like a rabbit at all. It is exactly AI-empowered. I would even say it's kind of like Google Scholar. When you do a search on
Google Scholar, sometimes they list an article and they will tell you how many other people
cited that article. When ChatGPT arrived and became, you know, inescapable,
it felt like a lot of people were declaring the personal essay dead. Are you at all afraid that
students might go a little further? Oh, you know, I'm using AI in this class, you know,
for research. Maybe I'll just, you know, turn in a paper that was written by a robot too.
Yeah, I think one of the issues with that, though, is that language models still have a particular kind of wording when they're actually doing their output, that writing teachers, many of them, kind of start to pick up that there's something kind of off or odd about this. Like there are no real specific details
that you would usually see if someone was actually telling a story of some kind. So,
you know, students might be able to use that, but the language is very distinct, I think, when you're actually
doing those types of outputs.
So you're saying you're not too worried about AI fooling you?
You know, that's a really good question. I think when it comes to students,
if they haven't taken the time to really work with the language or work with that artificial
intelligence, then the language that they
give teachers will be kind of glaringly obvious, like, this doesn't sound like you at all.
So that's what I'm thinking where we are right now.
You know, it's clear that colleges and universities across the country, maybe across the globe,
have not yet figured this out, that it's sort of the Wild West, that professors are trying
to come up with
their own policies for their own particular classes. What do you want to see more of as,
you know, universities across the planet try to figure this out?
I think the biggest thing that I would like to see is a lot more,
I guess, kind of consistency on what we message to students. Because, say, within the same hallway, a student can go down to one class and that professor says, AI is completely banned.
And then that student can go two doors down to the next professor for a different class.
And that professor says, use AI. Here's how
you can use it. It's a great technology. Let's go for it. So when students get this inconsistent
messaging, they're not entirely sure, like, how am I supposed to think about AI and how it works
with my writing and how I can actually learn with it. I did have one student who took that research methods class.
And when she saw that here is an option on how to use AI,
she really appreciated that she was having a professor
who kind of tells them, here's how to think about it.
Here's the guidance on it.
Because for so long, all she really heard was, don't use it. Plagiarism is a real problem. I'm going to ban it. So some consistent messaging for students is really important when they go from class to class.
Dr. Antonio Byrd, English, University of Missouri, Kansas City.
He's also on the MLA-CCCC Joint Task Force on Writing and AI.
It's really called that.
I wonder if AI came up with the name.
Our show today was made by Peter Balonon-Rosen with an assist from Toby.
We were edited by Amina Al-Sadi, fact-checked by Laura Bullard, and mixed by Patrick Boyd and Andrea Kristinsdottir.
I'm Sean Rameswaram, and this is Today Explained. Thank you. Bye.