Science Vs - Is AI Making Us Stupid?
Episode Date: December 18, 2025

AI tools like ChatGPT have taken the world by storm, with tons of people saying they use them regularly. This is especially true for students, many of whom say they use AI to get their schoolwork done. And this is freaking some of us out — we're hearing that jumping on the AI train could be a terrible idea, partly because of claims that these tools could be bad for our brains. So — are we outsourcing too much of our thinking to the bots? Will our brains turn to mush? Or can we use AI to boost our brainpower? To find out, we talk to Dr. Shiri Melumad, expert in the psychology of technology, and Dr. Aaron French, expert in information systems.

Find our transcript here: https://bit.ly/ScienceVsAIStupid

In this episode, we cover:
(00:00) Is AI ruining or boosting our brains?
(02:45) How often are LLMs like ChatGPT wrong?
(05:01) Do LLMs mess with our ability to learn?
(19:26) Does using AI make us more productive?
(24:33) Another example of a technology that freaked a bunch of people out
(27:40) Can using AI help us learn?

This episode was produced by Meryl Horn with help from Ekedi Fausther-Keeys, Michelle Dang, and Rose Rimler. We're edited by Blythe Terrell. Our executive producer is Wendy Zukerman. Fact checking by Erica Akiko Howard. Mix and sound design by Bobby Lord. Music written by Emma Munger, So Wylie, Peter Leonard, Bumi Hidaka and Bobby Lord. Thanks to all the researchers we spoke with including Daniela Fernandes, Dr. Marcin Romanczyk, Professor Michael Henderson, Dr. Tim Zindulka, and Professor Vitomir Kovanović. Special thanks also to Sebastian Peleato, Chris Suter, Elise, Dylan, Jack Weinstein and Hunter. Science Vs is a Spotify Studios Original. Listen for free on Spotify or wherever you get your podcasts. Follow us and tap the bell for episode notifications. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Hi, I'm Meryl Horn, filling in for Wendy Zukerman.
You're listening to Science Vs.
This is the show that pits facts against frying our brains with AI.
Today on the show, is AI destroying our ability to think?
A lot of people are saying, yeah, it is.
And they say that we've got the science to prove it.
It's been medically verified that ChatGPT makes people stupider.
GPT users could not even remember what they wrote.
They basically stopped thinking entirely.
Every single time we give it a prompt, our own brain cells are burning.
And if this is true, we might be in for a brain cell bonfire because a lot of us are using AI.
One recent survey found that 62% of adults in the U.S. say they interact with it at least
several times a week.
And AI really seems to be booming for students.
A bunch of surveys suggest that around 80% of high schoolers and college
students say they're using it for schoolwork. And nobody's really worried about this. Obviously,
I'm kidding. People are panicking. We have no idea how our students will ever learn anything
or whether universities have any future. One professor graded papers and discovered every single
one was AI generated. The whole system is cooked. Cheating is the new major. Game over. It's just
game over. But there is a flip side here because some people say that AI is going to be good
for us, that it can help us save time by breezing through busy work, get information for us,
and accelerate scientific progress. It can do away with a lot of the really annoying, sort of tedious
tasks. It's helped me boost my productivity by, like, 300%.
AI is driving a transformation across all fields of science. Buckle up, tech enthusiasts. The future
with AI is bright and it's happening now. Wait a minute. Buckle up, tech
enthusiasts? That last one was AI. How did it get in there? But seriously, who is right here? Could
AI help us be more productive and lead to a boom in science? Or is using AI the equivalent of
shoving your brain into the microwave? So do we stop using AI so our hippocampus doesn't turn into a
hot pocket? Buckle up, science enthusiasts, because when it comes to AI, a lot of people are saying
ChatGPT makes people stupider. But then there's science.
And full disclosure, some AI companies do advertise on Science Vs.
Science Vs AI is coming up after the break.
Welcome back. I'm Meryl Horn, and today we're going to look at whether using AI, stuff like ChatGPT, is bad for our brains.
I have senior producer Rose Rimler here.
Hi, Rose.
Hi, Merrill.
So you recently mentioned to me that you have been using ChatGPT more and more these days?
I guess that's true.
Am I under oath?
I'm curious, like, do you feel like it's changing how your brain is working?
I do sometimes catch myself about to ask ChatGPT something that I could do myself, and I'm just too lazy.
Do you want to see what you've used ChatGPT
for recently? I asked how many Roma tomatoes equal 500 milliliters of pureed tomato.
How to clean my velvet armchair; I spilled milk on it. I asked it to tell me if a couple
of bottles of wine I had on Thanksgiving were dry or sweet. I think, dry or juicy.
Would you say it's like your go-to now for, like, just looking up info online? Yeah, it is
kind of becoming my go-to. It is. For me also, and I've been, like, curious about what
kind of problems this might lead to.
So let's just address one thing first.
We do need to acknowledge that one big problem with this sort of AI
is that it can get stuff wrong.
I actually did try to find some numbers on this.
Like, how often is it wrong?
And when ChatGPT first came out, it looked pretty bad.
Like a lot of the research was finding that it was only right,
like roughly half the time.
I remember that from the early ChatGPT exploratory stuff?
Yeah, it made up a lot of stuff. And now there are a couple of reviews which compared ChatGPT 3.5 to ChatGPT 4, and they do find that it's gotten better. But it's still not 100% accurate, so, bottom line, it's just hard to tell whether it's telling you BS or not.
Yeah, that's true. And so, yeah, I do feel like that's one way it could be making us stupid: just by, like, feeding us incorrect information.
Yes, just like the government.
Yeah. But now on top of that, there's this other fear that it's, like, bad for our brains to be using stuff like ChatGPT or other LLMs, large language models.
Like if we let these things do a bunch of the thinking for us, then we'll lose our ability to think on our own or even be creative.
So let's dive in to all of that.
Okay.
And I want to start with this one study. It was done by Shiri Melumad. She's an associate professor of marketing at the University of Pennsylvania.
And Shiri looks at how tech is changing us.
When did you first get the idea to start looking into AI?
Yeah, I mean, as a person who studies technology,
it was difficult not to study AI, right?
It's pretty prevalent at this point.
So, yeah, Shiri just published a huge study on AI.
Altogether, it looked at more than 10,000 adults.
And the goal was to see what happens when we use LLMs like ChatGPT
to try to learn something new.
Like, how does it compare to an old school Google search?
So here's what she did.
First, she got some people to do a fun little research assignment.
I told them to imagine that a friend came to them asking for advice on the topic.
So, for example, how to plant a vegetable garden.
Other times, they had to research something else, like how to lead a healthier lifestyle.
And so they did some research on this thing, and half of them had to use ChatGPT for this,
while the other half had to do a normal Google search.
Like, no AI summaries, just links.
And their next job was to write up a little blurb
based on the stuff that they had just read,
as if they're writing up advice for that imaginary friend
who needs their help.
And this is what Shiri was really interested in
because she wanted to see if the advice was any different
when people used Google versus ChatGPT.
And it was.
So when they used ChatGPT,
the advice they came up with was sparser. It was more generic, and it referred to fewer facts
after participants learned from an LLM versus web search. That's interesting. Yeah, so let me play
you some examples so you can hear for yourself, like, what it sounds like when Shiri says the
advice was more generic and referred to fewer facts. Okay. So this first one was written by someone
who used ChatGPT to research how to lead a healthier lifestyle.
Basically, you want to eat better foods and limit sugar and processed foods.
Get at least 30 minutes of exercise a day.
Stay hydrated and also check with your doctor as well.
What do you think?
Wow, it's like a lyric poem.
Yeah, you're impressed.
I'm so inspired to live a healthy lifestyle.
Then she played me another one, which for me helps see what she meant by like the genericness of it.
So let me tell you.
There's one more from this ChatGPT group.
Having a balanced diet, exercising regularly,
staying hydrated by drinking water,
getting enough sleep, and avoiding stress
are ways to live a healthier lifestyle.
To get more details, ask ChatGPT.
Yeah.
And this wasn't just copy and pasted.
This was a human.
Remember, they did the research.
Somebody read that.
They actually wrote,
just ask ChatGPT.
Uh-huh.
Yeah.
So, yeah, now let me play you an example from someone who used the old school Google search for their research.
So there's no AI summary.
There's just a bunch of links to click on, and they could look at as many websites as they wanted.
And again, the prompt is how to lead a healthier lifestyle.
So here is Shiri with one of the responses.
Start with focusing on the outer, then inner of your body.
It's recommended to be active, a minimum of 30 minutes most days.
of the week by engaging in healthy movements such as walking, riding a bike, yoga, sports,
or even dancing. From there, the inner workings. Make sure to stay hydrated with at least
eight glasses of water a day, avoiding sugary drinks, and focusing on consuming a well-balanced
diet. Add a variety of foods to your diet from vegetables, fruit, seeds, and whole grains,
while avoiding foods high in sodium
and also avoiding foods high in saturated fats.
Make sure to get plenty of sleep each night,
at least eight hours or so,
always wear sunscreen before sun exposure
to limit the chance of skin cancer.
That's more engaging for sure.
It's basically the same advice,
but it does sound like a human wrote it.
Yeah.
Even though I know humans wrote the other examples too,
it just doesn't sound like they did.
Exactly. Yeah, it sounds more human,
even though it had information in it that is wrong,
like the eight glasses of water a day thing.
That's not actually real; that's not based on science,
if you remember our hydration episode.
Like, there's nothing magical about having eight glasses of water a day.
Yeah, that's true.
But even with that, it still is just, like, more charming,
like even with the flaws.
And when I took a look at the other examples that Shiri sent me,
a lot of them were like that.
Here's Shiri.
It's unique to the writer, right?
It really doesn't come off as generic as the ChatGPT pieces.
Yeah, and it sounds like they're kind of having fun with it.
I bet they're so excited when they thought of that inner versus outer.
That doesn't really make any sense, but I guess they're having fun.
I mean, yeah, it's brilliant at the same time.
Yeah.
Yeah, it does read at least like the writers sort of put more of themselves into the advice.
And in Shiri's experiments, she also asked the people who wrote the advice how they felt about it,
and the group that used ChatGPT felt like they learned less
compared to the people who used Google.
And then she also showed the advice
to a different group of strangers,
and people basically liked the advice from the Google group better.
They said they were more likely to take the advice
and said that it was more helpful, more informative.
Okay.
Shiri, by the way, she also did the same thing
with Google's AI Overview,
and the results were basically the same.
In both cases, the advice kind of sucks
when people use large language models,
whether it's ChatGPT or Google's AI Overview.
Mm-hmm.
Wow.
The difference between how much people felt like they learned,
it wasn't huge, but it was statistically significant,
and Shiri found it again and again with different groups of people.
So it does seem to be real.
Mm-hmm.
Okay.
And if you think about why this might be happening,
why people aren't learning about the topic as deeply,
it does sort of make sense
because when you use something like ChatGPT for research,
you are probably skipping over a bunch of steps. And it turns out those steps are actually
pretty important when you're trying to learn something new. It's the process of going through the
links yourself, reading them, you know, digesting them yourself, interpreting them,
that leads you to at least feel like you're learning more. But also we still find these
differences in the content of the advice that they write, which suggests it's not just like
the sense of learning. It's actually differences in learning. So the important
part is doing the work of actually getting information from these different sources and then
synthesizing it in your brain. Exactly, because essentially these syntheses that LLMs provide
are transforming learning from a more active to a more passive process, and that's what we're losing.
And we reached out to these companies, Google and OpenAI, which makes ChatGPT, to get their take on this.
OpenAI didn't get back to us, but Google told us that the AI overview is supposed to
just be a jumping off point because you do still get those other links. But moving away from
the study, there is other research that backs all this up and sort of gets at what might be going
on in the brain. So this study kind of went viral. They looked at people's brains when they're
using ChatGPT. It's called Your Brain on ChatGPT. I don't know if you remember this one.
Yeah, I think I saw that headline. Yeah. Yeah, it's just a preprint. So, like, it's pretty
small, so take it with a grain of salt. But it was interesting. So they got around 50 people and then
used EEG to measure people's brain waves. Those are like the little electrodes on your scalp.
Yeah, exactly. Then some people used ChatGPT to write an essay and other people used Google or
just their own brains. And they found that when people were using ChatGPT, brain connectivity was
the weakest, which is sort of a measure for how much different brain regions are talking to
each other. Wow. They could actually measure that. Yeah. And so it just seems like maybe people are
just less engaged when they're using ChatGPT. Yeah, that makes sense to me that you need to
do some of the like trying and failing and then succeeding to make these connections work
in order to really remember and process and add your own thoughts to what you just read or learned
about. Well, it's funny you should bring up memory, because there are also a couple of studies,
preprints, that have found that memory can also be worse when we use stuff like ChatGPT.
So, yeah, if people use ChatGPT to write something, they'll remember less of, like, what was in
that work when they use AI. Yeah, I mean, I think sometimes that's okay because I might be using
it because I don't really care that much to learn it myself or, you know, or like, whatever,
I don't really need to, like, commit this to memory for all time. You might not care about
how to tell the difference between a juicy wine and a dry wine for, like, your life in general.
And if I forget the answer, I'll just ask ChatGPT again or look back at my previous question.
Yeah. Yeah. Okay. So, but what she's saying is that it messes with our learning process. And that is scary to think about when we know how many students are using it.
Exactly. Yes, which they are, right? Yeah. Which is like a lot of what I hear about. And I think it was in the beginning, like, oh my God, the students are using ChatGPT. They're not going to learn anything. And that's bad because, like, that is what you're supposed to be doing at that age. And
It's just like learning, learning, learning, learning.
Yeah, exactly.
And a big idea is that this learning forms the basis for critical thinking, you know, doing research,
putting together your thoughts in a meaningful way.
And it does look like maybe these LLMs are getting in the way of that.
Like there was a big survey of Australian university students that asked them what they use AI for.
And most of them said that they were using it to do stuff like answer questions for them
or create text that I can use.
So, like, that all doesn't seem great.
Create text I can use, a.k.a. cheat.
They're admitting to it.
And so Shiri is also worried about this, about how AI will affect the next generation,
since most adults, like, we didn't grow up with this stuff.
We had to figure out how to do our own research and write essays on our own.
At least you and I have that foundation,
but I'm really worried that younger generations won't
be able to establish those foundations because it's so tempting to outsource all of that work
to LLMs and AI in general.
And do you think that we have cold hard data that says that students are getting worse at that yet?
It's hard because you need longitudinal data and these things are only introduced fairly recently.
But I do think that we currently already have data that is at least pointing directionally
at what the effects are going to be.
I mean, people were cheating back in my day, too.
Yeah.
But now they can do it even better.
Yeah.
And just one more science tidbit,
if you want to get a little bit more freaked out
about all of this, Rose.
Yeah, let's just face it.
Finish it off.
So there's this fear that AI will lead to something called deskilling,
make us forget how to do things that we once knew how to do.
The thought I had when you said deskilling was,
There's a term for if your skin ever gets peeled back off something. It's called de-gloving.
Oh, your hand.
Sorry, I'm so sorry.
This is the equivalent for your brain because the idea is that like, yeah, we'll just lose these abilities thanks to AI.
And I did find a study that looked into this.
So it was on a group of doctors doing colonoscopies.
And they started using AI to like find little spots on the inside of the colon that could become cancer.
But then for the study, the researchers had them stop using the AI.
And it turned out that the doctors were then worse at finding the little spots on their own
compared to before they ever started using the AI.
Uh-oh.
Yeah, right?
Not great.
Right, like, what if they can't access it?
Or if something was wrong with the AI and worst case, doctors might start missing these little spots
and those can eventually turn into cancer.
That would be the worst-case scenario.
Yeah.
Yeah. Interesting. I mean, it does all make sense, because it's like you get better at what you focus on, and you forget what you don't focus on. And that's supposed to happen, because your brain can't retain everything.
It's efficient. Yeah. Otherwise you'd have these, like, gigantic heads. You'd have to have expandable skulls. So of course. Of course that would happen.
Yeah. And now I do, though, think, when I'm doing something with AI, I'm like, do I care about losing this skill? Because sometimes I don't really mind. But other
times I do. So it's helpful when I think about that. Yeah, I guess it's like if it's something I
really want to improve or get better at or retain, don't outsource so much of it to AI.
Yeah. So that's sort of the science that I found that supports some of the fears around what
AI is doing to us. But next, let's look at the counter argument here. Okay. The claim that we have
lots of gain from using AI, that it can kind of take care of the easy stuff so our brains can do
the hard stuff and ultimately use AI to do more and better stuff than we ever could have done
without it. So that's after the break. All right.
Today, we're finding out whether AI is making us stupid or smart.
Rose Rimler is here with me.
Hello, Meryl Horn.
Oh, no.
I have been replaced by AI.
Already?
No, I'm still human, and I'm ready to hear why AI is good.
So, yeah, we've talked a lot about the scary science,
but I also wanted to understand the pro-AI arguments.
So I called up Aaron French.
He's an assistant professor of information systems at Kennesaw State University in Georgia.
He's also on the advisory board of an AI company.
And he was basically like, the effects of AI are going to depend on how you use it.
For some people, it's absolutely going to make them, I guess, I don't want to say dumber,
but they're not going to learn or improve because of that.
Other people, they're going to be able to do more with AI than they were able to do without it.
So you could either use it as like a crutch or like an enhancer.
Is that the idea?
Yeah.
So one obvious way that it
could enhance our work is that AI could take over the mindless rote tasks for us.
What are the repetitive tasks that consume a lot of time that I don't need to be doing?
And can I use AI to handle those tasks that allow me to spend my time in a more valuable way?
Just the busy work?
Yes.
So, yeah, the idea is that once AI does the busy work, the human brains could swoop in
for the harder, more complicated parts of the task, you know, critical thinking and analysis.
And so there is science that does sort of back up this idea
because it does look like AI can save people time, for one.
Like, there are studies finding that this is true for all sorts of careers.
So dietitians, computer programmers, people who run clinical trials.
The effects were pretty big in some cases.
Sometimes it took people 30% less time to do something, thanks to AI.
Sometimes it was 80% less time, depending on the thing they were doing in the study.
Okay.
And an obvious caveat here is that, like, for this to work,
the AI has to do a good job at the thing it's replacing.
But we're hearing that in some cases it is helping.
Teachers in particular have said that they can save a lot of time with AI.
One survey found that some teachers who are using it were saving an average of six hours a week.
What are teachers using it for?
Putting together materials for class, sometimes grading.
Basically, it's the type of stuff
that teachers say is just hard to get done during their normal hours.
And Aaron's like, yeah, if you can get AI to speed some of this stuff up, it could really help.
As a professor, if I can use AI to accurately grade, that would be great, because instead of
spending five, ten hours a week grading assignments, if AI can do it and provide proper
feedback, I can spend that time working with the students, giving them more engagement.
Teachers do always talk about how much time they have to spend grading, right?
And so, like, Aaron spends this saved time engaging more with the students.
But this does raise a question, which is, like, are most people actually going to spend the extra time doing stuff like that,
like doing their jobs even better or engaging with the world in a meaningful way?
Or are we just going to use it for, like, mindless scrolling or watching Love
Island? Well, that's your prerogative. You can do whatever you want to do with your extra time.
So judgy. Well, but it's like for the purpose of this episode and whether or not AI is going to
like lead to this like, you know, new and improved humanity, it might not do so well on that front
if we're just scrolling all the time. Like, pick up the slack at some, like, rote part of your job
and then go home and invent a new kind of flying machine, like, you know, a Da Vinci or whatever,
instead of just lying on the couch, looking at Instagram.
It feels very likely, right?
And I went looking for a paper on this.
Like, how are people actually spending the time they saved with AI?
And the only thing I could find was this one early study, not peer-reviewed.
In it, they got 83 managers.
So these are people like vice presidents and C-level executives.
And first it asked them, do you save time because of AI?
And the vast majority said yes.
It was almost three hours a week on average.
And they did say that they often used the time to do stuff like,
continue working on my tasks or take on additional projects.
But a lot of the time was also wasted.
And they admitted that?
Yeah.
So within this group of managers, 36% of them said they wasted at least half of the time that they saved.
Okay.
Wow.
And they asked a random group of adults about this too and got similar results.
Hmm, which feels pretty on point.
Yeah.
As long as our bosses don't find out that we have all this extra time,
we can just use it for watching Love Island.
So let's just keep it quiet.
Just stop pulling out these surveys.
Honestly, people, what are you doing?
But there is one other thing, a different way,
that AI might actually make us smarter.
Okay.
The idea is that AI will open up new doors
and let us do things that we would have never been able to do without it.
And the scientists who think that this is possible often bring up an analogy, the calculator,
which does have some interesting parallels to AI, because on the one hand, you can imagine that maybe people will get worse at math because, you know, it's doing the work for you.
But on the other hand, maybe we'll be able to do more things like harder math because of it.
And when calculators were first introduced to classrooms back in the 70s, it generated a controversy
similar to the one AI is generating now. Like, some people staged protests.
Some math teachers were worried that young kids would get hooked on calculators to do basic math problems.
To write BOOBLESS.
What?
Yeah.
Over and over and over again.
I forgot about that.
And they even had a name for these students who were calculator dependent: calculolics.
Oh, my God.
Yeah.
So the specific fear was calculolics would be so dependent on their calculators they could no longer add, subtract, and divide with paper and pencil or in their heads.
Exactly.
Yeah.
That we would lose those skills because you'd just type everything in and we would never learn anything.
So that's cool because we've actually had plenty of time to test if that has happened.
Yeah.
So we have science on this.
And so I look to see like, okay, what did happen?
Yeah.
And it seems like the answer is everything was basically fine.
Like there is a meta-analysis of over 50 studies on this.
Oh.
Yeah, which looked at what happened when kids started using calculators in the classroom.
And it found that, first of all, kids' basic math skills didn't really get worse.
Like, if they got to use the calculators for learning, but then they took them away for a test,
they didn't do any worse on that, like, pen and paper test.
And if they got to use the calculators for both,
both learning and testing, they showed improvements.
So their problem solving got better.
The graphing calculator led to improvements in visualizing things
and understanding graphical concepts.
And some studies found that kids' attitudes towards math were better
when they got to use the calculator, maybe because they got to write BOOBLESS.
Or because, you know, these were kids who have some interest in mathematical stuff,
but not arithmetic, which is basically just, like, two
plus two. And that's the tedious part. And so they could outsource that part and dig into like
imaginary numbers and trigonometry and stuff that gets more interesting. Yeah, exactly.
And so yeah, overall the review said that the science supports using calculators in elementary
and high school classrooms because our basic math skills didn't seem to get worse. And now we can do
harder math since we can use it as a tool. Is there anything we can say, though, about AI
specifically in the classroom because it's pretty new and it's different, you know, from a calculator.
Well, like we said in the first half, like some academics are worried that if we use AI too much,
you know, it'll kind of inhibit learning. But then there's also a ton of papers which show that
it might be helpful in the classroom and in particular ways. So like there's tons of studies
that just try having, like, ChatGPT give students feedback for, you know,
writing something, and they'll find that, like, look, it can help them polish their
writing. And there are some researchers that are making, like, customized chatbots to help
students with specific learning disabilities, and they think that there's a lot of potential there.
And then there are some studies that have students use ChatGPT kind of for good, like, as a tutor.
So rather than just having it give them all the answers, they'll just have it help them learn how to do
like a particular kind of math problem.
And then, like, it's okay
at that. Like, it can do as good of a job
as using a textbook to learn math.
so like
there is potential here
and Google told us that they're
working on tools like this too
I mean
sure but also
all those examples are
the stuff that teachers
are supposed to do
you know like human teachers
that's kind of the point
I guess theoretically
but often they're spread so thin.
You know, I think the idea is that this can help them address all their students' needs
when, like, they often just, you know, don't have enough time to do that.
Yeah, fair enough.
So, yeah, maybe I think the dream is that this will be like a helpful tool in the toolbox.
Yeah.
And maybe, you know, yeah, we're just spinning out here over nothing.
And then there are people at like the top levels of academia who are using it to solve all sorts of problems.
so not just talking about LLMs, but AI more broadly, like machine learning.
So, for example, in biochemistry, AI is really good at predicting protein shapes
based on amino acid sequences.
And in physics, it's been used to help find black holes and analyze data from particle collision
experiments in real time.
And according to a survey of 1,600 scientists, more than half of them said that
AI tools will become very important or essential for their fields
over the next decade.
It makes sense to me that there are going to be use cases
where you're not going to have a physicist sift through 200 million images
of the far reaches of the universe looking for black holes.
That's not even humanly possible, and it's certainly not possible
in the span of getting a PhD or something.
So it makes sense to have the computer do it.
And there's lots of things we outsource to computers or machines
that we just generally feel as a net good.
So, yeah, I'm convinced that there is a role for AI to be positive in personal development and, like, human development.
Yeah, yeah, I agree.
For me, though, as I think about all of this research, the main thing that I've been finding myself worrying about is that homogenization effect, the fact that what we create is more generic when we use AI to do it.
Like that really boring "stay healthy" advice.
That really stuck in my brain.
And so now I'm like trying to avoid using it for anything too creative
because there's so much now about how AI can be used for creativity
in all these different ways.
And I'm always now like, oh, I don't know if I want it trying to do the things that
I care about sounding like me.
But I do, I still use it just for like looking up stuff that I don't really care about
that much.
Like, all of the academics that I talked to for this episode,
including the ones that found negative effects from it,
still use AI. Like Shiri, our scientist from the beginning.
She's like, you know, if you just want to look up something quickly
and you don't really care about getting a super deep understanding of it,
use an LLM.
Yeah.
It'll make your life a lot easier.
But to the extent that you actually care about learning more deeply about something,
you should really try to avoid starting off your research with an LLM,
because it's too tempting to stop with the synthesis that it's provided.
I even see it in myself. I've studied this stuff. And when I start with ChatGPT
to learn about something, I find it really hard to motivate myself to keep learning more.
So, Meryl, let me ask you, are you worried? I mean, you have little kids, too. Are you worried
about this generation coming up? Are they going to be stupider, less skilled than, I don't
know, us or previous generations because of using AI, especially using it in school instead of
doing their homework? I don't know. I'm not that worried yet. Like, a couple of decades ago,
people were freaking out about the internet making us stupid, you know, Googling stuff too much.
But I think we're still okay. And, you know, even beyond that, humans have existed in so many
different contexts throughout history when we were never learning how to, like, write essays in school
about Pride and Prejudice. That's true. Like, were we
really all totally stupid when we were cavemen? I don't think so. I think we were probably okay.
For whatever that context required of us. Yeah. Yeah. And now there'll be a new context because
AI will, I guess, change the world in some way and everything will just be different. Right. Yeah.
All right. Thanks, Rose. Thanks, Meryl. That's Science Vs. This episode had 59 citations in it.
Check out our transcripts if you want to see all that science. And one quick note, we've heard from some of you
wondering why you have heard a little less from Wendy the past few months.
She has taken a bit of time to be with family, but you'll be hearing more from her again in the new year.
And the show will take a few weeks off for the holiday, but we will be back in your ears in January.
And we have some amazing episodes in store for you in the new year.
We'll tell you the secret to happiness, according to science.
We'll dig into relationships and tell you whether yours is toxic.
And we're going to give you the science on one of our most requested topics, running.
Should we really be doing it?
Plus, the weird science of something called sad nipple syndrome.
2026, it's going to be great.
This episode was produced by me, Meryl Horn, with help from Ekedi Fausther-Keeys, Michelle Dang,
and Rose Rimler.
We're edited by Blythe Terrell.
Our executive producer is Wendy Zukerman.
Fact-checking by Erica Akiko Howard.
Mix and sound design by Bobby Lord.
Music written by Emma Munger,
So Wylie, Peter Leonard,
Bumi Hidaka, and Bobby Lord.
Thanks to all the researchers we spoke with,
including Daniela Fernandes,
Dr. Marcin Romanczyk,
Professor Michael Henderson,
Dr. Tim Zindulka,
and Professor Vitomir Kovanovic.
Special thanks also to Sebastian Peleato,
Chris Suter,
Elise, Dylan,
Jack Weinstein, and Hunter.
Science Vs is a Spotify Studios original.
Listen for free on Spotify or wherever you get your podcasts.
And if you do listen on Spotify, follow us and tap the bell for episode notifications.
Back to you next year.
