Front Burner - Is AI making you dumb?
Episode Date: April 1, 2026
Today we’re joined by Alex Panetta, journalist and former Front Burner guest host. You may remember him as a regular on this show when he was a CBC Washington correspondent. Alex is now on sabbatical... studying artificial intelligence and has been grappling with a lot of the big questions we have been thinking about too. So today we’re going to talk about the ways he’s been using AI in his own life and interrogating how this technology can impact our ability to think critically. Will AI make us all dumber?
For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
Shopping for a car should be exciting, not exhausting.
But sometimes it can feel like a maze.
That's where Car Gurus comes in.
They have advanced search tools, unbiased deal ratings, and price history.
So you know a great deal when you see one.
It's no wonder Car Gurus is the number one rated car shopping app in Canada on the Apple App Store and Google Play Store.
Buy your next car today with CarGurus at CarGurus.ca.
Go to CarGurus.ca.
This is a CBC podcast.
Hey everyone, it's Jamie.
Today on the show, Alex Panetta is here.
You may all remember Alex as a pretty regular guest on the show when he was the CBC's Washington correspondent.
He guest hosted when I was on mat leave, too.
Well, Alex recently left the CBC and he's been studying AI,
grappling with a lot of the big questions that I've been thinking about.
But he has a luxury of space and time and proximity to,
lots of really smart people on the topic. So I've been wanting to bring him on the show for a while now,
and we're going to talk about the ways he's been using artificial intelligence in his own life.
But also, I want to interrogate how this technology can impact our ability to think critically.
Basically, will AI make us all dumber? And Alex has recently pored over a bunch of the latest research on that front.
So let's get straight to it. Alex, it's good to have you back on the show. How you doing?
Oh, it's like a homecoming. So nice to be here. How are you, Jamie?
I'm great. Thanks. It's good to have you. And good to catch up. So I want to start with how you have been incorporating AI into your life at the moment. Let's do this living library that you've created with your own notes first. What is it? Why do that?
So lately, I joke that I've become a hoarder. I've started hoarding textual data. Because increasingly anyone can make software. You can make a website. But the underlying information, the data that feeds.
these websites, numbers or text.
Like that's gold, right?
And so what I'm trying to do is I want to preserve every possible word that I hear, see,
or watch during this master's program I'm doing.
And I want to take all this text and build it into my own local LLM so that, yeah,
it's like a living library, always evolving based on my own needs.
So I'll just give you one example, right?
So let's say it's, you know, four years from now, it's the year 2030.
And I'm giving a public talk or I'm writing an article.
And I want to be able to query my own corpus of text and say, yeah, what was that analogy again from my, from my master's program, that analogy for misusing data.
It was something about water.
And, you know, bang, you know, here's where it is.
Here's where you got it from.
Here's a citation.
Yeah.
And where I can say, yeah, I want to build a slideshow.
What are the, you know, what are 10 ways AI causes bias drawn from the different papers that I've personally written?
So it's basically, it's like a notepad that can talk and draw on command.
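The living library Alex describes, store every note with its source and then query the corpus and get a citation back, can be sketched roughly like this. All note text, sources, and names below are invented, and a real version would layer embeddings and a local LLM on top; this keyword version only illustrates the store / query / cite loop:

```python
# Minimal sketch of a personal "living library": notes are stored with
# their sources, and a query returns the matching passage plus a citation.

class NoteLibrary:
    def __init__(self):
        self.notes = []  # list of (source, text) pairs

    def add(self, source, text):
        self.notes.append((source, text))

    def query(self, *keywords):
        """Return every note containing all keywords, with its citation."""
        return [
            {"citation": source, "passage": text}
            for source, text in self.notes
            if all(k.lower() in text.lower() for k in keywords)
        ]

library = NoteLibrary()
library.add("Master's lecture, week 3",
            "Misusing data is like trying to drink from a firehose of water.")
library.add("My bias paper",
            "Ten ways AI causes bias, drawn from my own papers.")

# "It was something about water" -- query the corpus and get the citation.
hits = library.query("water")
```

The point of the sketch is the shape of the workflow, not the retrieval method: the query comes back with both the passage and where it came from.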
That's really interesting.
And then also to help you keep track of what the latest in AI is, I know that you've automated this daily briefing note, right?
And so tell me about that and tell me how it's so much different from how you tried to keep up with info while you were working in Washington as a correspondent.
Like, what's the difference?
Yeah.
So trying to learn AI has been the steepest learning curve of my life.
I mean, not a lot of people know this, but the first chatbot was obviously not ChatGPT in 2022.
The first chatbot was released in 1966. And, you know, I started looking into this field in,
you know, 2025, right? So I was a little late to the party. And so this learning curve was steep.
And, you know, I was thinking recently to the way I learned, the way I briefed myself when I was
in Washington, all right? And, and the way I briefed myself in Washington was I would ask myself
every day, well, you know, what's your purpose here? And I believe that, you know, my number one job
was to look for stories that mattered to Canada.
You know, simple math is Washington is the most heavily reported place on Earth.
You know, I think there are up to 10,000 people covering it every day.
Only a handful of them care about Canada.
So I had this routine where every day I did about six Google searches.
I would search for transcripts, regulatory announcements, foreign agent registrations,
the congressional record references to Canada.
I even used to search for Canada site:.gov.
And I would find every reference to Canada on every U.S. federal or state website.
Now look, it's nine months later, and this seems completely archaic to me.
It feels like tossing a carrier pigeon out the window compared to what you can do now.
Now you can rip through like a hundred times that data.
Public notices, federal state legislative records, the Justice Department,
court announcements, the Congressional Research Service, committee, white papers.
You can ask for Canada references.
Then you can get an AI to take all those Canada references and then do searches.
It's called retrieval augmented generation.
Go online, you know, find out why it might matter to Canadians.
And then you can sort of summarize it through another AI filter in bullet points, email it to yourself as a daily tip service.
So, you know, you read it over your morning coffee, like 15 things that might affect Canada.
Now, I'm not saying that this replaces other reporting, but it's this tip service that you can automate for yourself.
And that's kind of what I've done with AI.
I've created my own tip sheet of what researchers, programmers, companies are talking about.
And then I run it through a filter that translates this hacker talk to my level as a non-technical person.
So that's, you know, I found it very useful.
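The briefing pipeline he's describing, scan documents for Canada references, run each hit through a summarizing step, and assemble a bullet-point tip sheet, might look something like this in miniature. The document list and the summarizer here are stand-ins; a real version would fetch live sources and call an LLM for the retrieval-augmented-generation step:

```python
# Sketch of a daily tip-sheet pipeline: filter a pile of public documents
# for Canada references, summarize each hit, and build a bulleted briefing.

def find_references(documents, keyword="Canada"):
    """Keep only the documents mentioning the keyword."""
    return [d for d in documents if keyword.lower() in d["text"].lower()]

def summarize(doc):
    # Stand-in for an LLM summarization call.
    return f"{doc['source']}: {doc['text'][:60]}"

def build_briefing(documents):
    hits = find_references(documents)
    return "\n".join("- " + summarize(d) for d in hits)

docs = [
    {"source": "Congressional Record",
     "text": "A tariff measure affecting Canada was introduced."},
    {"source": "Court announcement",
     "text": "An antitrust ruling with no cross-border impact."},
]
briefing = build_briefing(docs)
```

The final briefing would then be emailed on a schedule; only the filter-summarize-assemble loop is shown here.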
Mm-hmm.
I mean, obviously, if you find it very useful, I guess it works well, right?
Like, are you finding lots of mistakes in it?
Or do you feel like it's missing stuff?
Yeah, it makes mistakes all the time.
I mean, you know, there are errors in judgment.
It'll repeat the same story from two different sources.
You know, no, it's, it is absolutely imperfect.
But it's, you know, it's better than having to search 25 different websites every day and then trying to translate what programmers are saying to your language.
You get some other stuff too. That's pretty interesting. You made this app to help your daughter with math, right? You've turned 140 years of stock market data into a live dashboard about where the market compares to typical crashes. Just tell me briefly about these and like,
whether you could have done this before AI.
Absolutely not.
I can't code.
And even a year ago, with the AI tools that existed pre-Claude Code, pre-Opus 4.5, I couldn't
have done this stuff.
And we're going to talk about the cognitive damage that AI can do.
And it certainly can.
But they're also real beneficial use cases.
And I mean, like look at hyper-personalized learning.
I mean really hyper-personalized learning.
Let's say you're a teacher.
You know, you've created online exercises for your class.
You know that in like five minutes on Claude Code, you can open up your computer terminal.
Type, you know, Claude slash online exercise.
Little Johnny struggles with long division.
Make his easier.
Susie likes Roman history.
Put that in her reading lesson.
I did something similar for my daughter.
I mean, she even helped me pick the design with heroes and villains, depending on whether
she gets the answer right or wrong, you know, a gold medal if she wins the game.
And I'm constantly tweaking it like every couple of weeks.
And she'll be like, hey, dad, it got harder.
And I'm like, yeah, that's the whole point.
So it's this hyper-personalized set of games and learning tools.
And no, by the way, I want to add a caveat that we read books every day and we draw, she draws every day.
And this is a bonus.
It's an alternative to TV, like, say, I'm cooking and she'll be playing a reading and math game.
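The personalized-exercise idea reduces to generating problems from a per-student profile. A toy sketch, with invented names and a made-up difficulty-and-theme scheme, nothing here reflects how Claude Code itself builds such an app:

```python
# Toy sketch of hyper-personalized exercise generation: each student
# profile controls the difficulty and theming of generated problems.
import random

def make_division_problem(difficulty, rng):
    """Build one long-division question; higher difficulty = bigger numbers."""
    divisor = rng.randint(2, 3 + difficulty)
    quotient = rng.randint(2, 5 + 2 * difficulty)
    return f"{divisor * quotient} / {divisor} = ?", quotient

def make_worksheet(profile, n=3, seed=0):
    rng = random.Random(seed)  # seeded so the worksheet is reproducible
    problems = [make_division_problem(profile["difficulty"], rng)
                for _ in range(n)]
    return {"student": profile["name"],
            "theme": profile["theme"],
            "problems": problems}

johnny = {"name": "Johnny", "difficulty": 1, "theme": "heroes and villains"}
sheet = make_worksheet(johnny)
```

Bumping `difficulty` every couple of weeks is the "hey, dad, it got harder" tweak; the theme field is where the heroes-and-villains design would plug in.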
What would you not use AI for right now?
Do you have any hard nos?
Well, I wouldn't use it to write something that I intend to publish, right?
And you want to know a good reason why, among the many good reasons, to do your own cognitive
work? If you publish stuff that you just pulled out of an AI, you'll get busted for plagiarism
eventually. Like, this has been trained arguably illegally, by the way, this is still being litigated
in court on other people's work. And it outputs this work, this training data. So to play it safe
when writing, you know, it's best to think of AI as a Google search when you're writing and
you'll be fine. You know, with Google, you would never just, you know, click, drag, copy, paste
across a page of search results unless you want to get fired or sued. And in fact, the New York
Times just dumped a freelancer over this, for using AI.
Would you use it to like summarize a book that you were supposed to read instead of reading it?
Yeah, absolutely.
I mean, okay, a book you're supposed to read, no, right?
But a book you weren't going to read.
Or a book you want to read?
Yeah, a book you want to read.
Yeah, exactly.
I would not use it to summarize a book that I intend to read or could read, right?
But summarizing a book I would not have read to acquire additional knowledge, that's a different story.
I use AI probably a lot like I've used Google in some ways, right?
I ask it questions for research.
I ask it for some feedback.
Sometimes I ask it to organize a bunch of thoughts.
I have found it to be helpful at times and not at others.
I also find that it can get pretty sycophantic, and I really don't like that.
You know, I want to acknowledge there are people out there who will think I'm probably like a Luddite, right?
Especially listening to all the ways that you're using it.
And then there are other people out there who think I shouldn't even be using it as much as I'm using it, right?
Or frankly at all.
And one of the big reasons why I wanted to bring you on today is because, you know, I and a lot of people I've talked to have been struggling with this idea of what it does to your ability to think critically about stuff, right?
That it might make your mind lazy and hollowed out essentially.
And, you know, just talk to me a little bit more about that idea.
Do you think that that is like a rudimentary way of looking at it?
No, it's not rudimentary at all.
As a matter of fact, you've intuitively landed on the cutting edge of research on the
cognitive effects of AI.
There are many good reasons to do your own work.
And by the way, you know, that sycophantic thing that you just mentioned, there's a
perfectly good reason for that.
It's called reinforcement learning from human feedback.
So the way these things are trained is, you know, one of the final stages of the process
before they release a new model is they just get human beings to give an output a thumbs up or a thumbs down, right?
Essentially the way we do on social media, right?
And, you know, human beings tend to give the thumbs up to something that's polite, right?
So we basically were the authors of our own sycophancy in that sense because we, you know, we like people that compliment us and the AI's been trained on that preference.
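That thumbs-up/thumbs-down training loop can be illustrated with a toy reward model. Everything below is a deliberately simplified stand-in for how reinforcement learning from human feedback actually works at scale, but it shows the mechanism: raters reward politeness, so polite outputs win:

```python
# Toy illustration of the RLHF loop: human ratings train a simple
# "reward model," and the system then prefers whatever scores highest --
# including sycophantic flattery, if that is what earned the thumbs up.

def train_reward_model(rated_outputs):
    """Learn a per-word score from thumbs-up (+1) / thumbs-down (-1) ratings."""
    scores = {}
    for text, rating in rated_outputs:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + rating
    return scores

def preferred(responses, scores):
    """Pick the candidate response with the highest learned reward."""
    def reward(text):
        return sum(scores.get(w, 0) for w in text.lower().split())
    return max(responses, key=reward)

ratings = [
    ("great question you are so insightful", +1),  # humans like flattery
    ("your premise is wrong", -1),                 # and dislike pushback
]
model = train_reward_model(ratings)
best = preferred(["great insightful answer", "wrong answer"], model)
```

Because "great" and "insightful" earned thumbs up in training, the flattering candidate wins, which is the sycophancy dynamic in miniature.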
Yeah.
I just worry, like the more that I use it, you know, to your point, the
harder it's going to be for me to be able to spot something that looks and sounds good and
smart, but is not, like, actually good and smart. I think that's what I'm really struggling with here.
You're absolutely right. There's a flattening effect, right? Now, just try to think of it as this
super confident tour guide who's read a bunch of stuff about a city, but has never been to this city.
So, you know, he might be right. You might learn something about Paris. Or, you know, you might wind up
insisting that this cornfield is the Eiffel Tower, right? So be careful.
Shopping for a car should be exciting, not exhausting. But sometimes it can feel like a maze.
That's where Car Gurus comes in. They have advanced search tools, unbiased deal ratings,
and price history, so you know a great deal when you see one. It's no wonder Car Gurus is the number
one rated car shopping app in Canada on the Apple App and Google Play Store. Buy your next car
today with CarGurus at CarGurus.ca. Go to CarGurus.ca.
That's C-A-R-G-U-R-U-S.ca. On Big Lives, we take a single cultural icon.
People like Jane Fonda, George Michael, little Richard. And we pull apart the story behind the image.
And we do this by digging through the BBC's vast archives. Discovering forgotten interviews
that change exactly how we see these giants of our culture.
We're here for the messy, the brilliant, the human version of our heroes.
I'm Immanuel Jochi.
I'm Kai Wright.
And this is Big Lives.
Listen to Big Lives, wherever you get your podcasts.
Okay, let's get into some of this research because in the last couple of months,
we've actually seen some new research come out about what AI is doing to our ability to think.
I'm going to introduce three concepts here.
Cognitive debt is one.
Cognitive surrender is the other.
And epistemic debt is the third one. I think we're going to be hearing more and more about these terms as time goes on. And let's
start with cognitive debt and this kind of newest study from MIT called Your Brain on ChatGPT. And what is
cognitive debt? And what did the study find? Yeah, it's, you know, the long-term weakening of your
critical thinking skills. I mean, think of it as a credit card, right? The term debt speaks to this
dynamic, you know, buy now, pay later. And you walk into
any store you want, you know, and leave with a designer handbag.
You know, just the same way you order any summary from AI or an email draft, any graphics, quickly and easily.
It's a piece of cake.
The bill comes later.
You might find, eventually, the collection agency at your door.
So just looking specifically at this study, it asked different groups of people to write essays four times.
Some people could use AI.
Some were allowed to use Google searches, and some used nothing but their own brain.
And every time, participants were interviewed, they were graded,
and their brains were connected to EEG machines that monitored their brain activity.
And the results were like devastating for the AI users, right?
55% lower signal flow in their brains.
They struggled to quote their own essays.
People who use Google could quote their own essay.
People who use their own brain to write the essay, absolutely could quote their own essay.
I mean, but I mean, can you even say it's their own essay when they used AI to write it, right?
I guess.
Not when you use the credit card, right?
And you haven't paid for it.
And I guess that's the entire analogy, right?
Because in the fourth and final session, the tools were taken away from everyone, right?
Nobody could use Google or AI.
The AI group bombed the worst.
I mean, they failed at a rate seven times higher than other participants in quoting what they had written.
Okay.
If cognitive debt is basically your capacity for critical thinking, creativity, deep understanding, diminishing over time,
there is also this concept of cognitive surrender, right, which is related but different.
And cognitive surrender is, I think, this idea that you, like, abandon your critical thought
and you just trust the AI, right? Is that fair?
Exactly. And I wrote this piece on Substack the other day, basically talking about the different
papers that had come out and encouraged essentially parents to talk to their kids about this.
But so the second study, researchers at Wharton tested over 1,300 people on nearly 10,000
tasks. And they let them use AI. And some people did. Some people didn't. But crucially, the researchers
fiddled with the AIs. So not everyone got the same quality AI. Some had better answers. Some were worse.
And the critical finding here is that among those who used AI, those who used more accurate
LLMs had better answers. I think the accuracy went up something like 25% or 15%.
And those who had worse AIs, it diminished 20%.
I might be getting my numbers mixed up here.
But basically, you're seeing a pattern there that proves that people are surrendering their
ability to make their own decisions to a machine because had they not, you wouldn't see their
answers get so much better or so much worse based on whether they had a good LLM at their disposal.
Right.
It's an indication of like blind trust.
Exactly.
Exactly.
Yeah.
Okay.
There's a third study that I know you want to talk about and that you wrote about.
and it looks at this concept of epistemic debt,
which is when you rely too heavily on AI to do stuff like coding
and you don't understand how it does it,
which means you don't actually own it
or have ownership over the thing that you're doing.
And just explain to me a little bit more about epistemic debt
and how it's different than cognitive debt
and cognitive surrender and what the study found.
Yeah, so the first study, cognitive debt,
it implies, I mean, it's monitoring brain activity, right?
This is not monitoring brain activity.
It's monitoring knowledge.
I mean, epistemic, it comes from the Greek word for knowledge.
And specifically knowledge of computer programming, because this is what the study was about.
So there's this machine learning scientist who until recently was a researcher at Amazon,
he monitored 78 people using AI for computer programming, vibe coding, all the stuff I was
describing earlier about what I'm doing.
So he tested
this theory that adding guardrails early in the process, some early friction, forcing people to
think early on creates real learning. Because what he did is he made some participants answer questions
about their project part way through, and others could just sail through using AI to continue,
you know, coding to their hearts delight. And then at the end of the study, he took away everyone's
AI and everyone had to answer questions and fix problems in the code. There were some bugs
he asked them to resolve. And surprise, surprise, the people who faced the friction earlier,
the ones forced to answer questions, did way better than those who had used AI the entire time
without facing any intellectual challenge along the way, with a 39% failure rate in the final
task compared to 77%. So it's like double the failure rate when you're forced to stand on your
own two feet if you've only used AI without, you know, being challenged early on.
And I, you know, I think he drew some conclusions from that about the value of early
friction. Well, just talk to me more about that. Yeah. So, you know, one of the conclusions that I've
drawn from the three studies we just discussed is that just adding some cognitive checkpoints
in the process is extremely valuable. Just by adding these little guardrails early on,
these friction points, challenging people early. And I think this is highly applicable in an
education setting. I think that if I'm a college professor today, I mean, I'm building
some class discussion into the curriculum and letting students know, by the way, 5% of your grade
is going to be based on class discussions and whether you can replicate what you had in your paper
on the fly. Right. And whether or not that's worth a huge amount of your grade doesn't matter
so much as planting that seed of doubt in the student's mind, thinking, you know what?
I'm, you know, I'm going to get called on this, right? The collection agency is going to be
knocking on the door if I don't do my own reading. That's friction. Adding that friction early on could be
incredibly useful. You know, I imagine that some people's takeaways here would be that we shouldn't
be using it at all, right? And did these studies give you any pause on whether we should be using it
at all? Zero. Zero. You know, saying I shouldn't be using AI at all is like saying I should be the, you know,
starting shortstop for the New York Yankees. It's not going to happen. These tools exist. They're here.
And not only that, the authors of these studies don't even advocate for that. These
are people who are dedicating their lives to tracking the damage from AI, and they're not
suggesting you stop using AI. What they're suggesting is using it more mindfully, more carefully,
along the lines of what I've just discussed, you know, one of the most interesting things I've read
on it is, you know, something from one of the world's great cognitive psychologists and experts
on the science of learning, Paul Kirschner. You know, he's talked about how we're not just offloading
our thought to AI. We're outsourcing it. It's incredibly dangerous. The same way,
outsourcing our sense of direction to GPS has damaged people's sense of direction.
Same way using calculators damages our math skills.
The same way the printed word damaged our ability to quote Homer's Odyssey by heart, right?
But this is all of that on a grand scale, right?
AI is like the super tool that combines all these other tools.
And so he's saying it's very dangerous.
But I thought it was really telling that he says AI isn't going back into the bottle.
The real question is what we choose to outsource.
And this is what I'm trying to do more mindfully, more carefully and selectively.
Well, take me through that a little bit more, like what you're thinking about when you're using it and what kind of questions or asking yourself and what kind of guardrails you're throwing up for your own self.
Yeah, so two things, really.
One is a concept of cognitive debt.
And if I feel like I've accumulated it on something that I need to know, I try to pay it back.
And I make a note, like, you know, I have summarized this paper, this book.
It sounds really interesting.
I'm going to go back and read it to the best of my ability.
But really, but the main thing I've done is I've created this sort of framework.
It's like a matrix on two axes.
Axis number one, you know, vertical axis is, do I need to own this information?
Is this something that, you know, I might get called on or something that I might need to know,
just in my, in my everyday life or, you know, in my work?
And on the second axis is, you know, the time to actually acquire this knowledge, the old-fashioned way, right?
And if the answer to both questions is yes, just read the damn thing.
Do the work, right?
If the answer to both questions is no, then, you know, that's kind of, that's the sweet spot.
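That two-axis framework is simple enough to write down directly. The labels below are one possible phrasing of it, and the middle case is my own interpolation of the "mitigating strategy" idea:

```python
# Sketch of the two-axis decision matrix: "do I need to own this
# knowledge?" crossed with "do I have time to acquire it the
# old-fashioned way?" decides whether to do the work or delegate to AI.

def reading_decision(need_to_own, have_time):
    if need_to_own and have_time:
        return "read the damn thing yourself"
    if not need_to_own and not have_time:
        return "sweet spot: let AI summarize it"
    return "judgment call: summarize now, note the cognitive debt, repay later"

# Example: a paper you might get called on, and you have the time.
verdict = reading_decision(need_to_own=True, have_time=True)
```

The off-diagonal cases are exactly where the framework gets blurry, which is the worry Jamie raises next.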
Look, it sounds good, this framework, right?
But I just, I'm thinking when I'm listening to you, you know, does everyone have the ability to put this kind of careful thought into how they're using the technology
every day? And I'm thinking especially of younger people here, right? You are absolutely right. I'm not
even convinced it works perfectly, right? I'm just saying this is a mitigating strategy.
There's going to be damage from this stuff. It's going to do a lot of good things. And I mentioned some of
them. It's going to do a lot of bad things, including things we haven't mentioned today. Yeah.
That's one thing. The second thing is, yes, young people are most vulnerable. It's why I've mentioned
them a couple of times here. And very often what comes to mind is,
this deja vu of having worked as a news editor
because I feel that the cognitive skills that you develop,
the critical thinking skills,
you're calling BS on stuff,
checking sources,
the stuff you do at a news desk is extremely useful
at dealing with AI,
but not everyone has that skill set.
And so that's why I worry a lot
about people starting out in a profession.
It's why I worry a lot about young people especially.
I don't want to belabor this
too much, but just like coming back to this idea you've been talking about where you have to make
decisions about what you have time for and you've got to do the work on the stuff that you need
to do the work on and then you try to outsource the stuff that you might not have done anyways,
you know? And I just keep thinking to myself like, that's, I feel like that's such a blurry line,
right? Like it would be so easy to just kind of talk yourself into into the argument that like,
oh, I can just offload this, I should
offload this, because I don't have the time to do it or it's not something that I need to know
or have to explain later, right? Do you know what I mean? This is like a slippery slope.
You're absolutely right. You know, I'm not like, this is what I'm talking about a mitigating
strategy. I don't know how well it's going to work. You know, and it's the, you know, the irony of this
is I'm studying this new technology. I'm by no means a technophile, right? I was never even
that interested in technology, I'd say, until it started being able to string together sentences,
which was kind of my turf.
And so, no, I, I worry about all this stuff.
I think, but it's incumbent on all of us to try to do the best we can to find ways to
limit the damage from the bad outcomes and try to encourage the good ones.
And, you know, there's certain things, you know, AI will never be able to tell you
that you'll enjoy something that you'd never expected, right?
So you throw something into a summary, you know, maybe it'll sort of, you know, the light bulb will go off over your head and say, hey, maybe I should read this book.
But the danger is that you never read that book.
So there are other factors that matter.
Like, am I enjoying reading this, you know?
And if so, maybe that's what it's time to, you know, to ditch the AI because, you know, the joy of reading is precious.
And that's a human thing.
And I don't want to outsource it.
Are you thinking about this from a moral perspective as well?
Kind of like, we just did an episode about AI data centers and we looked at the environmental
impact here. And is that kind of rolling through your head as you're kind of navigating all
of this as well? I mean, it certainly is for, for me. Yeah. There's an environmental case to be made
against using it in an edge case, right? If you're in a coin-toss scenario, maybe just, you know,
err on the side of just taking the time to do it the
old-fashioned way. Yeah, you're absolutely right. Alex, you know, the last question that I wanted to ask you
is that history is really filled with examples of new technologies that people thought were going to rot our brains.
Do you think that we're dealing with something fundamentally different here with AI?
Look, the discouraging answer to your question is yes. And not only that, Google did have an effect on our brains.
Yeah. Not only the GPS, but also just Google searches.
And writing did have an effect on our ability to remember.
And calculators on our ability to do basic math by hand.
And AI could be more potent and more dangerous than all those technologies.
That's the depressing answer.
The more hopeful one is, you know, there's this great book I read a while ago called How We Got to Now by this science writer, Stephen Johnson.
I recommend it to everyone.
And it tracks these transformative technologies.
and there's this delicious anecdote in there
about Thomas Edison and Alexander Graham Bell
right? Alexander Graham Bell invents the telephone,
Edison invents the phonograph.
Thomas Edison, in inventing the phonograph,
his plan for the record player
was that you should be able to record your voice
put your record in the mail and send your voice to someone
literacy rates were much lower back then so this was basically like a voicemail
that you could send through the mail that's what he wanted to invent
and Alexander Graham Bell was planning on inventing
a device, a telephone, that you could hold up to a musician and broadcast the sound into
another house so someone could hear the musician somewhere else. So basically Alexander Graham Bell
thought he was inventing the record player and the inventor of the record player thought he was
inventing the telephone. And I tell that story to illustrate this idea that there is zero
chance that you and I know exactly how this story plays out. We may have hunches, but history's
full of technological surprises. Okay. That feels like a good place for us to leave it.
Thank you for stopping by.
Appreciate it.
Thanks, Jamie.
Take care.
All right, that's all for today.
I'm Jamie Poisson.
Thanks so much for listening.
Talk to you tomorrow.
For more CBC podcasts, go to cbc.ca/podcasts.
