Your Undivided Attention - Rethinking School in the Age of AI
Episode Date: April 21, 2025

AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken.

So what comes next?

In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop, two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

Guests
Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.
Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA
The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson
Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf
The OECD research which found little benefit to desktop computers in the classroom
Further reading on the Singapore study on digital exposure and attention cited by Maryanne
The Burnout Society by Byung-Chul Han
Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca
Leapfrogging Inequality by Rebecca Winthrop
The Nation's Report Card from NAEP
Further reading on the Nigeria AI Tutor Study
Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne
Further reading on Linda Stone's thesis of continuous partial attention.

RECOMMENDED YUA EPISODES
'We Have to Get It Right': Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Transcript
Hey, everyone, this is Tristan, and this is Daniel.
Welcome to Your Undivided Attention.
AI is set to disrupt every part of our lives in the near future.
Healthcare, finances, the job market, you name it.
And some of this disruption is a few years away, but there's one place where it's immediate.
And that's the classroom.
Students can plug their homework into ChatGPT, and it spits out an answer within seconds.
It can write their essays for them, give them personalized cliff notes,
even answer complex math and science questions.
And there's no way for teachers to tell.
It's like the rug has been pulled out from the entire system.
Yeah, even if these students don't want to use these systems to cheat,
they often feel like they have to,
or else they're going to fall behind their peers who are.
When your grade feels like it's the only thing that matters,
then all of the incentives push kids towards using and abusing these tools.
And, of course, teachers are struggling to figure out how to grade assignments.
The old way of running education seems suddenly and pretty fundamentally broken.
So in a way, AI is forcing us to rethink what education is for and what the education system does.
And that's critical because education is obviously the foundation of our society.
If we do it right, it will set up our society to thrive, but if we do it wrong, the consequences can be disastrous.
So we're at an inflection point where we can actually re-examine some fundamental questions about what is the purpose of education.
What is it actually for?
So to begin to answer that question, we've invited two guests on the show who've thought deeply about the structure and purpose of teaching for a very long time.
Maryanne Wolf is a cognitive neuroscientist and expert on the development of the learning brain.
She's the author of Proust and the Squid and Reader, Come Home,
which explore how reading, writing, and thinking affect brain development.
And Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution,
where she's on the Brookings Global Task Force on AI and Education.
Her book, The Disengaged Teen, came out in January of this year.
So Maryanne and Rebecca, welcome to Your Undivided Attention.
And you have our very undivided attention, so please begin.
Great. So we wanted to have you both on the show today because you're both lifelong
educators who've thought a lot about the role of technology in education. And you've written
books on the damage that social media has done and that smartphones have done to kids' attention
spans and critical thinking skills. But you're also optimistic about the kinds of positive
role that technology could play in the classroom. So I was reading your work in preparation
for this interview. And both of you in different ways talked about a kind of Faustian
bargain with technology in the classroom, where you give up something of deep moral importance
in exchange for something of material importance. And I was really struck by that. So we're
wondering, like, what did you mean by that? And how have we seen this appear in education in the
past? Maybe, Rebecca, let's start with you.
A Faustian bargain. Okay, Daniel, you don't start with the lightweight questions.
Never.
So I think the thing I am most haunted by, as I watch AI, generative AI, develop and think about how good it is getting: it's starting to reason, it can possibly deceive, and these models learn how to do algorithmic thinking, which is basically logical thinking. That is a very foundational skill, one that helps you think better and give better answers in other parts of life, and that is exactly what we want kids to do.
That's the reason we want kids to write essays. It is really the process of learning to think in a logical, clear way, a thesis statement with evidence underneath, and organizing it, and making an argument, that transfers to life and being able to think across domains and subjects. And so I just keep thinking of little rooms around the world with incredibly smart developers training AI to think that way, and then ChatGPT, which students use a ton, students might actually be the majority of ChatGPT users, and they're actually using it to de-skill themselves by having the essay written for them. And so, like, what are we doing here? So that's the thing that keeps me up at night.
Maryanne?
So let me begin with what Rebecca said in the beginning. And that's
about thinking. How do we develop thinking? What child learns best under what conditions?
So in my book Reader, Come Home: The Reading Brain in a Digital World, I suggested, not
unlike you, Rebecca, that there are several sets of skills that we want our children to learn,
but that for the reading brain to develop, I think the research points to it being developed best
in terms of deep reading as the goal on print.
And so I suggested that between zero and 10 or 12,
all along we're working on not just decoding,
but the ability to develop these,
what I call deep reading skills that are analogical, inferential, empathic,
and most importantly, the sum of which is critical thinking,
that that happens best in print.
When we have this beautiful reading brain, and I'll begin with reading, but it could be the same thing with math, I'm just going to use reading because that's my easy point. When we build this circuit, the circuit has a very basic function. The whole point of learning is to use the effort, and this is what I really want to say that is so worrisome to me about AI: it's the effort that builds the circuit's elaboration, that makes analogical skills, that makes inferential skills.
One of the most beautiful parts of learning is to pass over from the perspective of that
egocentric learner into the perspective of others. This is an affective cognitive process
that's going on. But that all adds to the ability of that learner to become critically
analytic about what they're reading. Now, the problem with AI for me is what we call
cognitive offloading, that in the interest of efficiency, we can do all this faster and better
if we're using these technological devices that augment and blah, blah, blah. The reality is
what we need as learners are the efforts. Even Emerson said, when we are braced by labor, that's where thinking begins.
And all of that is in the interest of the imaginative insights of the individual.
All of that should never be short-circuited in the interest of efficiency or the best grade.
If we could somehow model the importance of effort and labor, that's what builds the circuit of the deep reader.
But we have to dig in here because in some way this feels a lot like
teachers talking about the advent of calculators in mathematics.
Like kids won't remember their multiplication tables.
And to a certain extent, I have to say, like, I'm all for that.
I'm all for not having spent three years of my life, you know, two hours every day of the week,
trying to memorize multiplication tables.
That just seems like a win to me.
So how do we tell the difference between things that are fine being dependent on our technology
versus the things you really don't want to be?
I would say we have, over history as a species, evolved through cognitive offloading. Like, none of us, I think, could be dropped
in the middle of the woods and know which berries are poisonous and which berries are not,
which is something we would have known many, many years ago. The things that I worry about
are sort of the core of what it means to be a learner. What Maryanne is talking about in terms
of deep reading is a skill that is not just about reading. It's about thinking. It's about
understanding yourself versus the rest of the world. It's about coming up with new ideas.
Those are things that we should not cognitively offload. And I'm curious, like, we're going to
spend a lot of time in this episode on artificial intelligence. But I'm sort of curious,
starting in the past. So over the last 20 years, you've seen this happen in a lot of different
classrooms in different nations, in different municipalities, trying to integrate technology.
And there's been this similar kind of Faustian bargain, right? This gain of efficiency,
but this real loss of social fabric or ability to teach kids what you're trying to. Can you talk
a little bit about what just happened and then we'll move towards what's about to happen?
Sure. I mean, over the last couple decades, we've seen several waves of technology come into schools.
So one of the big ones was hardware.
Let's get devices in schools, not in kids' hands.
This is like desktop computers, computer labs.
Remember the days of having computer labs?
And at the end of the day, and the OECD has done research on this across many countries, about 70 countries,
the education systems that really push getting computer labs in, desktop computers, technology in the classroom,
their kids didn't learn more than the systems that didn't push it. And it's not that technology can't help with learning. It's
that you have to integrate it into the teaching and learning process. So we've seen this a little bit
time and time again. Certainly can talk about cell phones. I don't know if you want to go there.
That's a whole different topic and issue, but it permeates schools. But I think the thing that we've
learned is you have to be very intentional about how you introduce technology into education
it will not magically make things better.
So what I think I heard from that is that underneath this big skill that we call reading
are a bunch of cognitive developments, right?
You have to learn what an analogy means.
You have to learn a bunch of cognition.
And I hear you saying that that is just way better done on print than on devices.
Have we seen that as schools have rotated towards devices?
Have we seen those skills drop?
I often feel that I'm in the Wild West frontier of knowledge in this area.
I have people in Norway asking me to testify about what is happening with the digitizing of their textbooks and their libraries.
And they're worried that, like Sweden, they will do this massive digitization only to find grades slipping and academic performance declining.
So Norway now and Sweden have decided not to.
Meanwhile, Korea is beginning to digitize their third grade textbooks.
And I'm literally on their equivalents of NPR and PBS, like, wait, you know, let us get more information.
But for heaven's sakes, do not do mass digitization.
So I think there's enough, if you will, data about this.
I think all of you know the Singapore study that was released last year, in which you have data from zero to eight from Singapore, McGill, and Harvard showing that the more digital exposure between zero and
eight, the less the attentional mechanisms are working in the same way and the difference in
academic performance. So we have these different databases at different developmental epochs,
some at infancy, some zero to eight, some young adults. But yes, I think the data to this moment in time
suggests that reading is best with print, not that it can't be complemented, but that it is best learned during
these very pivotal developmental times.
Daniel, I want to come in because one of the things that Maryanne didn't say explicitly,
but I certainly have taken from her work, because I think it's actually very relevant
beyond reading, is that when you're reading on digital devices,
you're more often skimming.
Like, Maryanne, you've found that there's like an F pattern.
You read the top and you go down a little bit in the second line or a Z pattern.
Like we're really not, we're doing what is called surface learning.
Yeah.
Which you're basically going quickly and sort of getting the top headline and you're missing a lot of nuance
and actually stuff that's interesting.
You really are trying to operate a little bit like a machine, input in, output out:
let me get it on the test, let me get it on the worksheet, let me try to get the right answer.
And that's actually not exciting for kids.
And so one of the things that we found in the book research we did with my colleague Jenny Anderson
was that kids are super disengaged from school.
They are not motivated.
They are not engaged.
They are not enjoying learning.
And we found these different modes that kids show up in, one of them being passenger mode,
which is basically doing surface learning and coasting.
And that is not just reading.
It's across the board.
So I think there's something deeper.
Absolutely.
I couldn't be more thankful to you for making one of my major points,
which is that reading is so much more than decoding the surface.
And that first circuit, that's what it does.
Right.
And what we are actually teaching our children is to elaborate that circuit.
And that is engaging because it engages not only what a word looks like,
but what it is connected to.
What are the thoughts that it evokes, that it elicits?
So we're really teaching how to think.
Byung-Chul Han, the Korean philosopher,
says that we are so accelerated
that we're moving from thing to thing,
stimulus to stimulus.
And reading is like a canary of the mind for that,
because it shows you when you skim,
you just don't have time to allocate attention.
We have to have time for beauty. We have to have time for your own connections to this content.
Well, I love how live this conversation is. And this is why I'm so thrilled to have you both in the same room because, Maryanne,
you bring this deep neuroscience of learning. And Rebecca, you have this big macro, how are different
countries thinking about this and what are the systems doing? So it's just a thrill to have you
both in the same conversation together. So one of the things I hear you talking about is we've replaced
not just the way that kids learn, but we've replaced some of the fundamental motivations for learning.
And I'm curious if we can dig into that a little bit.
Like, how has the motivation for students, what they're getting out of every moment of learning, how has that changed?
And how do we want them to be motivated?
I can definitely talk to that.
I'll kick it off, Maryanne, and pass it to you.
Great.
So, Jenny and I, my co-author and I just did three years of research.
And we were looking at the question of why kids don't really like school.
And it's not actually that.
that they don't like school. They like their friends. They like going to school to see their friends. We saw that in COVID. They don't like what they do in school. That's what they don't like. We found that kids show up to their learning. They're sort of motivated and engaged in kind of four ways. They're in passenger mode, which I just talked about. They're coasting, doing the bare minimum. This is roughly the experience of half of the middle school and high school kids in the U.S. Or they're an achiever mode where they're really excited to do perfect,
on every assignment and get a gold star
and everything that's put in front of them.
We have thought for a very long time
in education and in society
that achiever mode was the top
of the engagement mountain. Getting the right answer
was the top of the engagement mountain.
We have learned from our research.
It is not.
And actually, kids who get stuck in achiever mode
are very fragile. They're risk-averse.
They're less able to adapt.
And they do not have the resilience skills
that if they have a bad day and get a bad grade,
they can just pick themselves up.
They're really focused on the outcome, not the process.
And then you've got kids in resister mode.
These are, quote, unquote, the problem children.
They're avoiding and disrupting their learning.
But they actually have a fair bit of gumption, and agency,
because they are saying, often inappropriately,
by playing class clown, not turning in their homework, skipping school.
These are all kids' ways of telling adults,
hey, this is not working for me.
And those kids, if you shift the learning environment, can actually flip to explorer mode,
which is the top of the engagement mountain.
And we know from two decades of research,
explore mode is where they get to explore their curiosity
and they are driven and they really do become unstoppable.
And when they get that opportunity,
they actually do better academically
and they're being prepared to swim in the AI world
that they are entering because they will be able to navigate
all the shifts.
and changes that come their way.
But less than 4% of kids we found in middle school and high school
get a chance to regularly be in explorer mode in school.
And so that, to me, is what we need to hold in our mind
and figure out how technology and AI can help kids get into Explorer mode,
not reduce them to passenger mode.
I was going to say, does technology put us into explorer mode or achiever mode, or any of the other modes?
It totally depends on how technology is used. There are times when it can be really good. So, for example, Arizona State
University a couple of years ago piloted a new approach with virtual reality in their Biology 101 class.
So this is an introductory biology class. Everybody had to take it. Not a lot of people loved it.
Kids didn't do really well, except when they started introducing, for 10 minutes each class,
sort of a lecture on a concept, photosynthesis or, you know, endangered species, I don't know, make it up.
And they go into this virtual reality, beautifully created world that they have to explore and go find the example of what they're learning in the textbook in this world.
That ability to actually explore in sort of a semi-embodied state, even though it's not, it's in virtual reality, you know, kids did so much better on Biology 101 that that's a methodology that they're using.
Yeah, the great promise of education technology
was always that you could let kids follow their interest.
You could let them explore the topics they were interested in.
Instead of reading one textbook, it would react to you,
and the more you show interest in something,
the more you could pull from it.
But that doesn't feel like what I've seen,
or certainly I haven't seen results from that in education.
Why didn't we live up to that great promise
of giving kids tools that would let them explore more?
I think that we gave kids tools that were focused on adaptive and personalized learning, which is not exploration.
That is, I want you to learn fractions in third grade, and I will give you an adapted sequence where, you know, if you get the wrong answer, we'll give you a couple more questions till you master it, and then you could move on to the next, very effective in mastering third grade fractions, not hugely exploratory.
And the other thing I think is we made the mistake on the other end of just letting into our classrooms a wash of overwhelming technology from cell phones to the internet to, you know, Chromebooks.
I can't tell you how many times my seventh grader came home last year.
And I was like, how was school? What'd you do?
What did you do in math?
And he was like, I played Minecraft unblocked.
I played Zelda on the wall unblocked.
You can get any video game unblocked on any Chromebook.
Like kids will find a way around it. So I think we kind of never really nailed that piece.
Well, so one question is, why are we just
throwing technology at children as if it's going to help? I mean, I'll just say when I was at Google,
I knew product managers who were building, you know, the Google classroom sort of suite. They were
not experts in child developmental psychology. These were just people who were trying to build
products and get marketing, you know, get the thing adopted by as many schools as possible. Not that they
didn't care about kids. They did. It's just that they weren't actually fundamentally developmentally
attuned. That was not their core education as they were making design choices that would influence
the developmental brain that you, Maryanne, speak about so eloquently. So should we talk for a moment just about the incentives? You know, schools don't want to look like they're behind and not adopting the latest technology. As the other schools get the Chromebooks, we should adopt them too. The other schools are getting the iPads, we should get them too. Oh, well, kids are going to grow up in this phone world, so we've got to make sure that kids are using phones in the classroom. But all of this is guided by, you know, very naive thinking. Can we talk about some of these sort of social pressures and what's driving this massive, naive, and sort of almost counterproductive adoption
of technology? I think there's multiple incentives. And Center for Humane Technology talks
about incentives a lot. So you guys know what they are broadly in the tech space. And I think
you do a very good job of uncovering them. So a lot of tech companies are trying to sell into
schools. And the incentive is to create products that will be easily adopted at scale and make
money. We analyze ed tech based on does it substitute for an analog function? Does it augment
what we're doing in real life? Does it modify or redefine? And so most of what tech does
is substitute or augment, because the incentive is to sell into schools in a way that they can plug and play very easily, so you can scale and make money. I did a large study several years ago
for a book called Leapfrogging Inequality, and we looked at 3,000 education innovations across 160 countries. 1,500 of them were ed tech, and 80% of those innovations were just substitution and augmentation. That would mean if you're doing a paper multiplication worksheet and you
digitize it on a tablet, it could help, it could augment because it could automatically grade and
save the teacher time, but you are not profoundly changing what education is like.
That said, your point, Tristan, about the fact that a lot of ed tech is developed not by
educationalists is the perennial discussion in every single education conference I go to,
which is why can't we get educators at the table? Because we do know that when educators are at the
table, better products are made. I think about Clever, which is in a lot of classrooms, which if you
guys don't know or your listeners don't know, is a very simple, single sign-on portal for teachers and
parents and students. Other areas that I think do really well in ed tech are incredible work
on supporting neurodivergent kids with technology. There's incredible dyslexia software.
My son uses from text to speech, speech to text. And it's developed by educators. So when it is
developed by educators and solves a problem in education, it can be quite effective.
Well, so I want to build on that, though. It's easy to talk about the supply side
and say, you know, teachers aren't at the table enough
or the people, to Tristan's point,
the people who know how to build tech
aren't the people who know about human development.
But there's another side of it too,
which is, you know, since the 90s,
we've gone from a period of real information scarcity
to this flood of information.
And the other side of it that I've seen
is educators, teachers, parents even say,
you know, we need to educate kids
for the world that we have now.
And that means making sure
that we educate them for the flood of information.
What does it mean? Forget about what exists now and how broken it is. What does it mean to actually educate the next generation for this absolute tidal wave of information, this confusing, often contradictory, often overwhelming information we get through the internet?
Well, I think we have, for good reasons, good intentions, believed that if we had test scores, we could see how well education was doing. And when it isn't doing well, there's this, if you will, almost reflex: okay, let's do something more. And the something more inevitably in the last decade has been
technological fixes. The reality is that we put so much on the backs of teachers who have to
use all kinds of flexibilities to move from one thing to the next. And whatever it is that year, they're expected to do it, so that third grade, fourth grade, and
eighth grade scores show how well they're doing, when in fact, we're all doing poorly.
The NAEP scores of the country, which were released just, you know, last month, show
abysmal results that if my goal is deep reading for the world, only one-third of our eighth
graders in the United States are even close to that. And of that one-third, only one-half of the
kids of color are in that third. Even worse, 40% of our eighth graders are not at a basic level
of reading. Now, technology is not going to fix that.
I mean, I think the top note you're talking about is kind of exasperation. All this technology has moved so quickly, and educators and parents and teachers are all struggling to integrate it. And Maryanne, to your point, integrating it has
meant this very linear approach to education. And so that's what just happened to us. And we're still
recovering from that, but now AI comes into the picture. How will AI change education? How do you
not take away the essay when you have an essay writing machine? What does that mean? Because you don't
want to take away the essay, but the essay is now broken. I think, Daniel, what you're bringing up is
the purpose of education. And the purpose of education in schools is profoundly shaken to its core
because we're moving from an age of achievement
where the purpose of school has been primarily to rank and sort kids
to what I would call an age of agency
and lean into a lot of the other purposes of school
that have evolved over the years.
There are many purposes that are crucially important.
One, custodial care.
What would we do?
We knew. We actually found this out in the experiment of COVID: schools are the number one way in which governments provide child care.
Number two, socialization, which it doesn't have to just happen in school, but that is a big way.
And if you live in a democracy, it's about citizenship development.
Everybody having a similar school experience, because if we don't have a shared understanding
and experience, we will devolve and lose our democratic way of life.
Let me add a fourth.
I think what we have right now is this almost bifurcation or trifurcation of information, knowledge, and wisdom.
And when we are only after information which AI is so good at and its translation into knowledge, which we hope it will complement us, we nevertheless must never forget what does that all mean for humanity, the future of the species, and that's the wisdom part.
And so what I hope that the school can give is this sense of translation that we are taking information.
We are transmitting it to you so that you will have knowledge from which you will help propel us wisely.
So I completely agree.
And I just want to acknowledge that the stakes of this are really high, right?
We're talking about democracy.
We're talking about wisdom.
We're talking about not losing deep human skills.
And yet I feel less and less sure that I know what wisdom means in an AI-empowered world, right? And so we are really sitting at this precipice of a really
deep change to what it means to grow up inside of a world imbued with artificial intelligence.
And to my ear, some of these solutions sound great, but I'm wondering, they're very much
focused on sort of the continuation of an old tradition. So, for example, you talked about,
you know, school used to be a place where you memorized things because if it wasn't with you
in your brain, it was really hard to look it up. And then,
schools became, I forgot how you said this, but it's where you find it, not what it is.
We increasingly had to live with the internet in knowing where I can look for things or knowing where the knowledge sits.
And with AI, I think there's a new skill that's coming online, which is almost how to manage.
Like, it used to be management was something that you learned late in your career, because really, for the first five to ten years of your job, you were just an individual contributor.
And now I'm actually seeing management skills in kids the age of 10 because they need to manage their AI in doing certain tasks.
And I'm curious as people look at education as a set of metacognitive tasks,
as a set of learning how to do certain things.
What tasks that we haven't been teaching kids that we suddenly need to teach them in a world of AI?
I think there's an opportunity, Daniel.
I've spent my life looking at education innovations and how to transform education systems.
And so in some way, I'm very excited about AI because
it will, it has to move us from the age of achievement into the sort of age of agency,
as I call it, where you could have schools break open that sorting and ranking and really
bring much closer together knowledge acquisition with knowledge application. Many schools are
trying this. There's great models around, which is, okay, we, what are we going to do for this
quarter, we're going to try to solve the problem of trash in the streets. This is an example
from a conference I was just at with the former Undersecretary of Rio de Janeiro who did this in
the favelas, radical agency in their schools. And they actually learned a lot better on the
content. Because when you're trying to solve a problem that's meaningful and relevant, it includes everything: it included math, it included geography and social sciences. And
they had to survey and they had to do interviews and they had to look at the history of
trash and they remembered that so much more.
Right, than a skills-based curriculum, right?
Rather than input in and output out, because we learn things when we make meaning of them
and they're relevant to our lives.
So can AI help us do that more?
Maybe if it's used in that way and in that sort of explore mode, that would be a great
example of explore mode, yes.
But I also think just because AI is here,
doesn't mean we have to force it on our kids.
Like, we should feel free to say no.
We don't want to use this for our young kids at this moment and time.
So one of the big promises of AI is obviously tutors,
that we're going to have individualized tutors.
And that's what we're being sold.
There's a story that AI is going to enable that for the masses.
We're about to enter this age of abundance,
the best education we've ever had.
And obviously, there's conflicting views on this.
But recently, the World Bank ran a program in Nigeria
where they used an AI tutor to help students learn English.
And the early results were extraordinary.
Just after six weeks, the students achieved the equivalent of something like two years
of instruction.
And students who had those tutors perform much better in their end-of-year exams.
And the longer that they worked with those tutors, the better they did.
And I'm just curious to sort of dissect this example
because it sits on the optimistic side.
Yeah.
So part of it has to do with the context. I have worked across the globe, and in many countries, there are 100, 150 kids
in one class in a first grade, second grade, third grade class. And teachers teach their heart out,
but there is no way they are able to reach every last kid. So the Nigeria example, six weeks
after school, it was twice a week with an AI tutor. It was about a point three standard deviation
improvement, which is quite good in education. We also saw a 0.3 standard deviation improvement
during COVID in Botswana over 12 weeks, not through AI. It was through teachers sending text messages on flip phones to parents, parents opening them up, kids doing the math problems, and then
teachers would call and say, can you put the kid on speakerphone, let's talk through this math
problem. If the kid got it, they would send a harder problem the next week. And kids also improved
point three standard deviations. So what you're seeing is that there's very little instruction
going on. So it is not a replacement for teachers in education, in all these contexts, and there are many more examples like that. And these were all teaching those early precursors of literacy, which is what they were doing in Nigeria. But it's what happens now
and what happens next can have all kinds of differences. I work in Johannesburg sometimes, where one school, Bella Vista, teaches the schools in the settlement with 100 kids
in a classroom. These are wonderful apps. They're in 45 languages now. There is no question that
those technical aids are essential. There is never a binary here. It's what works best in what
context for which children. I think it's never been more important.
100%. And we need to watch out, right? That we're not pulling up the ladder of education
behind us. These tools can supercharge adult learners and people who have those cognitive
skills. And the worry is that actually for early learners, you're not just not helping,
you're actively hurting their chances of having the abilities that you want later. Yeah, yeah.
Basically, if you give ChatGPT to someone who does not yet have the critical thinking skills,
they're doing more cognitive offloading versus if you have someone who, let's say, goes all the way
through high school, has a full developmental paradigm.
Then they use ChatGPT.
Then they're getting the uplift and they're getting more enrichment.
And it's not totally lasting.
There's still some offloading, but it's like there's an enriching.
There's a non-diminishment process.
And I feel like when we talk about what an ideal world looks like,
I feel like landing that distinction is very important.
Absolutely.
We really have to look at adult users versus children users very differently.
Is AI helping adults in the education system do their work better and more efficiently? Is it making bus schedules more efficient? Yes, it's amazing. Is it doing
calendaring, which is always a pain in the butt for schools, more efficiently? Yes, it is. There are incredible examples from around the world of walled-garden GPTs being given to teachers who are just experimenting, coming up with really interesting things that make their lives
better, like being able to assess kids who are learning English for the first time much more
quickly and saving a lot of time. And that is because those adults in the system have critical
thinking skills and have their hand on the steering wheel. They have agency over the AI.
Rebecca, you're running a pre-mortem on AI in education. Tell us about that, because I imagine it has
to do with these skipped skills or these places where we're just going to do it wrong. What does it
look like to run a pre-mortem on AI?
So a pre-mortem, there's a science behind a pre-mortem. It is the opposite of a post-mortem, where you move the debrief, the autopsy, forward. We should have done this when social media rolled out a decade ago.
And we learned our lesson. And so our task force is collectively with many, many people across
the globe asking two questions. One, what are the possible risks of AI to children's learning and education? Get those all out on paper. And really imagine. Use your big imagination.
And then question two, what can we do today to mitigate those risks and harness the really
exciting possibilities of AI to help kids learn and grow? And so that's what we're doing.
And do you have any intermediate findings or we just stay tuned?
Some of the things we are seeing is that people are feeling like AI is inevitable and that they can't say, no, we don't want to use it in our classroom at this point or that point, which I think is worrisome because it isn't.
We are agentic people.
We can decide what we want to do with technology.
So that is one thing that has come through loud and clear that concerns me.
I think our parents need to know that, like, last year, a JAMA paper came out showing that the more digital exposure, the less language development is happening.
And what is happening between zero and five is this massive distraction.
Who was it?
Linda Hunter or someone called this 20 years ago, she called it the continuous partial attention of our children.
Linda Stone, it's continuous partial attention, I think, right?
Linda Stone.
Yeah, it was 1998.
I mean, it was a long time ago.
But it was right.
And the reality is that our kids are constantly being,
bombarded by the iPads, which I love on a plane and nowhere else. But this is not enhancing
their ability to have focused attention, to have a better memory for things that will be consolidated
and used later in all these other processes. So zero to five is also part of this. We've got to
really think about what we can do with our parents, even before school. Just to pick up on
Maryanne's point about interactivity and socialization of young people and what technology does.
You know, one of the things that people are quite worried about is young kids are being socialized
to interact in society with other people. And when they're interacting with an AI, you can
interrupt it, you can be rude to it, you can call it names. And that is a form of socialization.
And kids have a hard time understanding, do I do that with a chatbot, but not my brother,
or not my friend, and we saw the damage that social media did to kids' interaction.
And I think we risk, if we don't do it right, really scaling that more broadly.
But, okay, I will ask you a question.
Where are both of you headed with this topic?
What will you do next with it?
With just this conversation?
Well, I'll tell you one thing, which is we here at CHT are very worried about the last thing that Rebecca brought up,
which is AI changing the very nature of what it means to relate to each other,
not just for children but for adults as well, as AI begins to insert itself
into our relationships, into our institutions.
How do we as human beings deal with that?
And how do you design the technology such that it's not inadvertently creating huge harms
to commons that we haven't even named yet?
In the 2010s, it was about the attentional commons.
We destroyed all of our attention.
And as side effects, we began to not only polarize but destabilize, you know, democracies and so much more.
Well, what are the side effects of this AI wave?
And can we learn what they are and can we educate people on what they are before it's 10 years later?
And we're just learning what we did to ourselves.
So that's what keeps Tristan and I up at night.
And so having people on like you who can speak to this and trying to make this conversation progress at the speed of change.
And everybody has kids, no matter which political party they're part of, and they see the impacts of technology on their kids.
Yeah, 100%.
With children, we really recognize that we have a duty of care.
We need to protect our children.
We need to design for our children.
And that's why we often focus on it at CHT.
I love this line, the duty of care. Because the Pope has been so ill of late, I wanted to quote him at some point about children.
And he said that children are our world's best diagnostic
for the health, not only of our society, but of our whole world.
And I think that's, you know, that duty of care is part of that.
Well, that's as good a place as any to end it.
I'm so thrilled to have had both of you on as deep experts here. This has been an amazing conversation.
And thank you for coming on Your Undivided Attention.
Thank you so much.
Thank you for having us.
This was fun.
It was.
Your Undivided Attention is produced by the Center for Humane Technology,
a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudakin,
and original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this show possible.
You can find transcripts from our interviews,
bonus content on our substack,
and much more at HumaneTech.com.
And if you liked this episode,
we'd be truly grateful if you could rate us on Apple Podcasts or Spotify.
It really does make a difference
in helping others join this movement
for a more humane future.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.