The Decibel - Machines Like Us: AI upending higher education

Episode Date: September 30, 2025

Today marks the National Day for Truth and Reconciliation. In observance of this day, The Globe and Mail is not publishing a new Decibel episode. We hope to encourage learning, reflection, and meaningful conversations about the history and ongoing impacts of colonialism in Canada.

Just two months after ChatGPT was launched in 2022, a survey found 90 per cent of college students were already using it. But students are no longer just using artificial intelligence to write essays – they're using it to generate ideas, conduct research, and summarize readings. In other words: they're using it to think for them. What does this mean for higher education? And what are the real costs of AI for critical thinking?

Machines Like Us host Taylor Owen welcomes two guests: Conor Grennan, chief AI architect at NYU's Stern School of Business, and Niall Ferguson, senior fellow at Stanford and Harvard and the co-founder of the University of Austin.

Subscribe to The Globe and Mail's 'Machines Like Us' podcast on Apple Podcasts or Spotify.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 Hey, it's Cheryl. Today we're bringing you a new episode of Machines Like Us, The Globe's podcast on technology and artificial intelligence. Hosted by Taylor Owen, it's a show about how AI is propelling and shaping our lives, for better or worse. To kick the new season off, and right in line with the new school year, this episode focuses on AI in the classroom and how it has infiltrated university and college classrooms. Is this bad for higher education, or could it actually be a good thing? You can subscribe to Machines Like Us wherever you listen to podcasts.
Starting point is 00:00:38 New episodes come out every other Tuesday. Hope you enjoy the episode. I'm Taylor Owen. From the Globe and Mail, this is Machines Like Us. I spent my entire adult life on university campuses. And until recently, I thought I had a pretty good sense of what they were about. Universities are places we go to develop our minds, to learn how to think. But AI seems to be changing that.
Starting point is 00:01:25 Just a few months after the first version of ChatGPT was released, a survey found that 90% of college students were already using it. And honestly, I'd be shocked if that number isn't closer to 100 by now. Students aren't just using it to write their essays. They're using it to generate ideas, to conduct research, and to do their readings. In other words, they're using it to think for them. But when this comes up in faculty meetings, I get a sense of paralysis. Some worry that if we ban tools like ChatGPT, we might
Starting point is 00:02:00 leave students unprepared for a world where everyone is already using them. But others think that if we go all in on AI, we might end up with a generation that can produce work, but not necessarily original thought. I'm honestly unsure which camp I fall into. So I wanted to talk to two people with really different perspectives. Conor Grennan is the chief AI architect at NYU's Stern School of Business. He's helping students and educators embrace AI, and has ideas for how it can actually enhance education. Niall Ferguson is a historian. He's a senior fellow at Stanford and Harvard,
Starting point is 00:02:38 and he's the co-founder of the University of Austin. Lately, he's been making the opposite argument, that if universities are to survive, they have to return to their origins, cloistered spaces where students have to learn without the aid of technology at all. Whichever path we take, the consequences will be profound.
Starting point is 00:03:00 because this isn't just about how we teach and how we learn. It's about the future of how we think. Niall, Conor, welcome to the show. Thank you. Good to be with you. Niall, I want to start with you. You wrote a provocative essay in the London Times recently where you expressed a great deal of,
Starting point is 00:03:26 I would say, alarm about the emergence of AI in higher education. First off, what are you seeing that makes you so worried? Why this clarion call, and this moment of alarm, from your perspective? Well, I think I am seeing what many people are seeing. And so I based that article on what I had heard and read from multiple institutions. To put it very simply, within a very short time of the release of ChatGPT, roughly 90% of American undergraduates were using it. And the way they were using it was essentially to cut corners. And so I began to ask around and form the impression, which was then corroborated in a rather good article in New York magazine, that immense amounts of undergraduate assignments in universities all over North America
Starting point is 00:04:23 are being completed by large language models rather than by students. And I think it's fairly clear that that's bad, because if you're delegating reading, thinking, and writing to ChatGPT, you're not learning to do those things. And then, of course, there was a nice paper, somewhat controversial, that came out from MIT, Your Brain on ChatGPT, which certainly was interpreted by some in the media as showing that this kind of behaviour is really bad for young brains. So that was the kind of starting point for the essay, and I'll cut a long story short. My argument is not that we should sort of burn the machines, but that we have to create a period of time in the student day,
Starting point is 00:05:19 and I would say it should be about six or seven hours long, during which they don't have access to AI. I'm going to assume that they do AI all the rest of the time, but for six or seven hours, they shouldn't have access to it. They'll have to read and think and write for themselves. Last point: we'll have to abandon the now decades-long practice of allowing students to do assignments in their own time with their laptops, far from the supervision of professors.
Starting point is 00:05:53 We've got to abandon all that and go back to written and oral exams under invigilation, all of which probably sounds to Conor terribly reactionary, but I actually think we need to do something along those lines to avoid a generation doing even more harm to their brains than the previous generation did with smartphones and social media. Look, I want to get into every element of what you just outlined there,
Starting point is 00:06:20 and we will. But, Conor, first, I mean, what are you seeing from your perspective, also at another large American university? And are you as worried as this? Well, I mean, yeah, I'd love to sort of, like, you know, turn this into a flaming talk show debate kind of thing where we're at each other's throats. I don't really disagree with anything Niall just said, to be totally honest.
Starting point is 00:06:42 That MIT study was a little bit maddening, but also right. I mean, it's, in a way, the most obvious study in the world, which is, if people are using ChatGPT instead, it's sort of like saying, you know, people can have tutors, and if their tutors are writing their papers instead of them, they're not going to learn. And it's extremely obvious. I like that it started the conversation, but I got mad at that article online because I thought it was a little sensational. But I'm sensational, too.
Starting point is 00:07:05 So I'm very forgiving of them. But, you know, the truth is, I really agree with Niall. So let me come at it from the other standpoint. And I think that he and I are probably going to end up in the same general area, though. Hopefully we'll find some areas of hard disagreement where we can scream at each other. But here's the thing, right? I mean, so I'm on the MBA side, so the graduate student side.
Starting point is 00:07:29 So, on one hand, if you're paying this much for a business school degree, and we all sort of know how much these things cost, and you're using ChatGPT to get over on it, you're out of your mind. So there's that. However, I also have teenagers, right? So my son is 16. My daughter is 14. And, you know, I do a lot of this with my son. Finn and I went out to Nepal and taught AI to schools and everything else, because I think there's a fundamental aspect of this which can really help people learn in a way
Starting point is 00:07:58 that they were never able to learn before. It allows people who never had access to tutors or bespoke learning opportunities. I mean, the reason why teachers have such a hard time is not because of the tool. It's because of how brains work, right? You can't be a teacher in front of 25 students and get into the heads of all 25 and know their exact learning ability, when they can all learn in radically different ways. Of course the teacher can't do that. The teacher has to teach in the same way the teacher was always taught, which is, you know, using their frameworks and the lowest common denominator
Starting point is 00:08:27 and what they've found works the best for the broadest range. So from that standpoint, I think it would be foolish to throw the baby out with the bathwater, so to speak. I think that AI is incredible with this, which is why I keep bringing up Finn. But I think that the voice that's missing from that is, in this case, the high school student. We could also argue the college student, but I would say even the high school student and below even more, because they are really incentivized to do exactly what Niall said, which is cheat,
Starting point is 00:08:53 which is to sort of say, you know, we've kind of given them this structure where the only thing that matters is grades. And when people say students are so much better at this than adults, I'm like, yeah, because students are incentivized, as Niall sort of pointed to, in a way that senior tenured people in organizations are not. If they have ChatGPT, it'll write a paper for them very quickly, and it's very, very good, and that helps their future prospects in life. So they're extremely incentivized. But let me leave off, too, with kind of, you know, the olive branch over to Niall on this, too, which is, you know, I was doing a thing for Google where they were saying, well, in the MBA program, which again, graduate business program, how should we now teach
Starting point is 00:09:30 marketing in an age of AI? I'm like, I think we should teach marketing the same way we've always taught marketing. Because if people are using AI instead of critically thinking, then how are they going to determine what quality looks like when they get out to the workforce? It doesn't work that way. They have to build the muscle first. And also, as Niall said, I would find it horrifying, people writing in class six or seven hours a day by hand. I do. I find it horrifying, but only because I see the pain on my kids' faces. But I don't have a solution. I don't know what else you can do. But you have some solutions in practice, right? Like, you want people using this within the classroom and within a pedagogical context,
Starting point is 00:10:12 right? So can you just lay out a few of those best-case scenarios of productive uses here? Yeah, absolutely. In the framework that we're building out, there was this idea that you have sort of a lockdown, which is: there are certain skills that people absolutely need, and for those you have to get rid of AI. But otherwise, I just want to say that this gives us the potential of using the best potential learning tool that has ever been created in history to really advance and augment critical thinking, in the moment. That's going to require a very serious rethinking of how we teach and a very serious rethinking of the proxies for grading. But I really do think that this can take young people so far beyond where they are. And I'm talking in terms of skipping
Starting point is 00:10:58 entire grades, almost, with the ability, if used properly, to go home, work with AI, and then have the teacher say, okay, our expectations for you are much, much higher. Can I just, you both use the word cheating. You know, what is cheating with AI, and is the way we've been thinking about cheating in universities, and even the term, sort of, plagiarism, do we have the right framework for thinking about this? I don't think that's a difficult question
Starting point is 00:11:28 because obviously, if you were to submit an essay that had been written not by you but by a tutor or a parent, that would be cheating. It's no different if you claim that an essay written by Gemini is your work. I think this is straightforward, because the act of writing involves really some quite important cognitive muscle flexing. I don't think one has really thought a problem through, certainly in my experience, until one has had to write down what your solution to the problem is, or at least what your analysis of it is. So that's the easy bit. Can I just push you on that, one little thing? Is it the act of writing, the end-state act of writing, or the entire process of creating that
Starting point is 00:12:21 essay? So what if a student uses AI to develop their outline, or to brainstorm their structure, or something like that? Is that cheating as well? What if? That's what they all do now. The problem is... I think they're using it for all stages, right? But that's very hard for us, adjudicating that, to determine at what point they have. This is why Conor said something very important before. We're going to agree a lot, actually, Conor. When you said, if you use it in the right way, it's potentially the greatest teaching and learning tool ever. The wrong way to use it is the way it's currently mostly being used,
Starting point is 00:13:06 which is to cut corners so that you don't have to read, think or write. And these stages, you know, reading, absorbing information, then thinking, and then writing, are tremendously important brain muscle actions. And if you don't learn how to do those things, then you really aren't educated. And that's a problem. The right way to use the large language models, just to focus on those for a bit, is the way that they're using them at a remarkable school, the Alpha School in Austin, Texas, just down the road from our new university.
Starting point is 00:13:48 And there, MacKenzie Price, with the support of Joe Liemandt, is doing, I think, what Conor has in mind. That is to say, using the LLMs rather in the way that Neal Stephenson describes in his wonderful book, The Diamond Age, where the student has the ability to develop, in a kind of customized way, a question-and-answer relationship with the LLM. And this can greatly accelerate learning because, of course, it does tailor the process to the individual in a way that the traditional classroom just can't. Let me be clear. The traditional classroom, the way we have been doing things pre-ChatGPT, was already broken in a whole range of ways.
Starting point is 00:14:39 I've been saying for years, I cannot understand why professors give lectures. Lectures are an incredibly bad way of teaching. The Socratic method has been around since, yep, Socrates. It's better, but it's hardly used. And I could go on and on. So things sucked already. And that's partly why students cheat. Because if you're confronted with a system that sucks, where you go to a lecture, some guy like me drones on for half an hour or an hour, and then you're given this list of things to read, all of which are kind
Starting point is 00:15:15 of slightly turgid, and then you're expected to turn in an assignment which regurgitates them in some way. Right. I mean, of course, people are going to take the line of least resistance once it appears, because none of this was particularly satisfactory before. So I think there's an opportunity here, and I'm beginning to see that it can be done radically differently at Alpha School. We put our seven-year-old son into that school for a week and it was clear that it had a tremendous benefit for him. So I think we need to be as
Starting point is 00:15:47 innovative as they're being there. We kind of have to reinvent education in order to make this work. If we leave the old system and just patch on large language models like ChatGPT, we're going to end up with the worst possible combination. It's a little bit like, you remember, the Hungarian economy under socialism: they pretend to pay us and we pretend to work. That'll be it. That'll be university. We will pay the tuition and then everybody's going to pretend to work, including the professors.
Starting point is 00:16:15 Or even worse, our AIs will mark their AIs. Yeah. I mean, that's already happening. That is already happening. So, look, just to push you on one thing, I mean, I think people who have read your essay will be a little surprised that you're putting your child in a school that uses AI. Can you say what Alpha School is doing differently than what you see happening on campuses, and what maybe higher education could learn from that? So Joe Liemandt and MacKenzie Price have a common
Starting point is 00:16:40 view that school, as they initially experienced it, was dreadfully boring. They've created a system which is very much using AI to allow students to learn at their own pace. And they've created incentives along the lines of: if you get this all done in two hours and really smash it, you're done for the day, you can go play outside. So they're really changing the way that we think of education. Instead of everybody having to sit there for six, seven or eight hours, they can just say, get through the work. If you do it really well, you're done. And so that was something that our son Campbell found exciting and disconcerting, but liberating. I think what's important here is the idea that you don't say, here's an assignment, and then they go off and get
Starting point is 00:17:31 ChatGPT to do it. You actually say, here are a set of things that we'd like you to master. Maybe it's a set of mathematical concepts. And you're going to play with the problems. You're going to do a whole bunch of problems, and the AI is going to see how quickly you learn. It's going to see how you get along, and it's going to respond to the way you do in the first run of problem sets, and that will generate the next set accordingly. And so instead of the student using the large language model to shortcut around an assignment, actually the student ends up working a lot more intensively
Starting point is 00:18:14 to achieve certain goals, interacting with a kind of living encyclopedia. It's also probably more fun, right, Niall? I mean, it's a more enjoyable way of learning as well. The Alpha School is all about making going to school fun, and I do think they're really onto something here, to the point that I think we, the University of Austin, have to learn from what they're doing. I am certainly struck by the fact
Starting point is 00:18:41 that artificial intelligence requires us to reinvent education fundamentally to make use of these tools. If we don't do that, then I think the tools are going to, in fact, be misused and the net educational impact will be very negative. But I can see from what's happening at Alpha School that this can work extraordinarily well, particularly for smart kids,
Starting point is 00:19:04 but also for kids who struggle, because it's the fact that it can be customized for the individual student that seems to me so potent. Anybody who doesn't quite know what I'm talking about, and who can't make a trip to Austin, should just read Stephenson's book The Diamond Age, because it tells the story of a little girl from a totally deprived background,
Starting point is 00:19:22 who happens to stumble on what we would now call an AI. In fact, Stephenson's kind of ahead of his time. He's writing in the 1990s. But this is essentially a kind of living, talking book that evolves with her. There's a relationship between the little girl and the book. And whatever her question is,
Starting point is 00:19:45 whatever it is she's interested in, it helps her learn about the world. So I remember loving that book. I think it's Stephenson's most brilliant book. But that was, of course, inconceivable in the 1990s. It was a sort of vision of a science fiction future. The extraordinary thing is that future is now here. And little girls all over the world, including, as Conor said,
Starting point is 00:20:06 in places where educational provision is barely existent, now can access, without too much trouble, a world of adaptive knowledge, knowledge that comes to you in the right way, at the right time, in the right volume. I find that hugely exciting. Last thing I'd like to say: the original title I gave that essay was 'The Cloister and the Starship'. And the idea I wanted to convey was that we need to spend time in the cloister with just
Starting point is 00:20:39 our brains to learn certain foundational skills of cognition and communication. But when we come out of the cloister now, unlike the monks of the Middle Ages, we can get into the starship. And that's an amazingly exciting combination. So I don't want to give anybody listening the impression that I'm a Luddite. I think Conor and I basically agree. And the key question, which we haven't really addressed, is why are the established institutions so slow? I asked the other day somebody quite senior at Stanford University, where I spend part of my time at the Hoover Institution: hey, what's the university policy on the use of AI?
Starting point is 00:21:18 There isn't one. Right. There isn't one. Part of the challenge here is that, I mean, you guys broadly agree, I think, on most of this. But there's a lot of devil in the details of how this is rolled out. And essentially, as you say, a lot of it's just being pushed down to faculty, because I do agree institutions are in a mode of avoidance here. But we're being asked to sort of rebuild a 2,000-year pedagogical model on the fly
Starting point is 00:21:59 using a technology that is evolving by the week. So, Conor, one of the things I think we're touching on here is this moment of real cognitive development using a technology that allows for cognitive offloading. And those are really in tension with one another in a university. So, Conor, how do you think through that? Like, how do we use these tools without that risk of cognitive offloading that can be so damaging to exactly what we're trying to do at university? Yeah.
Starting point is 00:22:30 I mean, the short answer is, I don't know. But the more sort of, like, hopefully robust answer here is, yeah, I mean, it's easy to sort of say that we should reinvent education. I totally agree with, you know, with Niall on this. But, like, you know, how? I mean, I think that the relevancy here is who's incented. So when I see people out there incented in the working world, it's very limited. It tends to be sort of small startups.
Starting point is 00:22:50 People are like, oh, my gosh, I need every tool I can get, you know, because I have to do too much work, and here's one. But that is not the huge majority of people out in the workforce. And so when we think about education, the idea is that this requires a tremendous amount of educational, political will, on the education level, whatever that is, educational will. And that's not the education system, certainly in our country, in the United States, for example, or in Canada, or I would even say Western Europe, right? It's just not how it works. We're like, nope, got this. And why is that? Because faculty have spent many, many years doing something the exact same way.
Starting point is 00:23:25 And they've been voted faculty of the year and all that kind of stuff. And they know how to do it and everything. We sort of saw the same thing a little bit during COVID. When everybody went online, they just tried to move everything online. And then the real innovative people were like, well, what's a better way to learn now that we have these new systems, or something like that? But I think the first thing we have to remember is in the way that I teach. So I have a company called AI Mindset: we do generative AI, we do AI adoption, but we don't do it through teaching tools.
Starting point is 00:23:49 We do it completely through understanding how the brain works and why the brain struggles with this. It has everything to do with the brain. So even as we're saying the tools are developing, like when I go out and talk to companies, and I talk to some of the biggest companies in the world on this, my presentation hasn't changed in two years, because it has nothing to do with technology. It has everything to do with how our brain operates.
Starting point is 00:24:09 because there's a lot of people invested in how education works and there's not a lot of people and I love our teachers I come from teachers like I work with teachers but I don't see a ton of teachers being like all right can't wait to change the way I've done everything in the way I've gotten
Starting point is 00:24:24 my PhD, the way I've done this my entire life. I just don't see that. And so that has to be incentivized. I think incentives are everything. So that's number one. And then I sort of like want to pivot into something that I hope this doesn't get clipped as a sound bite, because I can imagine this headline. But what are the skills we actually really need?
Starting point is 00:24:41 Right? I mean, like, and by the way, I'm a writer. I've written books. I'm a published author, all that kind of stuff. So I care very, very deeply about writing. But I have to look in the collective global mirror here and say, do we still need to know how to write? Giant question mark, by the way; this is not Conor saying we don't still need to. But what I mean by that is, obviously, calculators are kind of an easy example. But if we think about the calculator, all it did was democratize math. But it's not like kids don't have to learn math. And so I think that's probably going to be what we need to do. And gosh, my kids are going to kill me for saying this. But I think they need to write by hand or on an air-gapped computer so they learn
Starting point is 00:25:17 how to write. Not because writing is intrinsically important, in the same way learning long division is not intrinsically important. But you don't see people working at NASA. It's not like because you have a calculator, you can work at NASA. Or you can be a quant at a hedge fund. It requires skills beyond just the democratization of math through a calculator. But the important part, I think, is, do we still need writing in the same way we still need math? So what I mean by that is, I was just having this conversation yesterday with somebody I really respect, the CEO of a company. And she was saying, you know, I don't know that I still know how to write.
Starting point is 00:25:49 I'm using Claude and ChatGPT and things like that. And I'm feeling awful about it. I'm like, yes, but you did learn how to write. And so you're recognizing good quality. And I think, and I don't want to put words in Niall's mouth, but where I come from on this is, at the very root, kids need to learn what good writing looks like. Otherwise, I think that we are going to come to a point where everything is just AI slop. Can I just pull on that writing thread a little bit here? Because I mean, I feel like
Starting point is 00:26:15 there's writing as an output and something we consume. But there's also writing, as Niall, you expressed at the very beginning, as a form of thinking. And so what happens when we detach writing from learning? And can we do that at all? Is writing core to how we, particularly in that phase of our brain development, learn to think? Well, let me put it like this. When conveyor belts were invented, we could have eliminated walking. We could actually have made it possible to go everywhere on conveyor belts, and we could probably have made them quite fast. I see quite fast ones at some airports. Or hoverboards, you know, either one. But the point is that actually we all go, I bet all three of us go to the gym quite frequently. And we actually embark on
Starting point is 00:27:12 physically difficult activities that are pointless except for the fact that they keep us fit. And most students that I see at Stanford look to be in pretty good shape physically. But for some reason, we don't apply the same rules to our brains. Now, the point about writing is not that everybody should write a novel. In fact, I wish I could stop people writing novels. Far too many novels get written. And I wish I could also cut down the number of op-eds that get written. If we could do one thing for the world, it would be to decrease the number of op-eds. Please, people, write less. Most of you really don't write anything that interesting. But the point is that in learning to think on the basis of what we have read and then to write,
Starting point is 00:28:02 we're getting our brains fit. I'll give an example. Conor will probably recognize this, you too, Taylor. I used to find that until I had taught something, stood up in front of a class and taught something, I wasn't quite ready to write the book. And this is all about getting your brain fit. Because if our brains are obese,
Starting point is 00:28:24 we kind of watch some TV and we kind of get involved in conversations. We have this rough idea about, let's say, AI. And we can have a conversation about, oh, yeah, I heard about AI. Yeah, it sounds really scary, but also kind of sounds kind of good as well. Yeah, you know, that's the obese brain. Hasn't really absorbed anything about AI, hasn't thought about it. It can have a conversation about it, but the conversation's entirely vacuous. So what we really want to do in education is to have very, very fit brains,
Starting point is 00:28:56 brains that can very quickly absorb lots and lots of complex data, not necessarily in the form of words. It might be just the form of data or lumps of pottery, but they can absorb data in large quantities. Then they can think analytically, what does this signify? What's the pattern here? And then they can communicate to other human beings by writing or by speaking what they think they've inferred from all of this.
Starting point is 00:29:19 These are the things that make our brains fit. And there is no doubt in my mind that in a world of very powerful computers, which can not only be large language models but can also do scientific research, our brains need to be super fit if we're to have purpose, if we're not simply to become Yuval Noah Harari's cow-like creatures, milked for our data by AI.
Starting point is 00:29:46 So I think, just get into the mental gym, people. I say to the students at the University of Austin, one day I'm going to come in here, I'm going to tell you you've got two days to read War and Peace, and you're going to be just shut in the library with the book, and then you're going to come out and I'm going to ask you, what's the meaning of this book?
Starting point is 00:30:04 That's the kind of thing that a smart person can do. Yeah, but Niall, so let me ask you this, because this is what I wrestle with all the time, right? Which is, and I love your two examples of exercise. So the example I sort of give is sometimes getting on the treadmill. And the reason that we may get off the treadmill very quickly is because our limbic system prioritizes, you know, quick rewards and conserving energy, right? It's sort of like, this is why behavior change
Starting point is 00:30:24 It's sort of like this is why behavior change. is so hard. So what I would posit here is that I see young people, by the way, so fit these days too. High school, college, and it's part of the culture, I think. But also, there is a huge incentive for them to get fit, right? They will look attractive to other people. It's almost like what drives us as a species, et cetera. And the challenge that I find that I'm trying to figure out, like, how do you incentivize students? Because students aren't incentivized by critical thinking. And they're not incentivized by learning. They're incentivized by, will they get the grade that, I mean, what you hear all the time like I'll just get into the great college and then I'll figure it out or I'll
Starting point is 00:30:57 just get into the great law school and then I'll figure out all that kind of stuff but we have set up a system that you were referring to earlier which is grades are the holy grail like I mean everything else and I'll figure out everything else later it's the incentive structure so I'm wondering when you think about that and like hey guys like you have to do this it makes sense uh but the I feel like the internal incentive structures is broken no I think the employers have incentives to the elite employers know that, for example, the Harvard degree can't really be worth what it used to be worth when the only grade that's given at Harvard is A. So the perception that grade inflation has caused a very serious decline in standards at the established institutions incentivise the best
Starting point is 00:31:42 employers to find other ways of assessing ability. So you don't get hired by the big tech companies or the big Wall Street companies just on the basis of your GPA anymore, because they figured out over the last 10 years that that's not a good signal at all. So I think one of the interesting things that's happening is that recruitment is becoming more and more creative. I mean, I think of some of the quant hedge funds and how they recruit: it's actually by setting a whole bunch of examination-type challenges to the would-be entry-level people. So I think the system's changing, because there's an incentive, if you're an employer, to find the really smart people, as opposed to the people who are graduating summa cum laude, who took all the
Starting point is 00:32:27 soft courses. I think what we're talking about here is partly how do we make young people care as much about their brains as they care about their bodies. Now, my perception of academic life of university life is that while, of course, the athletes may attract a certain number of members of the opposite sex, there is still something sexy about being smart. I mean, I think. Oh, Neil. Maybe I'm just dreaming here, but I always felt it was my witty repartee. I always thought it were, I mean, I used to think it was the jokes.
Starting point is 00:33:13 Anyway, but I think that's part of it that we're not just interested in people's bodies. Somebody can look like a supermodel or an Olympic athlete. But if what they say is just unbelievably dumb, it's not going anywhere. The other question is, how do you persuade people to take the same attitude towards academic success as currently exists in the military towards the elite combat formation? So I use the phrase the Navy Seals of the mind to describe the graduates that we want to produce at the University of Austin. I want to convey a sense that there is an elite quality in the realm of intellectual life that we've not been valuing for the last 10 or 20 years.
Starting point is 00:33:59 But now it's time to change and say, no, no. What we care about is brilliance, is real intellectual brilliance in the same way that the Navy SEALs care about people who are physically extraordinarily courageous. So I think changing those norms, like it's happening, like you want to push in that direction so that young people, are interested not in the perfect GPA, which you achieve by cynically taking the easy courses and telling the professors what they want to hear. Now we need to say, no, no, no. That's so 20 years ago. Now what we want are people who are just drop dead brilliant. And they can play a game
Starting point is 00:34:36 of chess while at the same time coding, while at the same time doing math problem sets, while at the same time writing sonnets in ancient Greek. I mean, those people do exist, but they aren't valued as much, certainly not in the established institutions as they should be. I mean, that's the key point, right? Not in the established higher education universities. And I, to say something a little bit provocative here, like, I think that norm is changing outside of universities, the emergence of long-form podcasts, the way YouTube is allowing people to go deep on topics that they couldn't before, is incentivizing a kind of intellectual curiosity, I think, that is not dissimilar to the fitness craze that's having.
Starting point is 00:35:17 happening in those same worlds, right? I think there is something going on there that people crave more. They might just not be getting it from universities. So let's just touch on that for a moment here. Conner, so universities are large bureaucratic institutions that are incredibly hard to move and to evolve. How do we convince these institutions that rethinking what they do in light of this new technology is existential for them. I mean, I think they know that intellectually. So first of all, I think we have to determine what problem are we trying to solve here, you know, and I think that it has to be done pretty slowly and pretty carefully because, you know,
Starting point is 00:36:01 Taylor, what you're hitting on is exactly right. Like we are, and I think you phrased it exactly right, which is we're talking about giant bureaucratic institutions. These are not driven by, well, you know, will I earn more money if I can produce students who are real critical thinkers. And by the way, again, family of teachers. I work in university. Like, I'm very passionate about education faculty.
Starting point is 00:36:19 But we have to understand that most people have this very deep commitment to how they have learned and how they've always taught in the past. And by God, AI is not going to change that, people. So if that means that you have to come into the classroom and just write it out by hand, it's just a colossal, colossal missed opportunity. If you can sort of extrapolate from Niall's son's school, which is really kind of thinking about this
Starting point is 00:36:43 on the young age and making education more fun, I think maybe it turns from fun into more incentivization. I think we have to be realistic about incentives. But the idea is how do you actually incentivize faculty members in a state and in an institution that doesn't work like that? I mean, you can have an existential threat to this. This is why you see companies like meta spending billions and billions of dollars because that's an existential threat to their business. It's P&L. It's money. The market drives this. That's not the case in institutions. So I think that there, first of all, I agree. Like when Stanford doesn't have a policy on AI, when NYU, my schools are struggling
Starting point is 00:37:19 to find their policy on AI, there has to be a new way of thinking about it. So instead, I would focus much less on the tech and much more on how do we get people excited about using this? Because once people start using this and start using it as a learning tool, and then, and I don't know where Neil falls on this, I'd like to ask him, I think that we have to put the onus on the guardrails. I think that it has to be, look, you can. not learn this way because I just think there's too many teachers who will say, well, I've
Starting point is 00:37:45 always taught this way. It's great. And too many students are like, yep, I get it, but I'll learn when I'm out of law school and it doesn't matter anymore. But right now the incentive is getting this grade to get into a better law school. I think that the only saw, and by the way, this is a bit of a ludite, very limited viewpoint, which I hate to have, but I don't know another solution. I think the guard rules have to be in place so firmly, which is where Neil started this conversation, which is what if it was, he said six or seven, I would say more like, you know, three or four hours a day where you have no access to AI. You have to learn. So that's the problem I'm wrestling with.
Starting point is 00:38:14 You know, you are maybe in the singular unique position of having spent decades inside the oldest university institutions and now being a part of creating a new one. I can only imagine how the older institutions would respond to your proposal. And I know it wouldn't happen quickly or maybe even at all. But how has the response been within a brand new one? Is this happening this year at the University of Boston, your Cloisters and Starship model? I need to answer that question after I pay my next visit, which is in a couple of weeks, then I'll be able to say if it's working.
Starting point is 00:38:51 The challenge, even in a new university, is to get the professors to change the way they do things. As Conor rightly says, at the heart of all universities are tenured faculty, with academic freedom, including on how they teach. Not only can they not be fired, but they get considerable autonomy about how they go about things. And the reason that a university doesn't have an AI policy is that the default setting is to say we leave it to the professors. And that's what you'll almost certainly hear at most institutions.
Starting point is 00:39:24 But what does that mean? That means that men and women in their 40s, 50s and 60s are essentially allowing the students to misuse AI. because they themselves don't really understand what's going on outside the classroom, and they have grown accustomed to do things in ways that are very easy to game. So we have to change that. It's just easier to do at a small institution with fewer than 200 students than it would be at Harvard or at Stanford.
Starting point is 00:39:59 I only became involved in creating a new university because I just thought the established institutions couldn't change themselves, that the incentives, internal incentives, are just, all pointing in the wrong direction. We have to reinvent higher education. That is very clear. Even before CHAPT, it was clear. And that's what we're trying to do in Austin. I hadn't fully realized until I looked at Alpha School that the reinvention could be even more radical than I thought. And I'm beginning to see how the new education, the educational institutions of the future are going to work. And I still like my cloister and starship analogy, because I like
Starting point is 00:40:38 the idea that my kids are going to spend some of the time learning the core skills, including how to do calculus and read Tolstoy, but then they'll be unleashed and let into the starship to use large language models and all the other things that AI provides, equipped with the mental discipline you need. Let me put it, see if you agree with this con, let me put it like this. At the heart of using a large language model, well, is the way that you write the prompts. My view is that somebody who has not learnt to think properly is not going to write good prompts, is not going to really be able to use the tool at all or will use it badly. Do you agree with that, Conner, because it seems to me that part of what we're trying to do when
Starting point is 00:41:25 we get people mentally fit is to equip them with the kind of cognitive skills that will enable them to use AI optimally. Yeah, I actually may take a different tack on that. So I kind of compare it much more to sort of like a managerial expertise. Like if you know how to get the best out of somebody rather than writing. So I think of it more, probably less about the prompt itself and more about how would you instruct a new colleague or a new employee? How would you get the best out? And there's good managers and bad managers and you can, you know it when you see it a little bit. So maybe I might flip it on its head and say, I think the critical thinking is to take the output rather than the input and see is this good. And also, and I want to see if you
Starting point is 00:42:07 agree with this. It's actually, the output can be very, very good, but it doesn't have to be right. So I just did a thing for Masterclass, if you know, the brand masterclass. And like, people always talking about, you know, hallucinations. I was trying to talk about how to get over the problem of hallucinations, which is when it lies very convincingly. I'm like, that's, I'm not even sure that's the biggest problem. Hallucinations, you can sort of spot much easier than you can spot the synchofancy, like, oh, that's a great idea, when in fact it's not a good idea. And the second part of that is, what if it's giving you outdated information? Do you know how to draw in the right part of information? So anybody can look at an output,
Starting point is 00:42:37 be like, that's awesome. But the problem is, do you have the critical thinking to ask the right questions? Like, where is this getting into this information? Is it just telling me I'm right? Because blah, blah. And then the third thing, I think, is the hallucination. So I'm with you on the critical thinking is critical. I would probably put it on the output, judging the output rather than the input.
Starting point is 00:42:53 Yeah, I agree with that. My impression thus far, and I, you know, continue to run these experiments, is that there's a real problem in the fact that the models have not really been trained on the full corpus of high quality knowledge because Google books lost their case and a lot of extraordinarily important literature is not accessible online. And so what I notice when I ask, say, Gemini or deep research
Starting point is 00:43:24 to answer a question is that it comes back rather thin without the kind of depth of scholarship and knowledge that you would have if you had access to, all the books in the Bodleian Library or Widener. And I think that's because of that very important case that Google lost. Google wanted to put every book ever written, including all of mine, on Google Books. Now, they kind of won with YouTube because every single piece of content I ever did on television is free available on YouTube now, but they lost the books case.
Starting point is 00:43:57 And that means that it's hard to train a large language model on the real quality literature that's been published in our time. So I'm still, to be honest, I'm underwhelmed by the outputs. I read them and I think, eh, B plus, maybe at best. And it's never original. It can never, ever come up with anything original, except when it's making it up, which is obviously not what we want. I think we might have actually found our point of disagreement here.
Starting point is 00:44:26 And I think I'd love to keep going down this path. It took us an hour, but we got to this point. I suspect a little bit of disagreement about what you just said there, Neil. But I want to bring this to a wrap. And like maybe just to get you both to reflect a bit on the stakes here. Because I do think there's a lot at stake in how we learn to think, how we train future generations to think and learn. And that's really tied to who we are as humans, right?
Starting point is 00:44:50 I mean, it has been for 2,000 years anyway. And I don't expect it to change in its importance. But Neil, in your essay, you said something quite striking, that strict prohibitions on devices will have to be insisted upon if the rapid advance of pseudo-intelligence is not to plunge humanity into a new dark age. Can you just, why so stark there and what's at stake? Well, pseudo-intelligence is Neil Stevenson's joking name
Starting point is 00:45:20 for artificial intelligence in the book, The Diamond Age. It's one of the little jokes he slips in. Oh, we call it P-I, not AI. So credit where it's due. John Haidt has written, very compellingly about the damage we've already done to young people's brains by allowing them to have smartphones and social media or what used to be called social media and is now
Starting point is 00:45:43 actually AI media rather than social media. And I think the next level damage is what we're currently doing because we're essentially cutting off a generation from learning the key skills of absorbing data, thinking about it analytically and then producing convincing communications about it. Henry Kissinger, whose biography I'm in the midst of writing, without the help of AI, said after he had large language models explained to him, he wrote a brilliant essay for the Atlantic saying, this is the potential to take us back to before the Enlightenment and the scientific revolution, because things will start to happen around people that they can't explain. I mean, you don't really understand how the AI arrived at its answer. And that was a very
Starting point is 00:46:32 I thought profound insight for a man in his late 90s. He saw early what it implied. And I think he's going to be vindicated if we carry on down this road. I think we're going to have a generation that is even more cognitively harmed than the generation that just was
Starting point is 00:46:49 hit by smartphones and social media. Connor, how do you frame the stakes? Do you agree? It's such a, it's a great question. And so I think the actually the John Hight analogy is sort of apt. He wrote the anxious generation. He's a colleague of mine over at Stern, so we get to do some stuff together over there. But it's funny because my kids just started back at school. And they just instituted
Starting point is 00:47:11 that policy that John has been pushing, which you can't have your phones during the school year, or sorry, during the school day, which we're all in favor of as parents. But interestingly, like in John's research shows all this, the kids want that too, right? The kids are happier when they don't have their phones. So what does that mean for this? This is the question, right? Because it's even if the kids even the kids themselves if we said hey you said you're happier not having your phone yeah absolutely okay then don't have your phone it's like but that's kind of what addiction is right and maybe to sort of like take a slightly different tack on this are we talking about addiction now I'm not sure AI is a bad addiction I think I mean from a adult standpoint like I don't think
Starting point is 00:47:50 my addiction quote unquote to AI is a bad thing I think it helps me produce much better things but I'm also like learning I don't have I have a pretty good incentive structure in my life but kids don't And I guess what I would say on this is even when they know that they need to critically think. So maybe I'll say this. The way that Finn and I teach this together is we talk about like a mountain, right? And from the bottom of the mountain, the kids just think, okay, the top of this mountain, I just got to get up to the top. And one is a trail and the other is a gondola.
Starting point is 00:48:18 The winner gets to the top. And then they get to the top and they realize it's not a mountain, it's a plateau. And they have to now, it's the beginning. It's the starting point. It's not the end. And now they have to go into this workforce. And the people in the workforce are going to be like, well, who, has the muscles to handle this, right? And so that's the problem. So even if we tell them,
Starting point is 00:48:34 and even if they know intellectually, the incentive structure is all screwed up, I think that we probably have to force kids, even though they understand, even though they want to critically think, we have to force them to critically think because otherwise the incentives, in the same way, if you just give them Snapchat and Instagram and everything else, they'll use it, even though they don't want to, I think we have to have guardrails in place. And I don't know how else we do this. Maybe that's short term. I don't know, but I agree with Neil. I think that critical thinking is that important that it deserves our attention and putting the guardrails in place for their own protection.
Starting point is 00:49:06 Look, I think that's a pretty great way to end this conversation. And thank you both for both talking about this, but also the thinking you're putting into this moment of transformation. And I started by saying, I think this is transforming the university. And I think it fundamentally is. And it's our responsibility to figure out how to keep that model alive in some capacity, I think. So thank you both for the work you're doing on that and for talking about it. It's been a pleasure. Thank you.
Starting point is 00:49:39 Machines like us is produced by paradigms in collaboration with the Globe and Mail. The show is produced by Mitchell Stewart. Our theme song is by Chris Kelly. Host direction from Athena Karkanas. Our executive producer is James Milward. Special thanks to Angela Pachenza and the team at the Globe of Mail. If you like the interview you just heard, please subscribe and leave a rating or a comment. It really does help us get the show to as many people as possible.
Starting point is 00:50:10 Machines Like Us is supported by the Max Bell School of Public Policy at McGill University. The Max Bell School offers public lectures and seminars across Canada and online in addition to their 11-month accelerated Masters of Public Policy program, which is accepting applications now. more at McGill.ca slash max bell school. Machines like us is also supported by CFAR, a global research organization proudly based in Canada. From leading Canada's national AI research and talent strategy to advancing research in AI safety, CFAR is the catalyst for the exceptional thinkers reframing our future. Explore their work at CFAR.ca.
