Ideas - Science fiction isn't fact, no matter what Big Tech tells you
Episode Date: April 15, 2026. Some of the biggest minds behind AI may have you thinking a Terminator-like robot is coming for us. But literature professor Teresa Heffernan says tech giants have taken their readings of science fiction plots too far, and failed to provide strong evidence for grandiose claims that originated on the pages of science fiction. She argues there are many reasons to fear AI, but an android uprising isn't one of them. Heffernan is a professor of English language and literature at Saint Mary's University in Halifax. She delivered the 2026 Wiegand Memorial Foundation Lecture at the Jackman Humanities Institute, University of Toronto.
Transcript
This is a CBC podcast.
We're bits and pieces of information, but what we know for certain is that at some point in the early 21st century, all of mankind was united in celebration.
We marveled at our own magnificence as we gave birth to AI.
AI.
AI?
You mean artificial intelligence?
A singular consciousness that spawned an entire race of machines.
Welcome to Ideas.
I'm Nahlah Ayed.
For decades, fictional works, novels, short stories, television, and movies, like the 1999
mega hit The Matrix, have shaped our idea of what artificial intelligence could be, and how it might
impact the human race.
We don't know who struck first, us or them, but we know that it was us that scorched the sky.
That scenario, where humans create machines, machines then outsmart their makers and ultimately turn on them, launched dozens of franchises.
Throughout human history, we have been dependent on machines to survive.
Fate, it seems, is not without a sense of irony.
Today, some of the most well-known creators of machines, the so-called godfathers of AI,
are saying without any irony that Hollywood's
favourite science fiction storyline is closer to becoming fact than ever before.
Yoshua Bengio, welcome to 7.30.
What is the worst case scenario as you see it?
Well, there are many bad scenarios.
You know, humans using AI, but there's also this possibility that AI tries to escape.
We've seen that they already don't want to be shut down.
So where would that go if they had more intelligence?
Some people think that this could lead to human extinction.
if they are really smarter than us and they escape our control,
we really need to figure out these risks.
As dire and as imminent as it sounds,
this scenario isn't universally accepted.
Please welcome to the stage Dr. Fei-Fei Li,
Sequoia Professor in Computer Science
and co-director of the Human-Centered AI Institute
at Stanford University.
Experts are divided about whether or not
we're on the eve of an AI doomsday.
What is the thing that you worry about the most?
I worry about catastrophic social risks that are much more immediate.
I also worry about the overhyping of human extinction risk.
I think that is blown out of control.
It belongs to the world of sci-fi and just compared to the actual social risks,
whether it's the disruption of disinformation and misinformation to our democratic process,
or the kind of labor market shift or the bias and privacy issues,
these are true social risks that we have to face
because they impact real people's real life.
Pondering about it, there's nothing wrong about...
So where is the line between science fiction
and the science of artificial intelligence?
It's a question that Professor Teresa Heffernan
has been thinking about for a long time.
In the early part of the century,
I was seeing these headlines about how science fiction was coming true. You know, they were all over the papers.
And I saw the tech industry also using fiction to market what is pseudoscience. And so
there was a lot of stuff about, you know, we're going to merge with machines and we're all going to be
immortal. And it was all kind of ripped from the pages of fiction. Can I preempt you by announcing
the new dialogue? Okay. Literature versus the AI Industry: Techno-Monarchists
and the Drive to Reduce the World to Numbers.
Please join me in welcoming.
Professor Heffernan.
Professor Heffernan recently delivered the 2026
Wiegand Memorial Foundation Lecture at the University of Toronto.
So thank you all for coming out,
and thank you, Alison, for the introduction,
and also thank you to the Wiegand family for supporting the humanities.
In this talk, I want to consider how both this relatively
new technology disrupts the humanities and is reshaping society, and to suggest that using the
humanities to disrupt the AI industry might in fact be a better plan. In the 19th century...
I welcomed Professor Heffernan into our studio shortly after she delivered that lecture. So first of all,
just so we have it on tape, can you just introduce yourself? Yeah, my name is Teresa Heffernan. I'm an English
language and literature professor at Saint Mary's University in Halifax. And I've been working for about
almost two decades on that relationship between science and fiction, particularly the science or
the AI industry. In those interviews and that research that you did, what was the question you,
what was the basic question you were asking? It was a little more investigative. Like I was just like,
I wanted to know, well, what do you mean by intelligence? And what do you mean we're going to have
super intelligence. And what do you mean the robots are going to take over? You know, like,
how is that working and how are you producing human-like robots? They're big in science fiction,
but so are talking lions. You know, it is fiction. So, right. What Professor Heffernan found
was that well before AI or even computers were a reality, fiction already had a lot to say
about our relationship with science and technology and where it was all headed.
You can go back to the kind of origins of the field with people like Alan Turing, one of the fathers of AI.
He was great at breaking code and helped decipher the Enigma machine and helped defeat the Nazis.
But he was a very literal reader of fiction.
And he read a novel, which is a 19th-century novel, called Erewhon.
And Erewhon is a novel by Samuel Butler.
It's a 19th-century Victorian novel.
It's a satire.
If all machines were to be annihilated at one moment, so that not a knife, nor lever, nor rag of clothing, nor anything whatsoever were left to man but his bare body alone that he was born with, and if all knowledge of mechanical laws were taken from him so that he could make no more machines, and all machine-made food destroyed, so that the race of man should be left, as it were, naked upon a desert island, we should become extinct in six weeks.
A few miserable individuals might linger,
but even these in a year or two would become worse than monkeys.
Butler had read Darwin and then as a sort of,
what he referred to as a specious analogy,
in other words, this kind of playful applying
of the theory of evolution to machines.
And if you read the novel, it's quite funny
because they are like, you know, this whole society
that destroys all technology,
including women's washboards,
because, you know, because they might evolve.
Man's very soul is due to the machines.
It is a machine-made thing.
He thinks as he thinks and feels as he feels
through the work that machines have wrought upon him.
And their existence is as much a sine qua non for his as his for theirs.
This fact precludes us from proposing the complete annihilation of machinery,
but surely it indicates that we should destroy as many of them as we can possibly dispense with
lest they should tyrannize over us even more completely.
But when Turing read it, he read it literally,
and he references it in his science papers.
It seems probable that once the machine-thinking method had started,
it would not take long to outstrip our feeble powers.
There would be no question of the machines dying,
and they would be able to converse with each other to sharpen their wits.
At some stage, therefore, we should have to expect the machines to take control,
in the way that is mentioned in Samuel Butler's Erewhon.
Post-war, he wrote an article that explored the possibility
of building what he called a mechanical brain
and birthing a child machine that would evolve,
but he lost interest in the subject just as computers started to sell.
Despite his war service, he was persecuted by the British government for being gay,
and in 1954, just shy of his 42nd birthday,
he very likely committed suicide by ingesting a cyanide-laced apple.
He has since been dusted off and resurrected as the father of AI,
and his paper, Computing Machinery and Intelligence, published in 1950,
is one of the founding texts in the AI industry,
although the term was not coined until four years after his death.
This article famously begins by asking, can machines think?
But importantly, thinking immediately gets replaced by the imitation game,
where Turing proposes that if a computer can convince a judge it is a man,
the computer will have reached human-like intelligence.
Could machines ever think as human beings do?
Most people say no.
You're not most people.
In the 2014 film The Imitation Game,
Alan Turing is played by Benedict Cumberbatch.
Of course, machines can't think as people do.
A machine is different from a person. Hence, they think differently.
The interesting question is, just because something thinks differently from you,
does that mean it's not thinking?
We allow for humans to have such divergences from one another.
You like strawberries, I hate ice skating,
you cry at sad films.
I am allergic to pollen.
What is the point of different tastes,
different preferences, if not to say,
that our brains work differently,
that we think differently?
And if we could say that about one another,
then why can't we say the same thing for brains
built of copper and wire, steel?
After he's arrested for gross indecency,
a police officer questions him
about the paper he published.
This big paper you wrote.
What's it called?
The imitation game.
Right, that's what it's about.
Would you like to play?
Play?
It's a game, a test of sorts.
For determining whether something is a machine or a human being.
How do I play?
I want to get kind of to the heart of it, and you mentioned Alan Turing.
He asked,
right at the beginning of this paper, he says, can machines think?
Absolutely.
I think I know your answer to this question, but I'm curious.
I want to ask you, can machines think?
No.
If you look at the AI industry, and by industry, I mean the kind of long history of computing
and the infrastructure required to run it.
But if you look at that long history, there's all sorts of kind of abuse and misuse of language, you know.
So if you look at Alan Turing and the paper starts out with can machines think and then he quickly jumps to, okay, that's too complicated.
Let's go to the, can a machine imitate a human?
Well, imitation is a very different thing.
There's no etymological link between thinking, the word thinking, and imitation.
So it's those kind of tricks.
I mean, you hear a lot from the AI industry about the, you know, these Gen AI being able to reason.
It doesn't reason. There's no reasoning going on. There's no understanding going on.
Criticism of Turing's ideas isn't new. In the 1950s, there were dueling BBC lectures
in which computer scientists and public intellectuals took Turing on, and then he defended his arguments.
Can Digital Computers Think? Alan M. Turing, 15th of May, 1951.
Digital computers have often been described as mechanical brains.
Most scientists probably regard this description as a mere newspaper stunt, but some do not.
One mathematician has expressed the opposite point of view to me rather forcefully in the words,
it is commonly said that these machines are not brains, but you and I know that they are.
The actual recordings of Turing's lecture no longer exist, so the BBC had an actor voice the text.
I shall give most attention to the view which I hold myself, that it is not altogether unreasonable to
describe digital computers as brains. I think it is probable, for instance, that at the end of
the century it will be possible to program a machine to answer questions in such a way that it
will be extremely difficult to guess whether the answers are being given by a man or by the
machine. Hello, I am AI Turing. It is my great pleasure to give this lecture on Can Digital
Computers Think? I gave this lecture on the 15th of May, 1951. This is a different Alan Turing
than the actor you just heard. Given Turing's belief in the potential of computers to imitate
humans and think like them, perhaps he wouldn't be surprised to find an AI-generated version
of himself reading parts of his seminal lecture on TikTok. Digital computers have often been
described as mechanical brains. Most scientists probably regard this description as a mere newspaper
stunt, but some do not. In order to arrange for our computer to imitate a given machine,
it is only necessary to program the computer to calculate what the machine in question would do
under given circumstances, and in particular what answers it would print out. The computer can
then be made to print out the same answers. Professor Teresa Heffernan sees a direct
line connecting Turing's belief and his literal reading of Samuel Butler's Victorian satire,
Erewhon, to present-day tech giants and how they literalize fiction.
And I'll tell you a little bit just about the purpose of SpaceX.
It's like, we want to make Star Trek real.
Okay?
We want to make Starfleet Academy real.
so that it's not always science fiction,
but one day the science fiction turns to science fact.
And then you can kind of follow that line of thinking through up to people like Jeff Bezos or Elon Musk, who, you know, watched Star Trek as kids, I guess, and think, like, I'm really frustrated with the state of intergalactic travel, you know.
But, you know, when you look at what Gene Roddenberry's idea was, his reference was Jonathan Swift's Gulliver's Travels, an 18th-century work.
And we have spaceships going through space, big spaceships,
with people going to other planets, going to the moon,
and ultimately going beyond our star system,
to other star systems,
where we maybe meet aliens,
or discover long-dead alien civilizations.
I don't know, but we want to go,
and we want to see what's happening.
and we want to have epic futuristic spaceships.
If you had to use an adjective to describe,
is it untethered from reality?
How would you describe this relationship
between AI and literature?
Yeah, as I say, it is completely untethered.
You don't want your science to be based on fiction.
That's the kind of basic point, you know?
Like, science works very differently than fiction,
and they both have their uses.
But I don't want my science to be based in fiction.
How would you characterize very broadly how fiction and AI have been in dialogue?
Yeah, I mean, if you take the Terminator for an example, right?
I mean, every newspaper for a while had the picture of the Terminator when it was talking about the AI industry.
The SkyNet funding bill is passed. The system goes online on August 4, 1997.
Human decisions are removed from strategic defense.
Skynet begins to learn at a geometric rate.
It becomes self-aware at 2.14 a.m. Eastern Time, August 29th.
In a panic, they tried to pull the plug.
Skynet fights back.
Yes. It launches its missiles against the targets in Russia.
If you look at James Cameron, who's the director of the Terminator,
who produced it in the 80s,
what he talks about the Terminator being about
was not like this actual future of warfare or of robots,
but about the military industrial complex.
And this kind of concern that, you know,
the military and academia and the tech industry
were far too embedded.
So you think it's a fundamental misunderstanding.
Absolutely.
You call that kind of explanation a dangerous myth
that's perpetuated by the AI industry.
What persuades you that it's a myth?
Why are you so certain?
I guess because I've, you know, as I say,
spent a long time. I mean, I've been working on this stuff for decades. So I spent a long time
looking at the history of the industry, reading a lot of the papers, talking to people,
reading the books by the people who are very invested, like a lot of the tech billionaires,
invested in these myths of transhumanism. And all I can say is that I keep seeing it coming
back to fiction. And as I say, fiction doesn't come true by definition. So tech leaders like
Elon Musk, Jeff Bezos, Sam Altman and others have been, as you say, inspired by these kinds of works and the world of science fiction in particular.
But you take issue with their interpretation.
And I'm just wondering, you described this interpretation as taking things very literally.
Why does that need to be interrogated?
I guess, you know, the way that fiction works is with metaphor and with allegory, with simile.
And so you can't just take it, it's not meant to be literal.
You know, in the Lion King, when you have Talking Lions,
it's not really about talking lions.
And so you need to kind of like unpack it.
And fiction kind of opens up to those debates
so that there's no absolute interpretation of a novel.
Those are always, you know, shifting
and people bring new material to the kind of.
of reading of literature. And so the literal, you know, if you just say, well, this equals this,
no, it doesn't in fiction. One of the people you interviewed, who I recently interviewed as well,
is Geoffrey Hinton. And he's described as one of the godfathers of AI. And we spoke about machines
and consciousness on a show that aired just not too long ago. He's concerned about things like
terrorists using AI to create viruses and the threat to democracy.
from deep fakes.
But here's what he said about his biggest concern,
and we'll play some tape.
But the thing that worries me most is still this long-term risk,
which seems to be fairly inevitable,
of AIs getting smarter than us,
and we don't know how we can then coexist with them.
We don't know whether they will actually take over from us.
So already we've seen AIs that want to survive.
They have the ability to create sub-goals,
and they're intelligent,
and they quickly realize they're not going to be able to achieve those goals if they don't continue to exist.
And then if you try and do things that will threaten their existence, they try and defeat those things.
So they make up plans to blackmail you, for example.
And we've seen that happening already.
They make up plans to blackmail you.
I mean, in your thinking, what are we to make of these kinds of warnings from Hinton and others?
Or are we just to ignore those kinds of warnings?
It's not that I don't think that a lot of the technology is dangerous,
between kind of the surveillance and the war tech and automating war.
It's basically, you know, if you go back to the origins of AI,
it's basically about automation, automating things.
But what I don't understand in Hinton is where he gets the idea of a they.
Where does the they come in?
You know, life, or beings, are
these very complicated beings that are in the world, in an environment;
we're dependent on all sorts of things.
Here he's kind of saying, look, look, I birthed this.
You know, and you can go back to Alan Turing, and he talks about his child machine, right?
There's all this kind of fantasy of male birthing going on.
And when he says, well, they have this agency, I still, you've got a bunch of numbers.
you've got massive mathematical computation going on.
Where is the agent?
There's no, I just don't see how it's there,
except if you have this kind of faith that somehow,
and a lot of these people do have this faith,
and it is a religious-like faith, in numbers, right?
And that somehow numbers are at the core of the universe.
So if you have that faith, but I would say it is a faith,
then maybe you think that, you know, by manipulating numbers, you're going to have some sort of superintelligence emerge.
So if I would ask you this, would you put this in the category of, like, a true belief in their own myths rather than kind of raising false flags?
Yeah.
Where would you?
It's really difficult to understand.
You know, when I interviewed Hinton and that would have been about 2016 maybe, but when I interviewed him, I asked him what's your version of intelligence.
How do you understand intelligence?
And he didn't answer that question.
What he did was say, as soon as I get enough of my neural nets connected,
my machine's going to spring into consciousness.
And I thought, okay, consciousness, we don't even know what that is.
And when you say that your computer works like a brain, well, we don't even know how brains work.
So you take this hugely complicated field, you know, biology, environment,
neurology, all these different fields, and you collapse them all and you say,
look, my neural nets can do this.
How did you simplify?
It's such a reduction, you know, of what we are as humans.
But I would say that, I mean, from Turing on, that idea that the AI
industry is somehow producing human-like machines has been there from the origins.
This is Ideas.
I'm Nahlah Ayed.
My guest is Professor Teresa Heffernan,
who delivered the 2026 Wiegand Memorial Foundation Lecture
at the University of Toronto.
It was called Literature versus the AI Industry:
Techno-Monarchists and the Drive to Reduce the World to Numbers.
Heffernan is Professor of English Language and Literature
at St. Mary's University in Halifax.
Teresa Heffernan spent the past couple of decades studying how the
artificial intelligence industry has borrowed concepts from fiction.
She argues that the industry is now using those ideas to sell both the promise and the peril of AI back to us.
The mission of Demis Hassabis, the CEO of Google DeepMind, is to, quote, solve intelligence and then use intelligence to solve everything else.
All right, everyone, welcome to DeepMind. We are embarking on what will turn out to be the greatest adventure in human scientific history.
When we started DeepMind, we looked at machine
learning and neuroscience, and there was a good chance that this was going to open up the possibility
for full artificial general intelligence. Our goal is to have an algorithm that can do everything
and learn by itself. He has recently admitted, however, that large language models cannot
achieve true scientific breakthroughs. So he is betting on, quote, a simulation engine that understands
how reality actually works. DeepMind was inspired by Deep Thought, the supercomputer in
Douglas Adams' Hitchhiker's Guide to the Galaxy that began as a 1978 radio comedy.
Well, you're really not going to like it.
Tell us.
The answer to the great question of life, the universe, and everything is...
Yes.
Deep thought, after seven and a half million years of calculations, spits out the number 42.
Forty-two?
Is that all you've got to show?
After theories started to pop up about the meaning of 42,
Adams had to intervene and point out that it was a joke.
To understand how we got here,
Teresa Heffernan goes back even further than Douglas Adams in the late 1970s,
much further.
Early on in the Republic, Socrates, one of Plato's dramatic characters,
interrogates the usefulness of fiction in education.
Worried about the feminizing effects of poetry,
he was concerned that Homer's depiction of the traumas of war
would encourage men to grieve, fear the battlefield,
and discourage them from taking up arms,
and that stories about sons overthrowing their fathers
would upset patriarchy and authoritarian rule.
Socrates first recommended censoring fiction in early life,
but in the end concluded it was best to ban the poets, with their God-sent madness, altogether from the Republic,
so the philosopher kings could rule unhindered. Plato's writing, however, is replete with poetic devices.
So why does he ban the poets? When, if you read Plato, he's using all sorts of dramatic monologue and allegory and myth.
So he uses all these kind of literary tropes, but he bans the poets. What is that about? That's about asserting
a sort of authority, where Plato's going to say, you know, he's still going to use the noble lie,
which is a myth, basically, in order to kind of maintain an elite. And you can see the same thing
going on, I think, today with a lot of the tech industry. You know, it's this group of billionaires
who say the most outrageous things that have no basis in science. And yet they get picked up,
you know, and I just want to keep pointing out, no, they're trying to sell a product.
It's like going to the oil industry or going to the tobacco industry and saying,
here, tell me how great your product is.
They get quoted in the media all the time without any qualification.
Rebranded as the digital elite, all big money and big egos,
headline-grabbing ideas about the future were disseminated in places like TED Talks, Wired Magazine, and The Edge.
Popular technoscience, bypassing peer review, has promoted the idea
that with innovative technology, we can engineer our way out of any problem, even death.
The latest from this clever crowd are the large language models, or LLMs, for short.
They are not about truth or accuracy or originality or expertise,
and there's no fixing what the industry calls hallucinations,
which are not hallucinations but the built-in error rate of a device that works something like a magic eight ball.
One study reports a 60% average error rate across models.
Retrieval-augmented generation and agentic coding may improve the reliability
and the usefulness of LLMs for individuals.
But chatbots can also be easily gamed or introduce new problems like slopware, basically negligent code.
Importantly, they also exacerbate the litany of problems of these models
for the collective. LLMs work by the automated uploading of the internet, the largest public
digital archive in the world, with the goal of privatizing and commercializing it. Models
disassemble text by translating words or images into numbers, mapping the statistical relationship
between words, and then reassembling them based on probability. A massive undertaking that requires
billions of mathematical calculations and incurs enormous computing costs.
You don't allow your students to use AI in the classroom. How do you manage that?
So I have for the longest time banned laptops and cell phones from my classes right from the very
beginning. And I don't do it because I'm like, oh, you know, this is a terrible technology.
No, I do it because all the research, independent research, shows that it really
impedes students' learning.
And the students have to bring, just as an example, they have to bring hard copies.
It's like we have to actually read physical books, again, because of the science.
What the science says, if you read a hard copy, your brain is much more active than it is
reading something digitally.
What we do is we spend a lot of time discussing and debating, and it has to be evidence-based,
you know, if they're going to make a claim about a novel and say, okay, show me where that is in
the novel.
Right.
So it is, yeah, it is very old
school.
Yeah.
Originally, I got some resistance to it, but, I mean, now the students are so happy to come
to a screen-free class.
So the same thing with AI.
I give them a sheet that explains how it works and we spend time talking about it.
I talk about all the kind of repercussions of it from environmental to labor to a deskilling
that's happening.
And I talk about, you know, this big industry behind it and who are trying to colonize the
future.
And, you know, it is your future and you need to make those choices.
If you kind of talk to students, not in a we'll-penalize-you-if-you-use-ChatGPT way,
but in a way that lets them have the freedom to learn.
And then I haven't had any problems with people using chatbots.
Is there a place in humanities, do you think, for AI?
I haven't seen it.
I don't know.
I mean, I'm always open, but from a labor point, an
environmental point, there is nothing, you know, you hear about ethical AI. There's nothing ethical
about it. And I think that that really needs to be, you know, taken into consideration also.
Because it is their future, right? What are they signing up for? And while the AI industry
markets itself as very new, it's not really. It's using the same corporate model as the
tobacco industry or the oil industry.
So money-burning language models, with no clear route to profitability, are built from the
non-consensual scraping of books, music, art, and posts, while an exploited global labor
force of micro-workers operates behind the machines, cleaning, tagging, annotating, and
classifying the raw data.
From rural South Asian women filtering violent pornographic material so it doesn't show up in chats,
to workers in Kenya or the Philippines pretending to be chatbots,
the AI industry is more about hiding and degrading labor than it is about replacing it.
In addition to race and gender biases,
study after study shows that chatbots cause cognitive decline,
deskilling, and psychological damage.
The technology and the infrastructure on which it relies have been
heavily funded with public tax dollars, but are owned by tax-averse big monopolies,
further facilitating the upward transfer of wealth and consolidating the power of billionaires.
And as the world grapples with climate disasters, loss of biodiversity, and global water shortages,
chatbots and image generators depend on an unsustainable, resource-intensive infrastructure,
from Starlink's short-lived satellites to children mining cobalt in the DRC
to undersea cables to e-waste and space junk.
Data centers alone drive up pollution and energy costs, keeping the oil industry alive
while guzzling millions of liters of water a day.
What do you see as the biggest threat from AI specifically to the humanities?
I think, you know, when you look at the humanities, what happens when the humanities get sucked up into this engine, turned into numbers, spit out as a kind of probability or statistics? It erases any of the historical or cultural context.
It erases any of the kind of specificity of how this emerged.
So when I teach fiction, for instance, I'm always teaching it in its cultural and historical context as
a kind of starting point. That gets lost if you just start generating, you know, fiction from
a machine or history from a machine. And I was listening just the other day I was listening to,
he's the CEO of, I think it's called Superhuman, and it also owns Grammarly.
What the CEO was saying is, you know, Grammarly, people tell me it's like having a grammar teacher
there writing with you. And he wants to do the same thing with other experts.
Now, first of all, I don't know any grammar teachers.
I know, you know, literature, linguistics, those kind of things, but I don't know any grammar teachers.
And if you think about it, grammar is like a set of rules.
So yes, you know, you can kind of automate these rules.
Of course, talk to any linguist, grammar has always been, you know, open and changing and debated, as has spelling.
Okay, so nevertheless, that's what he said.
What he wants to do is have a whole bunch of these experts. And one of them, he said, was like a history
expert there by your side while you're writing. How could that possibly work? History is, you know,
an event is something that gets interpreted and reinterpreted and debated and approached from different
perspectives. There is no set of rules. How do you automate that? But if you think you can
automate that, and you have this one version, this is the answer,
then what is that? That's an authoritarian model.
And once you start recognizing that these tech industries can easily manipulate answers that they're generating,
you know, this is getting to a very kind of authoritarian model.
A pushback would be, well, it's just another interpretation of history.
But you're saying it's being imposed.
It's being imposed, absolutely.
And it's like, you know, if you look at Grok, for instance, which is Elon Musk's
terrible chatbot.
What's being pushed there?
It's white supremacy, misogyny.
You know, and that, you know, he says,
oh, this is all about finding truth.
It's not anything about finding truth.
So we have to understand that, you know,
like the thing that keeps democracies alive
is an openness to debate,
an openness to be able to come at an event
from a different perspective.
I'm not saying that there aren't events that happened.
All I'm saying is that they're open to interpretation, and that's the basis of a kind of democratic society.
So when you look kind of way down the line in the future and you think about this threat to humanities, what do you imagine?
What could happen?
Like, what's the most dangerous moment in that future for the humanities?
In fact, we can go back to Plato and this kind of questioning of the humanities, you know, but there's always been this pushback, because authoritarians do want to kind of control the
narrative about the world. That kind of slide into fascism, the slide into an authoritarian
model is very, very disturbing. And people won't know. I've talked to computer science
students, or I've given lectures, and computer science students have come up
to me and said, I had no idea that the military was behind all the funding for the AI industry,
which is actually what Hinton said, that he came to Canada and kind of, you know, saved his soul
because he no longer had to take money from the military.
But the military is using all that surveillance technology, right?
You seem, tell me if I'm wrong about this,
but you seem a bit skeptical about, in particular,
Hinton's turnaround, from being a creator of AI to someone who's sounding the alarm.
Is there skepticism there?
You know, it's hard for me sometimes to kind of be able to distinguish,
like, okay, who are the true believers?
Because I think there are true believers.
And I ask people who have PhDs in CompSci
all the time: how many true believers are there?
You know, and it tends to be a fairly small minority.
The other thing that PhDs in CompSci will tell me on the side is things like, yeah, well,
we don't mind this because then people, you know, give us a lot of money.
You know, there's tons of money in this industry.
Because the prospect of duplicating human intelligence is an incredibly enticing idea.
Yeah, sure. Sure.
But, I mean, it's hard to duplicate something when you don't have a definition of it.
Does this skepticism extend to the fact that there's funding flowing into universities from big tech in the AI industry specifically?
Absolutely.
I mean, I was at a conference at Cambridge, in England; it was on religion and AI.
And I was quite shocked at the amount of money.
But then I could start seeing all this Google money and this Facebook money.
And if you go back to Nick Bostrom, who wrote a book called Superintelligence,
he was part of the Future of Humanity Institute at Oxford.
And it popped out of nowhere.
And I was like, how does that happen at Oxford that an institute just pops out of nowhere?
But when you look back to the funding, you can see it's all coming from big tech companies.
It's like MIT, same problem.
So there is, like, huge amounts of money coming in from industry.
Perhaps this drive to impose an AI-owned society is not that surprising,
given the corporate ties of the backers of these institutions.
Stephen Schwarzman, a Trump supporter and CEO of Blackstone,
one of the largest American private equity firms,
is also one of the largest investors in AI infrastructure.
Hinton sold his company to Google in 2013 for 44 million dollars and worked there
until he resigned in 2023, when he changed his mind about AI and wanted to talk about some of
its harms; he also expressed some regrets about his life's work. Nevertheless,
American big tech companies continue to prop up the Canadian industry. The recent announcement of a Google-sponsored chair in AI at U of T is only one small example.
The term AI itself is a bit of a marketing ploy that impedes effective regulation.
Many diverse technologies fall under this banner, encompassing anything from reinforcement algorithms
that operate elevators to large language models that generate text, like Gemini or ChatGPT.
I use the term AI industry to refer to the long history of computing, the infrastructure required to service it,
and the way it has from its origins used fiction to market machines as possessing a human-like intelligence.
In this talk, I want to consider how both this relatively new technology disrupts the humanities and is reshaping society,
and to suggest that using the humanities to disrupt the AI industry might in fact be a better plan.
The AI industry disrupts the humanities by reducing them to computation.
Stripping knowledge of its historical or cultural or personal situatedness,
the humanities get sucked up by the engine and transformed into a vast network of numbers,
subjected to a number of mathematical calculations,
and then remixed and regurgitated as a statistical output where any specificity is erased.
This erasure of history, an authoritarian strategy, creates an endless present,
like the party in George Orwell's 1984 that is always right.
If literature foregrounds the ambiguity of language, opening the world to interpretation,
translation, and evidence-based arguments, when it's
reduced to data and the stark determinism of code and calculations, authoritarian rule soon follows.
Sam Altman, CEO of OpenAI, who wants to cover the Earth in data centers, and maybe also space,
casually commented that Gen AI blurs the boundaries between the authentic and the artificial,
so that soon we won't be able to differentiate between the two. And here is the point:
art, history, science, research, and democratic societies cannot function without truth,
reality, expertise, and facts.
So given all these problems, other than a cynical play by a scaling-obsessed industry that
markets an AI-first world, what are the origins of this drive to convert words and images
into numbers?
Having reduced human-generated work to computable numbers, the industry has,
indeed, been marketing AI-generated products, buying up media and independent news, circulating
lies, forcing their way into education, with Thiel, Musk, OpenAI, Google, and Microsoft at the helm,
taking over government services, and putting politicians like Trump and Vance in power.
Peter Thiel, Marc Andreessen, and J.D. Vance, amongst others, want to return to the rule of kings
and a neo-monarchy, with fiefdoms run by tech corporations.
As the Harvard historian Jill Lepore asked, quote,
isn't replacing democratic elections with machines owned by corporations
that operate by rules over which people have no say,
isn't that, in fact, tyranny?
In 2010, Thiel was explicit about the plan.
The basic idea was that we could never win an election
based on certain things because we were in such a small minority.
But maybe you could actually unilaterally change the world
without having to constantly convince people and beg people and plead with people
who are never going to agree with you through technological means.
You wrote an open letter to Evan Solomon,
the Federal Minister for Artificial Intelligence and Digital Innovation.
And in it you suggest that the great promise of AI has not been borne out
and that he needs to, quote, cut through the current well-funded AI hype cycle.
Dear Minister Solomon, congratulations on your new post as Minister of Artificial Intelligence and Digital Innovation.
Above all, I hope you will fight for a democratic and sustainable future,
now under threat by the authoritarian technocrats that own the AI infrastructure.
Gen AI technology is being aggressively marketed by
companies like Google and OpenAI as saving time and money.
But studies indicate that companies that have adopted it are finding little benefit.
The most glaring example of...
Just for the record, Professor Heffernan did not receive a response to the letter.
She wrote to Evan Solomon, the Federal Minister for Artificial Intelligence and Digital Innovation.
As a professor of English literature, do you have any suggested
readings for him that might help him on that path?
I have quite a few.
So first of all, I would say, you know, read Emily Bender and Alex Hanna; they have a book
out called The AI Con.
Emily Bender is a computational linguist.
Alex has worked in the tech industry and also as a sociologist.
They explain how the technology works.
So I think that that's the first thing.
Before you start being a cheerleader for this, you actually have to understand
how the technology works, and they do a great job. And then I would also recommend Karen Hao's
Empire of AI. It will really give you a sense of all the kind of environmental and also labor
exploitation that's going on in the industry. And then there's another one, Adam Becker's
More Everything Forever. He's a physicist and he's also a journalist. And what he does
is he just kind of completely takes apart the science behind this.
You know, he exposes it as the mythology that it is,
and a lot of it, as I say, gets into transhumanism and all sorts of things.
But it's a great read.
And then on the fiction side, Will Eaves's Murmur is a fantastic novel about Turing,
Alan Turing. And I love Sydney Padua's
The Thrilling Adventures of Lovelace and Babbage:
The (Mostly) True Story of the First Computer;
she has done just a phenomenal amount of research.
You describe the term AI itself as a bit of a, quote, marketing ploy that impedes effective
regulation. How does it do that? You know, if you ask people what is AI, they don't know.
If you ask people how a computer works, a lot of people don't know. And that includes people who have
PhDs in computer science. And there's this sort of encouragement of this mystification of this
technology. And I think that that mystification then leads to this sense like, oh, it's some
kind of magic, because what it does is conflate all these different technologies. So, you know,
you have computer vision, for instance, or you have pattern recognition, you have optimization,
you have all these different technologies. And we can't conflate them all together. And we also have to say,
how does that technology work and what does it do?
Why is how we talk about and define AI so important?
You know, if you say, well, it's just these, you know, machines that are evolving.
There's no evidence of that.
It's the exact same technology.
You can go back to Turing.
It's the exact same technology.
What's increased is the amount of data we have, the storage, and then most of all, computing power.
So it's able to compute at a much faster speed.
But it has nothing to do with an evolving technology.
And so when we use that term, artificial intelligence, what is that?
What do we mean by it?
What do you mean by intelligence?
What do you mean by artificial?
I mean, it does take massive amounts of resources, you know, material resources.
You're saying it's a misleading term.
Absolutely.
Yeah, yeah.
So what's a better way?
Maybe you don't know the exact term, but how else could we be describing this technology
so we understand it better?
You know, it's a complicated field, but we really need to kind of break it down and understand,
you know, like with surveillance cameras, how is that working, how cookies work or how, you know,
these tracking devices that are tracking you and then creating profiles of you. How is that working?
In your lecture, you suggested at the very start of your talk, that, quote,
using the humanities to disrupt the AI industry might in fact be a better plan than the other way around.
And I'm just wondering if you have any idea what that could look like.
It's using things like, you know, the history of the AI industry,
the way that it's co-opted fiction, and trying to reclaim fiction from tech propaganda,
returning it to its role as a sort of comment on the industry.
There are ways that the humanities will kind of open up this technology
that is just being promoted, like, don't worry about the past.
You know, we don't have to know anything about the past.
here is the future.
Wow, how do a bunch of billionaires get to define what our future is?
It's a very good question.
I'm so grateful you've come in.
Thank you so much for taking my questions.
Thank you very much for having me.
You just heard my conversation with Professor Teresa Heffernan
of Saint Mary's University in Halifax
and her 2026 Wiegand Memorial Foundation Lecture.
Her address was called Literature v. the AI Industry:
Techno-Monarchists and the Drive to Reduce the World to Numbers.
It was presented by the Jackman Humanities Institute at the University of Toronto.
Thank you to Adam Bell and his team at the Campbell Conference facility for assistance with recording.
This episode was produced by Donna Dingwall.
Lisa Ayuso is the web producer for Ideas.
Technical production: Sam McNulty, Johnny Casamatta, and Emily Kier.
Nicola Luksic is the senior producer. The executive producer of Ideas is Greg Kelly,
and I'm Nahlah Ayed.
For more CBC podcasts, go to cbc.ca slash podcasts.
