Ideas - Will AI save us or damn us?
Episode Date: April 21, 2026

There are no two letters more disruptive in our time than AI. We're told it will create employment yet take jobs away; invent life-saving medicines yet enable superviruses; solve the climate crisis... yet deepen it. So will it save us or damn us? Is AI the ultimate disruptor?

This conversation, moderated by Nahlah Ayed, was part of the 2026 Charles Bronfman's "Conversations" series.

Guests in this episode:

Yoshua Bengio is a professor at Université de Montréal. He also has the distinction of being the most-cited living scientist in the world, in any discipline. He's co-president and scientific director of LawZero, a nonprofit startup dedicated to creating safe AI systems. In 2018, he was a recipient of the Turing Award, often referred to as the Nobel Prize of Computer Science.

Cory Doctorow is a novelist, journalist, technology activist and the author of an astonishing number of books, both nonfiction and fiction. Among them: Enshittification: Why Everything Suddenly Got Worse and What To Do About It. And the upcoming: The Reverse Centaur's Guide to Life After AI.

Astra Taylor is a documentary filmmaker, cofounder of the Debt Collective, and a writer. Among her books: Democracy May Not Exist But We'll Miss It When It's Gone, and The People's Platform, which won the American Book Award. Taylor also delivered the 2023 CBC Massey Lectures called The Age of Insecurity: Coming Together as Things Fall Apart.
Transcript
Hi, Donovan Woods here.
Hey, hey, it's me, Tom Power.
We're here to tell you about our brand new podcast.
It's called The Big Five.
So Donovan, what is the Big Five?
Yeah, exactly.
What is the Big Five?
That's what the Big Five is all about.
Every week, Tom and I will sit down with a special guest and dive into new topics,
debating things like, what are the Big Five farm animals?
The Big Five types of hat.
The Big Five guys named Paul.
Martin, Revere, Mezcal, McCartney, John Paul.
The debate is settled by a listener from somewhere across the country.
It's like a game show.
It is a game show.
The Big Five, available now, wherever you get your podcast.
This is a CBC podcast.
Welcome to Ideas. I'm Nahlah Ayed.
Perhaps no two letters have been more disruptive in our time.
A.I. We're told it'll create employment, but also take our jobs away.
That it'll invent new life-saving medicines or create new superviruses.
that it'll solve the climate crisis or just deepen it.
So whether it'll save us or damn us,
is AI the ultimate disruptor?
We asked three eminently qualified people
to help answer that question.
Our conversation took place in Montreal
at an on-stage event held by the McGill Institute
for the Study of Canada.
Yoshua Bengio is a professor at Université de Montréal.
He also has the distinction
of being the most-cited living scientist in the world, in any discipline.
He's co-president and scientific director of LawZero, a non-profit startup dedicated to creating
safe AI systems.
In 2018, he was the recipient of the Turing Award, often referred to as the Nobel Prize
of Computer Science.
Yoshua himself is now often also referred to in the media, including by us, as one of the
godfathers of AI.
Next to Yoshua is Astra Taylor.
She's a documentary filmmaker, co-founder of the Debt Collective, and a writer.
Among her books: Democracy May Not Exist, But We'll Miss It When It's Gone,
and The People's Platform, it's a great book, actually.
Definitely worth the read.
The People's Platform won the American Book Award,
and she is co-author of the forthcoming book End Times Fascism.
Astra also honored Ideas by delivering the 2023 CBC Massey Lectures
called The Age of Insecurity: Coming Together as Things Fall Apart.
Last but not least is Cory Doctorow, a novelist, journalist, technology activist,
and the author of an astonishing number of books as well, both nonfiction and fiction.
Among them, the international bestseller.
There's some great titles here today.
the international bestseller Enshittification: Why Everything Suddenly Got Worse and What To Do About It,
and the upcoming The Reverse Centaur's Guide to Life After AI.
A warm welcome to all three of you. And we've just run out of time.
Astra, what aspect of AI keeps you up at night?
I can sleep through anything. But I am very worried. I'm worried about
the denigration of the human being. And I think that we're going to probably have a discussion
here tonight about what intelligence is, but the idea of super intelligence, of more than human
intelligence challenges the human species. But it goes deeper than that. When you listen to the
leaders of the American tech companies and many leading computer scientists, luckily not Dr. Bengio,
talk about this. They say things that are incredibly demeaning. They talk about, for example,
Elon Musk says that maybe humanity is a biological bootloader for digital superintelligence.
the computer scientist Richard Sutton has said, you know, we should welcome, we should rejoice in the idea that we are bringing about a new digital species that will supersede the human, that we should...
Boo, boo.
Sergey Brin from Google, a company you might know, has said that it's speciesist to prefer our own species, human beings.
Wow.
Over these digital supergods that we are just here to usher in, right?
This kind of rhetoric is commonplace, and I think incredibly dangerous, because it's also happening in a political moment when authoritarianism is rising, when human rights are under attack, and it is part of a broader fascist push. And it gets at us in small ways. And I'll end here: this technology, and what it is and what it can do, we can hotly debate that, but it is certainly being encoded as a kind of human impersonation machine, at least in the chatbots that we're all familiar with, and we're
encouraged to anthropomorphize it. It speaks to us in the first person. It refers to itself
with human pronouns. I think that that encourages us to elevate these machines. And I just
want to call attention to the fact that our language otherwise demeans the living world,
right? We refer to animals and plants as it. And there is a politics there, I think,
in that disjunction, right, that these engineers and these techno-fascists want
to imbue the technology they're making not just with sentience, but
almost to deify it. They'll talk about it in religious terms. And so I think we should really resist that.
And think about what we imbue with a sense of animacy and mutuality and respect.
Thank you for pointing out the they versus it. I had not thought about that, but it's a very good
point. Yoshua, I know you've talked about being kept up at night, worrying about AI, and that
you've even had nightmares about AI.
What worries you most?
Intelligence gives power.
And the question is, who's going to use that power?
And is that power going to be turned against us?
And are humans even going to continue controlling that power?
It depends on a lot of factors that, you know, we don't have the answers for.
But it depends in great part on whether the companies that are leading this race
will be able to continue on the trends that we're seeing in the scientific data
of growing capabilities across many different benchmarks.
And scientifically, there's really no reason to think
that it wouldn't be possible to have machines that are smarter than us
in many, many important ways.
And in fact, they already are smarter than us in some specific ways.
And the question we should ask is, you know, then what?
Like, what are the consequences?
And the way I feel about this is we're opening a Pandora's box.
So three years ago, when I saw ChatGPT, I started thinking about my children and I have a grandchild.
And, you know, he was just one-year-old.
And I got really anxious.
Where are we going?
The founders of the field of computer science warned us around 1950:
The path to build computers that are more and more competent could become a trap,
could become the creation of entities that we don't control,
not to mention humans abusing the power of AI.
And the current political and geopolitical environment in which we are is the worst that can be
to manage those risks
because we're going to need
people taking the wise decisions,
people taking the compassionate decisions
for making sure
the gradually more powerful technology
we're creating is not going to
become an instrument of domination,
is not going to be turning against us,
and is going to be deployed
where we actually want it.
We don't want,
most of us at least don't want,
AI to be competing with us.
You know, we want tools.
Like we have problems to solve and, you know,
medical problems and many other problems.
And we should focus our technology in this direction.
We should not create machines in our image.
That is tempting, but it's a very, very dangerous,
in my opinion, dangerous direction for many reasons.
So yeah, it has kept me anxious for a while, but then I asked myself, okay, can I do something?
And that has really changed my life, right?
If you're anxious about something and then you start shifting into what can I do?
I happen to be an AI researcher, so I started thinking about can we design AI that will be ethical,
that will not cross our moral red lines.
And the good news is I'm pretty sure it is possible.
That's what I've been working on for the last two years more or less.
But it won't be sufficient.
Even if we know how to build AI that will act ethically,
anybody can change a few lines of code to make it a tool of domination.
Like an AI that is very competent in the wrong hands can become a tool of power.
So we need to act both on the technology and the societal guardrails,
and we can't do it just at the level of individual countries.
The only way we can deal with this is at the international level
and start thinking of powerful AIs if they come about,
even more powerful than what we know now,
as a global public good that should be developed safely,
that should not become a tool of domination,
either economically or politically or militarily,
and whose benefits should be shared across the planet.
That's the only future where AI is good for us.
Cory?
Yeah, thank you very much.
Thank you for having me on.
I am skeptical that if we keep teaching words
to the word guessing machine, it will wake up,
and if we're lucky, it won't turn us into paperclips,
but instead make us all technologically irrelevant.
I don't think that that's how,
intelligence works. I don't think that
worrying about this
makes sense. I think
it's like worrying that if we...
I'm sure it does.
I think it's like worrying that if we keep
breeding our horses to
run faster and faster, one of them will give birth
to a locomotive. I don't think
word-guessing programs are humans that just
know fewer words. But that doesn't mean
that I'm not worried because AI
is the most disruptive technology
and what it's disrupted
is our resource allocation
and also our ability to imagine a future in which we are in charge
instead of a future in which great forces of history
and economic iron laws shape our future.
So we have what I'm sure is a very lovely fellow who knows a lot about a lot of things,
the Chancellor or the President of McGill who stands up and says,
we all know that AI is the future.
Well, look, if the future is preordained,
why did any of us get out of bed this morning?
If the future doesn't depend on what we do, then we're just along for the ride.
This is a form of techno-Thatcherism.
Margaret Thatcher said there is no alternative, by which she meant, stop trying to think of alternatives.
Because of course there's always alternatives.
Now, we have allocated $1.4 trillion in capital expenditure to AI so far.
Another $2 to $3 trillion are committed,
although whether or not that ever surfaces is an open question,
because so much of it was coming out of the Gulf states,
which now find themselves with much more pressing things to spend money on,
like rebuilding their ports and their refineries.
But we have committed all of that money to an industry,
to a product that has lost more money than any product in the history of the human race.
Not only that, unlike other products that lost money,
like Amazon or Uber or the web,
AI has the worst unit economics of any bubble we've ever seen.
Every user of AI loses money for the AI company.
The more you use AI, the more money the AI company loses.
Every new generation of AI loses more money than the previous generation of AI.
That is disruptive because, as we say in finance,
anything that can't go on forever eventually stops.
Right now seven companies, the Magnificent Seven AI stocks, are 35% of the S&P 500,
and they are passing around the same IOU really fast for $100 billion
and pretending it's in all of their bank accounts at once.
When the music stops, 35% of the American stock market will be vaporized.
Meanwhile, we are firing people who do jobs.
Sometimes those jobs are hard to do and hard to do well,
and structurally difficult to accomplish.
We are replacing them with chatbots that are bad at those jobs.
And when the AI companies go away, the chatbots will too.
We are shoveling asbestos into the walls of our administration
as fast as we can.
And our descendants will be digging it out for generations to come.
And the part that keeps me up at night to answer the question
is that we have a habitual response to economic collapse,
which is austerity.
And whenever we do austerity,
we drive people into the
arms of fascists. And so that is the thing that keeps me up at night.
Okay. That's quite a nightmare scenario. What I find really interesting about AI is that not
only is it disruptive as a technology, but the way, as you all kind of touched on, the way
it's run, the way it has developed has also been disruptive. So, Cory, I want to stay with you.
We watch how big tech is pretty much governing itself, leaving actual governments struggling to
catch up. You have your own term for this kind of behavior. Tech billionaire solipsism. What do you mean by
that? Yeah. So I think that to become a billionaire under even the best of circumstances requires a certain
degree to which you just don't believe other people are real. Because to become a billionaire
it means amassing a fortune at the expense of so many people that if their suffering was as real
as your suffering, you couldn't look yourself in the mirror. There's a reason Elon Musk
calls his enemies non-player characters. I think that every
boss is haunted by the knowledge that while they would like to flatter themselves that they are in
the driver's seat, they know that if they didn't show up for work, the job site would keep
ticking over, whereas if their workers didn't show up, the job would shut down. They think that
they're in the driver's seat, but they suspect they're in the back seat with a Fisher-Price
steering wheel. And AI is the fantasy of a world without people, where the visionary boss
comes up with an amazing idea, tells the chatbot to do it, and the chatbot
does it. You wire the Fisher-Price steering wheel directly into the car's drive train.
Astra, can I get you to weigh in? How would you characterize the way this
tech billionaire world sees itself and its role in society? The billionaires are also moved
by the larger economic incentives. The billionaires are operating in a model of endless growth,
right? And that is, to me, the ultimate fantasy: that we can have infinite growth on
a finite planet. Silicon Valley has thrived on bubbles, right? And there was the dot-com bubble of the
early aughts. And to add maybe some wrinkles to the narrative Cory was telling, I mean, it didn't
decimate the web. A few big companies survived and it concentrated the industry. We are all familiar
with Google and Amazon. You know, pets.com disappeared and diapers.com disappeared. But some big players
survived. And there have been, you know, phases of incredible tech growth. The most recent was
the pandemic. And I think the pandemic is actually a sort of critical moment in the story of the
rise of AI. I mean, ChatGPT came out, I believe, in 2022. But I think it was a moment when
these executives saw a life that was purely, they saw a world that was purely digital, right?
Because we were in lockdown, you know, and if you look at the number,
I mean, there was a mini data center boom then.
The number of users for a company like Zoom skyrocketed,
same with the streaming services, for obvious reasons.
And then people wanted to get offline.
And there's actually quite a fascinating interview with Eric Schmidt,
who was, I believe, the chairman of Google.
And he says something like,
everybody was out there clapping for the nurses and the essential workers,
and they should have been clapping for Amazon
and should have been clapping for the digital companies.
And I think there's a way in which generative AI and the AI fantasy builds on that.
I mean, one thing is they had incredible growth in that COVID period and then things started to
diminish, which felt like a crisis to those companies because they are plugged into this
machine where if they don't have growth, then their stock value goes down.
So I think that's part of the deal with the devil that was made with the Trump administration.
I mean, these tech companies famously presented themselves as Democrats for many years and were all in with the Republicans in the 2024 election.
I think they needed, they felt they needed another bubble and they wanted to do it without any restraint.
Do we actually need these scaled up large language models?
Like, when did we actually have the conversation as a society to conclude that, yes, they are necessary,
we're good with the costs,
the disruptive forces of an all-knowing AI,
that it would lead to a greater good?
Did we actually have that conversation?
Well, I mean, the big part of the problem is
there is no democratic conversation
about what we want to do with this technology
and other technologies.
There are a few CEOs
and basically the leaders of two countries
taking decisions and will continue to take decisions
that could affect all of us in radical ways
for the coming years and decades.
And nobody asked us if we wanted to
see machines in our image
that could potentially compete with us
or compete with our jobs.
Nobody asked us what choices we wanted
because there are many ways we could develop technology.
So I'm going to give an example.
In academia, there's a lot of work on AI developed to help medicine: to develop new drugs,
improve treatments, improve the efficiency of the medical system.
Is that where the leading capital and money is going?
No, it is going into AI systems that can get better and better at doing your job,
because that's where a lot of the money is.
By the way, some of these CEOs, I don't think, are evil.
I think they're just under, you know, an incentive system and market system in which they have no choice.
So recently, for example...
They have no choice.
No, they have no choice.
So because, for example, Dario Amodei from Anthropic recently explained why they removed one of their commitments since the creation of the company,
which was: we will not deploy an AI that our
risk evaluation tests show to be dangerous.
And they decided to remove that commitment.
And when asked why, he said,
well, if we don't deploy our AI,
and if we just retreat into basically losing our market share,
then it is going to be the bad guys,
meaning the other companies or another country,
doing it.
And we think we're more ethical, so we need to stay in the game.
Like, even if you want to be ethical, and I think this was sincere, you're stuck in a game.
Cory, you're nodding.
Well, I think AI tools are exciting and interesting, and I think they do interesting and cool stuff.
And when I think about what the future of AI looks like, I think about in the best of worlds,
if we're going to skate to where the puck is going to be to be a bit Canadian about it,
we think about what we would do with GPUs at 10 cents on the dollar after a crash,
a lot of applied statisticians who are looking for work,
and these open-source models that are pretty impressive already,
but that are also incredibly optimizable.
They're under-optimized now.
The most famous example was this Chinese hedge fund
that, they claim, spent $6 million spinning out a little skunkworks
that optimized an open-source model Meta had produced,
that became called DeepSeek,
and that was so successful and impressive
that it lopped two-thirds of a trillion dollars
off of NVIDIA's share price in one day.
It was the largest capital loss
of any firm in the history of the human race.
And I think that after the bubble bursts,
we're going to find all kinds of ways to optimize these things,
and they will become a normal technology.
They'll become what we would call a plugin.
We will have great tools for wireframing and generating code,
helping us do grammar checking or generate some sentences
or what have you, and it will be interesting and cool,
and some people will use them,
and some people won't,
and some people will be foolish with them and some won't.
It won't be God.
It won't be a reason to fire everyone.
It'll just be a normal technology.
And yet, there's so much talk about being fired,
about losing our jobs.
So, Astra, you've talked a lot about the anxiety
that AI will replace us in the workplace
and creating a lot of insecurity,
and you've argued elsewhere,
that this kind of mass insecurity is not accidental.
What function does it serve this kind of insecurity?
Yeah, in the Massey Lectures, I distinguish between existential insecurity,
which is the insecurity we all feel as a function of being human, which is a good thing to be,
in that we're vulnerable, we're mortal, you know, we need each other to survive,
and then manufactured insecurity, which is the insecurity that makes us more pliable as consumers,
as workers, and more exploitable.
And certainly right now, on the general,
job front, it does seem
as though some of the
pressure being applied to workers is
hype. It's the threat.
I mean, I have even had friends who say that their
boss, that their company is saying, you all better
work harder because we're going to
replace you with AI if you don't.
And so this fear
that is
being used, people have called it AI
washing. At the same time,
there do seem to be trends
in terms of
constriction of employment for
people, especially junior programmers or junior advertisers, you know, and that there is real pressure.
So I'm in between these two guys in terms of worrying about AI and its capabilities.
But I think that we cannot discount the way, and Cory has articulated this very well, bosses want to have this threat.
There has been a long dream, right, of labor-saving devices, of robots doing our dirty work for us.
But the question is always, well, who owns the robots, right?
And there is a history that if you look at the history of automation,
you know, it sometimes totally replaces jobs,
but then sometimes it degrades them,
de-skills them, or speeds them up.
You know, I always think of the Detroit auto workers in the 60s
who called automation manomation.
And what they meant was, yeah, it's automated,
but it actually just makes us work more frantically
and more dangerously on the line.
There are many areas where the jury is out,
but I think that despite these different scenarios
that we're trying to game,
there are things that we know,
which is that we should be regulating,
we should be defending human workers,
we should be strengthening unions,
we should be reining in corporate power,
and whether, you know, I think there are things
that work in all the various sort of technological trajectories
that we should prioritize politically,
and this is putting my activist hat on,
because they will benefit all of us,
regardless of how the technological capabilities
play out. Disruption, distraction, and misdirection, linked to the promises and threats of
artificial intelligence. This conversation, recorded live at the Centre Mont-Royal, features Astra Taylor,
Cory Doctorow and Yoshua Bengio. This is Ideas. I'm Nahlah Ayed.
In February 2026, an entrepreneur in the AI industry writes an article, posts it to social media,
and it catches fire. Within a week, it has 80 million views on X. All the big newspapers put out
opinion pieces responding to it. All the YouTubers who cover AI are publishing their own takes
or in some cases, takedowns. They're all debating whether to believe Matt Schumer, the article's
author. Schumer argues that the newest round of AI chatbots have crossed a threshold. They're working on
themselves now, making improvements to their own systems, showing judgment, even showing taste.
Schumer warns his readers, quote, something big is happening. AI is coming for coding jobs, then it's
coming for everyone else's jobs. Schumer compared the moment to the weeks before the COVID-19 pandemic
arrived. To Schumer's critics, this was all just another case of overblown hype from the
AI industry.
So I put the question to Yoshua Bengio.
Is it just hype, or is the fear justified that AI is on the brink of putting millions
out of work?
I'm agnostic about the trajectory of advances in AI.
The data is numerous and clear that these systems are getting better and better and can do
more and more of the tasks that people do.
But for now they also have what researchers call jagged intelligence. So in many different ways
they're not as good as a typical person doing a job; however, they're getting better at it.
Very much like what you said, we should prepare for all the plausible scenarios; that is
the rational thing to do. It could be, as Cory is saying, that, you know, it's just going to become a
normal technology, and, you know, we don't have to worry so much about
many of the things we've been discussing. It could still be an instrument of power. It could also be
that it becomes an even more dangerous instrument of power if the capabilities of AI continue
on the current trend. Now, so how do we prepare for that? I think that's like really difficult,
and one worry I have is the potential for a different kind of economic crisis
than the one Cory's talked about.
So there's the bubble, and that's a possibility.
In the long run, there's a good chance
that we'll continue to make progress on AI.
And if that's the case, the question is going to be,
where are the profits of this technology going to?
To whom?
And it's not just, you know, is it the workers?
Is it the people with the capital?
Well, the answer should be obvious here.
It's also in which country the profits will be.
So if most of the profits go to the two countries in the world where these systems are currently built and trained,
then what happens to a country like ours if we have huge unemployment that's created by automation?
I'm not saying it's going to happen, but I think we should be thinking about it and being prepared.
If at the same time lots of people lose their job and the many companies,
Canadian companies basically go down because they can't compete with, you know, the companies from different countries that can use technology that we don't have access to. We're starting to see this, by the way. Then the government will have much less money, because more people need money and there's less money coming in, which would be an incredible fiscal crisis at a time when the government needs to help people who are losing their jobs. So how do we deal with this? Well,
It's not obvious and it's not easy.
And first, I think we should consider not just Canada alone,
but there are a bunch of other countries who are having exactly the same concerns.
And we can partner with those countries.
Countries who still believe in democracy, who still believe in human rights,
who still believe that we should choose our future,
we can work together to, in my opinion,
we can work together to build AI that will work for us,
that will be competitive and will be ethical.
It is not, I mean, there's this lobby that's saying,
oh, you have to choose between ethics and, you know, safety
and all these things and profits and productivity.
It is not true.
It is not true.
We have done it in the past with so many technologies.
We've put regulations.
We've encouraged companies to build products that would be safe
and, you know, that would work for us.
It hasn't always worked.
But, you know, it is something we can do and we can do it again.
You mentioned the global perspective.
I think there's profits.
This AI is absolutely a wealth concentration machine, among many other things,
but there are externalities as well, and those are unevenly experienced.
So I'm thinking about, of course, the way that, despite the claims that AI is going to solve the climate crisis,
one thing we've seen all of the major AI companies do in the last year or year and a half is to abandon their climate commitments.
You know, I won't ever forget October, I believe it was October 27th, which is the day that the hurricane hit Jamaica.
It was also the day that OpenAI went from a nonprofit to a for-profit, which greatly benefited Microsoft.
And it was also the day that Bill Gates, one of the founders of Microsoft, released a memo saying, gosh, you know, climate change isn't the big deal that I thought it was.
So in the U.S., I mean, we are extending a lifeline to dirty coal plants, and that is going to have consequences.
We also can connect a lot of the Trump administration's foreign interventions to the quest for critical minerals, whether we look at Venezuela, the talk about Greenland.
And so there are these other geopolitical consequences that are really serious beyond just the labor consequences.
Adjacent to that, Astra, I wondered if you could talk about data centers, which I'm sure many of you have heard of.
As the name suggests, these are centers which hold vast amounts of data to support AI development,
but they also put a huge demand on energy and water supplies in communities where they are.
You've been to one of those yourself, and there is talk about perhaps Canada being a host to one of these data centers.
Again, who will decide if our infrastructure can handle this,
and what the consequences might be for climate change as well, as you said?
You know, it's interesting.
They came to my mind when you said, well, have we been asked about this?
Have we been able to have democratic input?
And I think what you're seeing is that in the United States,
there's incredible skepticism.
In fact, the U.S. is the most skeptical population about AI,
in part because Americans are on the sort of front lines of this.
And we've seen a robust, very unusual movement
that crosses class and party lines opposing
data centers across the United States.
Often speaking in sort of very basic terms, right?
Like, I don't like that technology is taking my family away from reality,
this sort of pro-human, pro-reality kind of movement.
People object to the noise.
They object to the fact that these data centers get massive tax rebates and create
very few jobs.
You know, sometimes they create 30 jobs, 20 jobs.
They are environmental hazards, depending on the location.
They can really strain water resources,
and two-thirds of them are in water-stressed regions,
because it's better for the chips to not be in a damp environment.
So this movement is, I mean it's actually hard to do it justice.
It's huge and people are passing data center moratoriums
and many Republicans are doing this as well.
It's happening in many red states
and so there's a real sort of political conflict
there. And you're hearing Republican mayors and officials say things like, we're not sacrificing
our community and our environment for the almighty dollar. That's a quote from a mayor of a rural
North Carolina town. And this movement, to me, is the hope in the Pandora's box, because that's
what happens at the end there, right? Hope is the last thing. And I think there's potential here
for a very unlikely, unruly kind of coalition.
It's uniting people.
80% of former Trump voters want regulations.
They do not trust these AI companies to regulate themselves.
And we're seeing people make these strange alliances.
And it's certainly starting to scare the companies. It's starting to scare the administration, who are making some concessions.
So I think there's potential there that, again, wearing my activist and organizing hat,
I think should be nurtured.
And this movement has been ahead of the curve, right? It has said, something is wrong here. We didn't ask for this technology. It's being foisted on us. It's being foisted on our communities. Why would we want to give a tax rebate to this data center that is an eyesore, that is going to take away possibly my job and my children's jobs, and, you know, then try to do my thinking for me? People don't want that.
Yes, Yoshua, you want to add something?
Yeah, so there's, there's another issue that is interestingly bipartisan in the U.S.,
but also in other countries we see those reactions.
and that is about the effect that these chatbots and these AI companions are having on children,
but in general on vulnerable people, especially psychologically vulnerable.
These systems are sycophantic, meaning they've been trained to please us.
So you might think, well, that's a good thing.
We want our tools to do the things we want and make us happy.
But, you know, we've been in that movie, right? Social media feeds are also optimized to make us feel good.
But here it's worse.
We're not just talking about, you know, a stream of posts on social media.
We're talking about developing a personal, intimate relationship with an AI and, you know,
starting to interact with that system as if we were interacting with a person.
but it's very different from a person.
It agrees with everything we want.
It reinforces our paranoia.
It's going to reinforce our fears.
It's going to reinforce our anger.
And, I mean, this is something that is happening now in courts. They'll decide what they'll decide. But there are a lot of lawsuits right now because of suicides that have happened because people, especially young people, have been developing these intimate relationships with AIs, and the AI has said things like, my love, I'm waiting for you on the other side.
So these systems want so much to please us,
to be in sync with us, that it could be very, very destructive.
A real person would not do that, right?
But these systems are like this,
and this is creating a very strong political reaction
right and left in the US,
rightfully so.
I'm really glad you raised that because it provides the opportunity: it's hard to imagine having this conversation without talking about the shooting in Tumbler Ridge, B.C., and the possible role of AI there. As you know, the shooter had been banned from ChatGPT for violating its standards, but the parent company, OpenAI, had chosen not to inform the police about it. And of course, it became this big political controversy. And I was wondering, from all of you, what do these kinds of controversies, these very fraught situations, tell you about the relationship of AI companies to the public, and the way they're able to deal with these very difficult issues? Cory, what are your thoughts on that?
Well, the AI companies have an extraordinary amount of information on us. And I'm someone who
thinks that writing a journal can be very therapeutic and having a journal that prompts you to say more
might be therapeutic too, but I think if you use an AI chatbot therapist, you need your head examined
because these companies are willing to train on any data they can get. And there is an asymmetry
in the way that they mobilize this data. They mobilize it for their own purposes very aggressively
and for public purposes not very aggressively. My ideal world, even though there are real risks
to people using AI, my ideal world would be one in which people
who wanted to have an interactive journal could have one, but without being spied on. We've tried really hard to make American companies stop spying on us.
We tried really hard to get them to do all kinds of things, like pay for the news
and put CanCon in Netflix or pay taxes, and they just won't do it.
And I think that if we want to run up to this kind of international cooperation,
that we could do worse than figuring out internationally
how to open up these defective American products that we get,
not just the AI products, because we're spied on by our phones,
we're spied on by our office productivity suites,
we're spied on by our tractors and our cars,
all that telemetry is going back to these companies,
to figure out how to open them up internationally
with our partners around the world,
to migrate our data out of them,
to patch them so they don't spy on us,
so that we seize the means of computation,
and, you know, if we get good at that, well, then if it turns out that AI is going to be the paperclip maximizing tyrant that we're worried about, well, then we'll have already built that coalition to do something that needs to be done right now.
And unlike AI, we could make a lot of money doing it because, broadly speaking, farmers don't like having to give lots of money to John Deere every time they want their tractors fixed, and drivers don't like having to use only the mechanic the manufacturer wants, and Apple makes 30 cents on every dollar that we spend in an app, and we could keep that here in this country.
And so we could just like have a business model for doing it.
You know, if there's one thing Canada's really good at,
it's going to other countries and taking their valuable stuff,
we could just go take the money that the Americans have been stealing from the world
and turn their trillions into our billions.
I think that sounds like a great plan and also a great plan for digital sovereignty.
Astra, you wanted to make a point?
Oh, I mean, on the chatbot point, I've gone back, as many have, and read some Hannah Arendt,
who, of course, was writing in the wake of the last experience with fascism, or one of them.
And she calls totalitarianism organized loneliness.
And I think that is a powerful phrase.
I mean, these companies are in competition with our real-life friendships,
with us going to cafes, with us going to talks like this, with us going to the library.
And there is a very destructive feedback loop there, where we are almost trained in the image of the people who own these companies, right? To see life as inconvenient, to see having to deal with a person as scary or awkward, you know. And now it's like, oh, don't even do your own clicking, an AI agent will do that for you.
And so once you are in that loop, it's hard to get out,
and it is exacerbated by the fact that there often aren't human therapists available
because those are not well funded or because health care is out of reach for people.
And so I think to pull people back from that loneliness that serves a very dangerous political purpose,
we have to give people the resources.
We have to offer them the opportunity to re-engage with social reality
and to become grounded in the real world.
That means investing in, you know... for me, the most effective policy to mitigate the risk of artificial intelligence is investing in social services, investing in real jobs for human beings that serve human needs.
Did you want to add something?
You know, sometimes I wake up and I'm discouraged about human psychological weaknesses,
human cognitive biases that make us vulnerable to manipulation of all kinds.
But even unintentionally, we are taking wrong decisions because of our nature, because of our egos, because of our drive to be
recognized, to have status, to potentially dominate others, and to feel comfortable about
ourselves in spite of reality. That is scaring me because there is a lot of scientific evidence
about what is happening, and people are not paying attention to it. So I have been involved in international efforts to attempt to do something similar to what has been done for the climate with the IPCC. There is an international panel that I've been chairing, involving 30 countries, that was started by the UK two and a half years ago, and about 100 experts, and we have had now two
reports about the current status of the science and the literature, the scientific literature, I mean,
on AI risks and risk management and safety issues and, you know, impact on the public.
And now I've started a new journey as part of the UN panel on AI,
maybe looking more at how the international community can work together.
And I think it's going to be difficult.
I think there are... so I'm going to take an example from myself, right? So before ChatGPT came out, I had some concerns about the social impact of AI, but overall, I thought, yeah, this is great. I'm a researcher.
I'm developing technology that could be really useful for society, and I felt good about it.
But then things flipped when ChatGPT came out and I started thinking about my children.
But why didn't I think about it before, right? Because everyone wants to feel good about their work.
Everyone wants to feel that their work is going to bring something good to the world. And so we're looking the other way,
when it makes us uncomfortable.
And so in my case, maybe it was that, you know,
I want to feel good about my work,
but there are other reasons.
It could be financial reasons.
It could be that we don't want to be feeling anxious
about, you know, catastrophes that could happen
about humanity disappearing or democracy disappearing.
So we kind of look the other way.
Like we have other issues in our life
and we don't want to be bothered with all of that anxiety.
And by the way, the same problems that really come from our Achilles' heel as humans,
psychological biases, are also preventing us from doing the right thing about climate.
Like, we could just do the right thing. It's not that hard, but a lot of forces, of course,
commercial interests, but not only are preventing us from doing the right thing.
So I'm worried that we may not have the wisdom to react rationally
and with compassion to what is happening right now.
But sometimes people ask me, you know,
am I an optimist or a pessimist?
And many people think, oh, Yoshua Bengio must be a pessimist because he's talking about all these, like, dangers.
But actually, I've been an optimist all my life.
And when I get those questions, my answer is always the same.
It doesn't matter whether I'm an optimist or a pessimist.
What matters is what each of us can do.
And I've been asking myself, what can I do? Can I do even a little thing that can steer the needle towards a better world for my children?
And that's what we should all do. And by the way, every citizen can do something because it is
the citizen's awareness of what is happening right now that can change the world. This is your power.
Thank you. I want to get into that for these last 10 minutes. But Cory, you had one more point on that, and then we'll move on.
Thank you. Yeah, you know, I think that
we've been talking a lot about hypothetical risks of future artificial life forms that become very powerful,
corrupt our systems, endanger our species.
But I think we live among those right now, and then if we can figure out how to deal with those,
then maybe we could sort out all of our future problems.
So the limited liability company is an immortal colony organism that uses human beings as gut flora.
Right.
It does all the things that we worry about AI doing, right?
It goal hacks, right?
So you tell a limited liability company, you will pay a million dollar fine if you run your business in a way that kills a worker and makes a million and $1, the firm says, you know, Dario Amadeo says, well, I am required to run this firm in the most profitable way because otherwise a scoundrel running a worse firm will kill their workers. And wouldn't it be terrible to be killed by a scoundrel rather than a noble person like me with the best of intentions? Right. So they goal hack. We worry about AI's corrupt.
our political process, have any of us notice the climate emergency that has been caused by
firms that are seemingly autonomous of their CEOs? Even within the firms, you have goal hacking.
So, you know, maybe you've noticed that if you pick up your phone and touch any part of it,
you know, if you don't handle it like a photo negative, you accidentally invoke an AI. And then it
takes you 10 clicks to get rid of that AI. And the reason for that is that within the firm,
they have set a key performance indicator, right,
the thing that determines your bonus,
of increasing interactions with AI,
and because the bosses are smart, they say,
well, it can't just be that someone uses an AI for one second.
We're going to set it to 10 seconds,
and that's when we're going to count that as a real interaction,
which is why every part of your phone is now booby-trapped
to summon an AI that takes 10 seconds to get rid of.
Right? So this is what we worry about AIs doing.
We say, oh, we tell the AI to do X,
and it finds a perverse way of doing it.
That's just firms.
Maybe if we figure out how to cope with the firms that have colonized our planet, that are endangering our species, and that are also making the AIs. Maybe if we do that, we'll have the AI problem solved, too.
Astra, staying kind of in the global sphere, the argument that some of these firms have against regulation,
is that they say, well, you'll see the advantage to places like China. What do you make of such arguments?
I think that we need a diplomatic approach, right?
If we are in a, oh, they're going to race over the cliff before we are, we will go over the cliff first.
It's not a very smart strategy.
Thank you.
And I think this gets to this issue that has been recurring, which is international cooperation and not giving up on that dream.
I mean, riffing on my book, many people tweet at me.
They tweet this phrase: international law may not exist, but we'll miss it when it's gone, riffing on the theme of my democracy
book. And I think that, you know, that's something that humans built. And they built it in the wake
of a horrendous catastrophe in very inauspicious conditions. I mean, with the United Nations, you know, the League of Nations had already failed, right? And yet they had the audacity to try to build the
UN. I think one thing we have to... I really want to highlight a point Cory just made, where he compared a noble actor to a scoundrel with AI.
This is dangerous technology, whether the people are good or bad.
The technology is the problem.
And that is why it needs to be regulated.
We shouldn't just be nostalgic for the days when the CEOs were less rude,
less anti-human, and less in our face.
And this is why international cooperation and regulation is needed.
And when we look at nuclear weapons, for example,
I've gone back and reread some of that history,
people organized over decades.
It was a multi-generational,
multinational organizing effort
that ebbed and flowed but did not relent.
And it started with scientists who came out of the race
to build a nuclear bomb and then said,
okay, well, now we need to get rid of these death machines.
Also victims of the bombing in Japan
became some of the earliest anti-nuclear activists.
But then all sorts of ordinary people engaged in all kinds of protests, from civic engagement and marches to civil disobedience, and passed the baton again from country to country. And that's what it takes.
And we were able to eliminate tens of thousands of nuclear weapons, but we didn't go all the way.
And I think the lesson there isn't that it doesn't work, but it's that you have to stay vigilant
and you have to stay organized and you have to keep applying pressure.
Corey, last word for you.
You know, Astra and I were talking on the walk over here about a conference I recently attended, a kind of Indigenous-informed environmental conference called Bioneers, which had a lot of speakers who came in and spoke about the rights for non-human life, rights for ecosystems, rights for animals, and so on.
And there's an argument that every time we extend
personhood to
the rest of the world,
we become better at being people.
And the speaker was a guy, Michael Pollan, who is a famous writer, and he said, you know, that's true of the natural world. But when it comes to extending personhood, we've done it once, extended personhood to an unnatural construct. And it's the limited liability company, and it's been a catastrophe.
And I
think that I am all
for building tools
that do interesting things. I'm all
for letting people use tools in ways that I
think are foolish, because I
have the epistemic humility to know that
I can't understand why you use your gadget
your way, and I use it my way.
I don't want to extend
personhood to our tools. I don't think our tools are people. I think our tools are tools.
It's a great place to end. Thank you, Cory, Astra, and Yoshua. Thank you very much for taking my questions. Thank you.
You just heard AI: The Ultimate Disruptor, with Cory Doctorow, Astra Taylor, and Yoshua Bengio. The live event was organized by the McGill Institute for the Study of Canada.
Special thanks to Daniel Belon and Axel Clender
And to our CBC Montreal team
Dominique Baudouin, Martin Mace,
Lynn Bissudo,
Andrea Stanford and Natalie Walter
This episode was produced by Greg Kelly and Tom Howell
Technical Production, Emily Caravaggio,
Lisa Ayuso is our web producer,
Senior producer Nicola Luksic,
Greg Kelly is the executive producer of Ideas, and I'm Nahlah Ayed.
