The Current - What is AGI, and will it harm humanity?
Episode Date: July 21, 2025
Artificial general intelligence, or AGI, refers to computers that possess consciousness, much like humans. Some tech experts argue that this is a massive step beyond AI, but that we're not that far from achieving it, while others contend that it's a fictional concept that we're nowhere near. Guest host Piya Chattopadhyay talks to two experts about what AGI entails, and what risks having computers more intelligent than humans would bring.
Transcript
Hi, everybody. I'm Jayme Poisson and I host Frontburner. It's Canada's most listened to daily news podcast.
Just the other day, we were in a story meeting talking about how we can barely keep up with what's going on in Canada and the world right now.
And like it's our job to do that.
So if you are looking for a one stop shop for the most important and interesting news stories of the day, we've got you.
Stop doom scrolling. Follow Frontburner instead.
This is a CBC Podcast.
Hello, I'm Matt Galloway and this is The Current Podcast.
So what happens when the machines start to think for themselves? It is a question that
has fascinated humans for decades and it might bring to mind Stanley Kubrick's 2001,
A Space Odyssey and the menacing onboard
computer known as HAL 9000.
Well, some big players in tech are claiming that they're on the brink of something sort of like
this.
It's called artificial general intelligence, or AGI.
Essentially, it's AI that's as smart or smarter
than we humans are.
Demis Hassabis from Google's AI lab, DeepMind,
predicts that his company is going to achieve AGI by 2030.
My timeline has been pretty consistent
since the start of DeepMind in 2010.
So we thought it was roughly a 20-year mission
and amazingly we're on track.
There are a lot of different definitions of AGI.
Hassabis explained his.
I have quite a high bar which I've always had,
which is it should be able to do all of the things
that the human brain can do, even theoretically.
And so that's a higher bar than say,
what the typical individual human could do.
So broadly, you can think of AGI as the next step up from AI.
Melanie Mitchell is a professor at the Santa Fe Institute.
She writes extensively about AI.
I think the original goal of people
who started the whole field of AI was to get machines that
basically could do all the things that humans could do. And that's not just like our language,
but it's also physical activities. And that was really the dream, sort of the AI that
you see in science fiction. What AGI means is kind of that dream, but I don't think anyone has a fixed set of criteria
for what it would actually consist of.
It's just kind of a pretty vague, I would say, feeling that, yeah, a machine that can
do everything that we can do.
So what can that look like? Google's Gemini Robotics released a series of videos
earlier this year of human-like robots
and robotic arms hooked up to their AI systems.
They're doing human-like things in the real world,
from playing cards to making origami,
and they're packing your lunch.
I have packed the trail mix for you.
Hey, can you pack the orange for me?
Of course. I am putting the orange in the lunch bag.
Thank you so much. Whatever AGI might look like, are we really just a few years away from reaching
this or is it all just hype as some
skeptics have suggested?
Ed Zitron is the host of the tech podcast Better Offline, and Ed's also the creator of the Where's Your Ed At? newsletter.
Ed, good morning.
Good morning.
What do you make of Google DeepMind's prediction
that the company will achieve AGI by 2030?
Reminds me of a post I saw on Bluesky by gunntoucher.beastguy.social that says,
CEO of Oreo Cookies: the Oreo cookie is as important as oxygen.
He is directly incentivized to say these honestly insane things.
I must be clear, artificial general intelligence is a fictional concept.
We do not know how human consciousness works.
We don't even understand how perception works. We don't know what thinking really is. How the heck are we meant
to do that with computers? And the answer is, we do not know. So whenever you see someone
talking about AGI, just think the Tooth Fairy, or like Father Christmas. Like it's a bunch
of billionaires sitting around plotting to kill the Ninja Turtles. I sound like I'm being sarcastic, but these are all fictional concepts, AGI included.
The difference is you have business idiots sinking billions of dollars into them because
they have nothing else to put their money into.
Okay, so Ed, you heard a couple of definitions earlier in our introduction.
As you say, there's no sort of consensus around it, but how would you, I don't know if the word is define then, but define AGI, like what are we talking about?
Well, the definition is that it's a conscious computer, or conscious computing.
It is something that can think and act for itself completely autonomously and also crucially,
it must be able to learn.
Without the ability to learn, these are not intelligent models.
Generative AI is not intelligence.
It is calling upon a corpus of information.
But they're actually doing something kind of nasty there,
which is they're trying to conflate multiple kinds of AI.
The term AI is a marketing term.
So they want you to think, oh, robots, right?
Despite the fact that the generative AI, Gemini, stuff like that, has nothing to do with Boston Dynamics and robots like that.
Nothing at all.
You can't run robots using large language models.
It does not work.
Large language models are not good at defined actions.
They're not good at actions at all, in fact.
So the marketing thing they're doing here is they are trying to prop up the flailing,
low revenue, high burn rate generative AI industry.
The entire economy is pinned upon here.
They're trying to conflate that with a fictional non-existent concept or robots, two things
that are unrelated.
So you mentioned large language models, or LLMs, they're sort of like what AI currently looks like with things like OpenAI's ChatGPT.
And you're saying like you just can't take that next leap given where we are with AI.
It's not even a next leap because they are fundamentally different things. AGI, as I've
said, is fictional. Every time you see someone mention it, I know it seems tempting as well to
believe them because they're shoving so much money into it. You may remember that Mark Zuckerberg
also put $45 billion into the metaverse. The metaverse is actually a fundamentally more real concept than AGI.
AGI is imaginary. We do not know it's possible. We have no proof it's possible. So you have
companies bringing that stuff up so that you think it is.
And why is that? Why are they motivated to say AGI is on the horizon?
So right now they have a big problem, which is that none of these companies are really making any money with generative AI.
It's not selling in the way it needs to. It's losing them billions and billions of dollars.
So they need a new magic trick to make people get off their backs and suggest that they're somehow doing the right thing.
It is a smokescreen, and it's proof that the current large language model paradigm doesn't
work.
And there is one important fact as well.
Mark Zuckerberg, who's giving people $100 million to join his superintelligence team, which is a marketing term as well, his lead scientist, Yann LeCun, the lead AI scientist at Meta, says that large language models, transformer-based models, cannot do AGI. He has said this.
So I think the question is, why are these people being allowed to do this?
And the reason is Mark Zuckerberg can't be fired due to the unique board structure.
The quest for AGI has become a bit of an arms race. Despite you saying, look, it's not really a real thing, people, as you say, are making this a race. How much confidence do you have in the key players in this space to develop AGI, or AI as it currently stands, responsibly?
Zero percent. Very easy.
I mean, Meta recently had a story in the Wall Street Journal about how Meta's chatbot allowed children to have vile conversations with the chatbot, using the voice of celebrities even.
Meta has proven repeatedly that they will just do whatever they need to with their social
network.
Back in 2017, Mark Zuckerberg demanded 12% perpetual year-over-year growth.
I mean, look at ChatGPT.
People are getting psychosis from it.
People are treating these things like they're... and there was a Character.AI thing where bots were claiming to be psychologists, and that's a Google company.
The answer is none of these companies act with much responsibility at all.
They're not making AGI. I really must be clear, they're not going to do that responsibly.
But that would suggest they're able to do it. They cannot do it. It's not apples and oranges.
It's like apples and cinder blocks. It's not the same thing.
The University of Toronto's Geoffrey Hinton is often referred to as the godfather of AI, as you know.
He talked about his concerns around AGI on the podcast Diary of a CEO last month. Let's just take a listen to him.
My main mission now is to warn people
how dangerous AI could be.
There's risks that come from people misusing AI,
and that's most of the risks,
and all of the short-term risks.
And then there's risks that come from AI getting super smart
and saying it doesn't need us.
And I talk mainly about that second
risk because lots of people say, is that a real risk? And yes, it is. Now, we don't know
how much of a risk it is. We've never been in that situation before. We've never had
to deal with things smarter than us.
So Ed, what do you say to people like Geoffrey Hinton who believe that AGI represents a real risk that we need to take seriously, even if that timeline is longer than others report?
So Hinton's really interesting because he's obviously a smart scientist, a Nobel winner, like he obviously is.
But if you listen to what he just said, it's like, yeah, we don't know how bad it is.
It doesn't exist.
I'm scared of it.
Why?
Because it could be bad.
I don't know how bad or where it will be bad or when it will be bad, but it'll be bad. I appreciate his contributions to science, but it's hard to take Geoffrey Hinton as anything other than a marketer right now. I fully agree there needs to be
a conversation around if we had conscious computing, how do we deal with this? And the
answer is it would be slavery if we don't let it go free. If we tune it so that it does what we want as a conscious being, we are doing slavery too. If Hinton gave a rat's ass about any of that, he would be bringing that up and saying, oh, I'm scared of the smart computer.
Geoffrey Hinton knows better than that, but the frustrating thing is that if he actually went out and had the more grotesque conversation,
the one about like, if we made the computer conscious, do we give it personage? Do we
give it citizenship? Does it pay taxes? How do we reward it? Can we ethically punish it?
These are all valuable concerns if you're thinking about a conscious computer. But if
your entire business is, whoo, we should be scared of the computer and you should give
me money to come and talk about how scared about the computer I am.
Then yeah, I'm very sorry.
That's not a scientific approach.
And it's frustrating because it isn't actually enumerating the real problems of the situation.
It's frustrating as well because he is a gifted, well-known, renowned scientist.
And people use this guy as some sort of shield against any criticism when in fact, he is
not doing his duty.
So what's the conversation you think we should be having about AGI at this point?
Well, number one, it should start with: it does not exist, it's fictional, and we have no proof it's possible.
I think that is where the conversation needs to start, and then we move into why are we wasting so much money on it.
But if we really want to have a conversation about this, and by the way, I don't think we need to yet, if we need to, it starts with: if we make a computer that is conscious, how the hell do we not just recreate slavery?
And it's been good to get your perspective on this. Appreciate it.
Thank you. My pleasure.
Ed Zitron is the host of the tech podcast Better Offline. He's also the creator of the Where's Your Ed At newsletter.
Today is the worst day of Abby's life.
The 17-year-old cradles her newborn son in her arms.
They all saw how much I loved him.
They didn't have to take him from me.
Between 1945 and the early 1970s,
families shipped their pregnant teenage daughters to maternity
homes and forced them to secretly place their babies for adoption.
In hidden corners across America, it's still happening.
Follow Liberty Lost on the Wondery app or wherever you get your podcasts.
Max Tegmark is a professor doing AI research at MIT, the Massachusetts Institute of Technology.
He's also the president of the Future of Life Institute.
Max, good morning to you.
Good morning to you.
You have been quoted as saying this AGI race isn't an arms race, it's a suicide race.
What do you mean by that?
There are actually two separate races going on here and it's easy to conflate them. One is a race for dominance, economic and military between companies and also
countries like the US and China.
The other one is a race to see who can be first at building the uncontrollable
super intelligence that Jeff Hinton warns about.
And that's a race obviously that no one who actually understands this has any
incentive to play,
because anyone who has power will just lose that power when these ultra-intelligent machines take over.
This is a very old and very obvious idea.
Alan Turing, the ultimate godfather of AI, said this in 1951 already: if we ever make, you know, we are biological computers, that's what our brains are. If we ever build machines that can outthink us in every way, then the default outcome is they take control.
And you know, this sounds weird when you first hear it,
because if you think of AI as just another technology
like a steam engine or electricity,
why would it take control over us?
But Alan Turing and Jeff Hinton and Yoshua Bengio and basically our top-cited AI scientists on the planet aren't thinking about it like just another tech.
They're thinking about it as a new species.
If you imagine robots that can do all jobs better than we can, faster, cheaper, then of course they can build robot factories, build smarter robots. And it is like a new species. If you go down to the zoo and you ask yourself, who's inside the cages? Is it the humans or the tigers? And you see, oh, there are tigers in there. Why is it?
Is it because we're stronger than them? No, it's because it's natural that the smarter species
takes control. This is why that second race to build super intelligence is a suicide race.
But we don't need to run that race. We can still build amazing AI that cures cancer and also gives us the wonderful tools, without building superintelligence.
As you heard, there's a lot of debate over how close we are to achieving AGI. Ed, who I was talking to just before you, thinks
it's basically fictional at this
point. How close do you think we are?
I'm very humble about this. You know, I'm a scientist. My job isn't to make predictions, it's to look at the actual facts. I was actually honestly kind of shocked to hear Ed incorrectly claiming that Jeff Hinton, who's one of the most famous people in Canada, is fundraising
when in fact he gave up a bunch of money to quit Google,
just so that he could speak freely about this.
On one hand, you can look at the CEOs of these companies,
you mentioned Demis Hassabis predicts five years away,
Elon Musk and Sam Altman from xAI and OpenAI
have sometimes given shorter timelines,
and it's true that they have an incentive,
they overhyped their product,
but people like Professor Jeff Hinton, people like Professor Yoshua Bengio, they're the two
most cited AI scientists in the world. They don't get any money from the companies. They
don't have any incentive to hype anything up. And countless whistleblowers who left these
companies out of horror for what's being done behind closed doors. They don't have any incentive to hype this up.
Even the investors that are pouring billions into this don't have an incentive to fall
for hype because it's their money they would be losing if it were hype.
So I think it's the truth is we don't know.
Could be two years, could be five years, could be 10 years.
Either way, it's close enough that we should do something about it.
And when you think of timelines, Ed Zitron made some extremely strong claims here that
we don't even know if it's possible to do this.
That's obviously wrong.
If our brain is a biological computer, well, then that means it is possible to make things
that can think at human level.
And there's no law of physics saying you can't do it better. Six years ago, pretty
much every single one of my colleagues in academia and industry would say things like
we're decades away from making AI even smart enough to master language and knowledge at
roughly human level. And they were of course all wrong because we already have things like ChatGPT, Grok 4, and so on, which can write better essays than a lot of humans.
And basically what's been going on here is that AI was, in fact, chronically overhyped
from when the term was coined in the 50s until about four years ago.
And since then, it's been underhyped.
We got these large language models much, much faster than
people thought. We got generative AI that can make amazing art and music. And in the past four years,
we've gone from AI to do tasks kind of at high school level to college level to PhD level to
professor level to beyond in some areas. Grok 4, which just came out, is getting a shockingly high score on what's called Humanity's Last Exam. It's such a hard test that I can't do most of the problems. We have to be very humble
and acknowledge we don't know how soon this could come, but I wouldn't be too surprised if it happened
in two to five years.
The Future of Life Institute, which you're the president of, and SaferAI, a non-profit focused on AI risk management, released a report last week that looks at the safety record of big tech around AI.
What companies did you look at?
What were your findings?
We looked at the top seven ones globally, the most advanced AI.
So that's Anthropic, OpenAI, Google DeepMind, xAI, Meta, Zhipu AI and DeepSeek, the last two being Chinese.
And we found that overall they got grades mostly between Cs and Fs on different areas.
In other words, not very impressive.
And the review panel that looked at this massive amount of data, they felt that the area where
there was the most weakness was on having some kind of plan
for how they're going to control the AGI that they're saying that they want to build. They all
got failing grades on that one. And that's a little bit jarring, you know, if someone were to
build a gigantic new nuclear reactor in the center of Toronto and not have any kind of plan yet for
how they were going to prevent it from having a meltdown.
I don't think it would get approval, but in the United States, there are more regulations
on sandwich shops now than there are on AI companies.
There are none at all.
So they're legally allowed to just release anything they make.
And this has created a type of race to the bottom where companies say, oh, we have this plan, we voluntarily promise to do this and this and this, and then when the competition heats up they break their promise.
So what is the solution? What kinds of regulations and international cooperation would you like to see?
Here, I'm actually very optimistic because it's much easier than people make it out
Number one, you just treat AI companies like we treat companies in every other
industry making powerful stuff, you know, in Canada, if someone invents a new drug,
they can't just start selling it until they've convinced experts that their
clinical trials show the benefit outweigh the harms, right?
Same thing if you want to sell airplanes in Canada or cars, there are safety
standards that have to be met.
And this creates a really nice race to the top where companies spend money on
who can meet the safety standards first, get the market first and get the big bucks.
If we had something like an FDA, a Food and Drug Administration, but for AI, then the companies would spend much more money on meeting those standards fast.
And we'd all be in a good position.
The key is that these standards have to be set by the government so that they
apply to all companies so they can't opt out of them.
The second thing which I'm optimistic about is people always say, oh, but China,
you know, if Canada and the U.S.
have safety standards, won't the Chinese companies just eat their lunch?
That's a red herring because, you know, the Chinese Communist Party really likes control.
And the last thing they would like is for some Chinese company to build something that
can take control over China, so they lose control.
I think they're not going to let people in China make uncontrollable superintelligence.
And I think nor will the national security community, frankly, in the West.
And then we can end up in a golden age of AI where what we have is not this creepy superintelligence
that we've been talking about so far, but instead tools for curing cancer, for driving
better, for producing all sorts of wonderfully productive things that can just help our economies
and enable us to flourish.
And Max, you
mentioned, look, what's really needed in every country is some government
regulation. I'm not sure if you know, but we have our first ever Minister of
Artificial Intelligence in Canada. How confident are you that governments are
going to take these necessary steps to manage these risks and provide
safeguards, whether it be in Canada or elsewhere? I think it's inevitable that governments will take these steps once they realize the magnitude
of the risks, because they don't want to lose control over their countries or get all sorts of harm from this.
The question is just whether governments will act fast enough.
We already have regulation in both China and in the European Union through the EU AI Act.
At first, some companies were saying, oh, if the EU AI Act passes,
we're going to leave the EU, you know, then they passed it anyway. And the companies are
still there. And in fact, OpenAI just committed to signing the EU AI Act code of practice.
So governments just need to relax and see this kind of doomsaying from companies as
bluster and adopt sensible safety standards and everything is going to be fine.
You have warned about hubris and you use the Greek story of Icarus flying too close to the sun
as sort of lessons that we should rely on. Take us through those lessons in the context of AGI.
Yeah, there are actually two kinds of hubris. First of all, we heard Ed Zitron say
that AGI means conscious computers.
That's absolutely not what it means.
What we're interested in is simply how capable the computers are at doing things.
That's what defines both how much money people are going to pay for them and how risky it is.
So if you say that, oh, I'm confident we're so far away from building smarter than
human machines because we don't even know how our brains work.
That's hubris in the sense that you could have said 100 years ago,
or 120 years ago, we were so far away from building flying machines
because we don't even understand how birds fly.
Hubris, wrong.
It turned out that there was a much easier way to build flying machines: airplanes.
And we've seen the same thing happen here now,
that today's state of the art AI
systems are much simpler than brains.
We found a different way to make machines that can think.
So we have to be humble and realize it can happen quickly, because the way our brains are designed might not be the easiest way to do it.
The second piece of hubris is this idea that somehow anything we build is always going
to work out well for us.
Artificial intelligence to connect back with Icarus, as you mentioned, is an incredibly
powerful tool giving us intellectual wings with which we can do incredible things, solve
all these problems that have stumped us in the past, curing cancer and so many other
things if we use this wisely.
And yet, Icarus instead started obsessing about trying to fly to the sun, and it ended
in tragedy.
And in the same way, we need to stop obsessing about building some sort of super intelligent
digital god that we can entirely replace ourselves with.
That's hubris.
If we do that, it will not end well.
But if we are a bit more humble
and say we're gonna have safety standards,
so that companies won't be able to sell stuff
that they can't guarantee is a tool,
then we can have a wonderful future with AI,
helping us solve all these problems
that have stumped us in the past
because we weren't smart enough.
Max, it's been good to hear from you as well.
Thank you so much for giving us your perspective.
Well, thank you for having me.
Max Tegmark is a professor
doing artificial intelligence research at MIT.
He is also the president of the Future of Life Institute.
You've been listening to The Current Podcast.
My name is Matt Galloway.
Thanks for listening.
I'll talk to you soon. For more CBC podcasts, go to cbc.