American Thought Leaders - Max Tegmark on AI Superintelligence: We’re Creating an ‘Alien Species’ That Would Replace Us
Episode Date: June 4, 2025. Few people understand artificial intelligence and machine learning as well as MIT physics professor Max Tegmark. Founder of the Future of Life Institute, he is the author of “Life 3.0: Being Human in the Age of Artificial Intelligence.” “The painful truth that’s really beginning to sink in is that we’re much closer to figuring out how to build this stuff than we are figuring out how to control it,” he says. Where is the U.S.–China AI race headed? How close are we to science fiction-type scenarios where an uncontrollable superintelligent AI can wreak major havoc on humanity? Are concerns overblown? How do we prevent such scenarios? Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.
Transcript
One of the most memory-holed stories of 2023 was that the CEOs of all of these companies,
Sam Altman, Demis Hassabis, Dario Amodei, signed this statement saying,
hey, this could cause human extinction. Let's be careful.
They said it. It's kind of like if Oppenheimer and Einstein had written a letter warning that
nuclear weapons could kill people and then everybody just sort of forgot about it.
Few people understand artificial intelligence and machine learning as well as MIT physics
professor Max Tegmark. He's the author of Life 3.0, Being Human in the Age of Artificial
Intelligence, and founder of the Future of Life Institute.
The painful truth that's really beginning to sink in is that we're much closer to figuring
out how to build this stuff than we are figuring out how to control it. Where's the US-China AI race headed? How close are we to science fiction type scenarios
where an uncontrollable AI can wreak major havoc on humanity? And how do we prevent that?
Do you know that AI is currently the only industry in the US that makes powerful stuff
that has less regulation than sandwiches?
This is American Thought Leaders, and I'm Jan Jekielek.
Max Tegmark, such a pleasure to have you on American Thought Leaders.
Thank you.
We've been seeing an incredible growth in AI, especially through these chatbots. That's what
all the fuss is about, maybe some also in robotics and
some other areas, some very interesting applications. But I have this sense, I have this intuition
that what's really happening is much, much greater than that at a level that perhaps I can't even
grasp at this point. So why don't you tell me what's really going on here? So it is indeed hard to see
the forest for all the trees
because there's so much buzz in AI every day.
There's some new thing.
So if we zoom out and actually look at the forest,
we humans have spent a lot of time over the millennia
wondering about what we are, how our bodies work, et cetera.
And hundreds of years ago, we figured out
more about muscles and built machines
that were stronger and faster.
And more recently, there was a quest to try to understand
how our brains process information
and accomplish tasks.
The name artificial intelligence was coined in the 50s.
People started making some progress. Since then, it's mostly been chronically overhyped.
The people who started working on this said, we're going to do a summer workshop at Dartmouth and kind of figure it out.
It didn't happen.
But for the last four years, it's
switched to being underhyped, where progress has actually
been much faster than people expected.
So to give you an example, I've been doing AI research at MIT
as a professor there for many years.
And as recently as six years ago,
almost all my colleagues thought that building
a machine that could master language and knowledge
kind of at the level of GPT-4 was decades away. Maybe it would happen in 2050. And they were obviously all wrong. It's already happened. And since then, AI has gone from kind of high school level to kind of college level to PhD level to professor level and beyond in some areas, which brings us close to the point that the godfather of the whole field, Alan Turing, talked about in 1951. He said, look folks, if you ever make machines that are just better, that can outthink us in every way, you know, they're gonna take control. That's the default outcome. But he said, chill out, it's far away, don't worry.
I'll give you a test though, a canary in the coal mine for when you need to pay attention.
It's called the Turing test.
That's when machines can master language and knowledge,
which is what's now happened with ChatGPT-4 and so on.
And so basically, after a lot of hype and failed promises,
we're now at the point where we're
very close to probably the most important fork
in the road in human history, where
we have to ask ourselves, are we going to somehow
build inspiring tools that help us do everything better and cure diseases and make us prosperous
and strong, or are we going to just throw away the keys to the planet by building some
alien, amoral machine species that just sort of replaces us?
I really want to touch on this. I'm very fascinated about how you talk about the AIs that we're creating as alien. That's very interesting.
Just very briefly, what is the Turing test and how has it been passed now?
The Turing test, the way Alan Turing defined it, is an imitation game. You basically get to
interact with a machine, maybe by typing.
And the question is, can you tell the difference
between a machine and a human?
There was actually a nerd paper that was written by a bunch of scientists
recently where they've now found that machines pass this test more often
than humans even.
They're even better at convincing people they're human than humans are.
But in practice, passing the Turing test means that you really master language and basic
knowledge and so on the way that today's chatbots can clearly do.
And the reason this test is so important is because if you ask yourself, why is it that
when you go down to the zoo here in Washington,
it's the tigers that are in the cages, not the humans.
Why is that? Is it because we are stronger than the tigers or have sharper teeth? Of course not.
It's because we're smarter, right? And it's kind of natural that the smarter species will dominate the planet.
That was the basic
point Turing made and it still stands.
People who don't see this, I think, make the mistake of thinking AI is just
another technology like the steam engine or the internet, you know, whereas Turing
himself was clearly thinking about it as a new species. That might sound very
weird, because your laptop isn't a species, of course.
But if you think of a species as something
that can make copies of itself and have its own goals
and do things, we're not there now with AI.
But imagine that in the not-too-distant future we have robots that can just outthink us in every way and out-plan us and out-strategize us. Of course they can also build robot factories and build new robots. They can build better robots, then still better robots, etc., and gradually get more and more advanced than us.
That checks every box of being a new species.
And at that point, you know, before that point, we have to ask ourselves, is that
really what we want to do? You know, we have this incredible gift. We're placed here as
stewards of this earth, you know, with an ability to use technology for all sorts of
wonderful stuff. And I love tech as much as anyone. But don't we want to aim for something
more ambitious than just replacing
ourselves? Again, before I start talking about this alien foreign thing, which we seem to be
creating ourselves, I've been thinking a lot about how these AIs learn with all sorts of information.
Information has become increasingly politicized,
especially over the last decade. It's always been a bit politicized, but in the last decade,
it's been taken to a whole different level. This is a huge challenge for us at Epoch Times to cut
through that. That's part of the reason I think we've had a lot of growth is because I think we
have. You have Verity News now, as it's called, an attempt to do the same sort of
thing. But it's also been demonstrated that many of these chatbots, even at this, I guess you would
call it still a primitive level of AI development, they have huge holes in their knowledge and
really biased ways of thinking. It's like the values of the programmers have basically
gone into them, right? And there's even ratings on what kind of values the different chatbots have. Sure.
Well, we're not even talking about creating this AGI or anything like that. That itself seems a
huge problem at this primitive level. Your thoughts?
For sure. George Orwell pointed that out. If someone can control the truth, in
quotes, that's pushed onto people, then they can control the world, of course. And living
in some sort of AI-fueled 1984 is not what I want for my children, you know.
It's basically the same thing as being a good journalist, which means that you pursue the truth.
You follow the truth, whatever it is, even if it's not what you
had hoped you were going to find, you know.
And I sometimes get asked what I mean by being a scientist.
And my definition would be that a scientist is someone who
would rather have questions they can't answer than answers
they can't question.
And you're laughing here, like think about Galileo.
You know he asked some questions that the authorities didn't want him to ask.
Look at people more recently who asked questions that were deemed inconvenient.
All sorts of stuff that got them blocked on Facebook or whatever, and then later turned out to be true.
And what we see is this is a quite timeless thing.
George Orwell was on to something.
The first thing that people who want to manipulate
the truth will do is accuse their critics
of spreading disinformation.
So as a scientist, I get really annoyed when people say,
oh, just follow the science.
Being a scientist is not like being a sheep
and following what you're supposed to follow.
It's asking all the tough questions, whether people
get pissed or not about that.
And that's why I created the Verity news site. Because, just like in science, it's hard to figure out the truth, let's be honest.
Very hard. Very hard.
Very hard.
You had the world's smartest physicists for hundreds of years believing in the wrong
theory of gravity.
And that wasn't even political.
They believed in Newton and then Einstein came along 300 years later and said, oops,
we were all wrong.
So the idea that you can just figure out the truth by having some government appointed
or Facebook appointed committee that's going to decide it for you is ridiculously naive,
right?
If it were so easy to figure out the truth, you could shut down the universities, shut
down all the news organizations.
I think science has the best track record historically of ultimately getting to the
truth, closer to the truth.
And so the idea with Verity was just to sort of emulate that in media.
Rule number one is you don't censor anybody,
even if they're dressed badly or don't have a fancy degree.
If they have a good argument, you've got to listen to them.
I'm pretty sure that someone back at MIT
is going to criticize me, saying, hey, Max, why are you talking to people at The Epoch Times?
I'm a scientist.
I will talk with anybody.
I will listen to anybody also.
And that's the scientific method.
If you go to Verity.News, all the different perspectives
are in there.
And we actually use machine learning
to figure out what people from the different sides
of a controversy all agree on.
We call those the facts. And then we also separate out where they disagree. So people
can just go in if they're busy, you know. Oh, this seems to have happened because Fox
News and CNN agree on it. And here is what these people say. Here's what these people
say. Now I can make my own mind up. That's how I want to have it as a scientist too. If someone is arguing about a vaccine or anything controversial, I want
to see, okay, these ones say this, these say this. Now I can decide for myself.
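A minimal sketch of the idea Tegmark describes here, not Verity.News's actual pipeline: treat as shared "facts" only the claims that outlets on every side of a story assert, and keep everything else labeled by side. The outlet labels and claims below are hypothetical.

```python
# Toy sketch: call "facts" only the claims asserted by outlets on every side
# of a controversy, and present the rest as "side X says ...". This is NOT
# Verity.News's real machine-learning pipeline; the data here is made up.

from collections import defaultdict

# Hypothetical normalized claims extracted from coverage of one story.
claims_by_side = {
    "left-leaning":  {"bill passed 52-48", "protests outside capitol", "bill harms workers"},
    "right-leaning": {"bill passed 52-48", "protests outside capitol", "bill cuts waste"},
    "wire-service":  {"bill passed 52-48", "protests outside capitol"},
}

def split_facts_and_disputes(claims_by_side):
    """Claims present on every side count as shared facts; the rest stay
    attributed to their side so readers can make up their own minds."""
    sides = list(claims_by_side.values())
    facts = set.intersection(*sides)
    disputed = defaultdict(list)
    for side, claims in claims_by_side.items():
        for claim in claims - facts:
            disputed[side].append(claim)
    return facts, dict(disputed)

facts, disputed = split_facts_and_disputes(claims_by_side)
print("Agreed by all sides:", facts)
print("Disputed, by side:", disputed)
```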
One of the big challenges is that we have something that I kind of call the megaphone.
But it's like when you have an information ecosystem where there's, for some reason, consensus on an
issue, no matter how crazy it turns out to be years later, and then that dominates the information
ecosystem. According to your algorithms, you might think because of that, that's the truth.
Because all these different media agree on this point. One example was this idea of Russiagate.
There was this false narrative that the president was a Russian agent. Everybody was writing it.
You would think you would have believed it almost, except that we were asking the difficult
questions that got us in huge
trouble. And of course, you're vindicated later, but you suffer along the way. And the
question is—
Just like Galileo.
Just like Galileo. But is Verity able to distinguish that? I'm just fascinated
by what you're trying to do here.
Yeah. We actually did an interesting experiment. This was with a student, an MIT student, Samantha D'Alonzo,
where we gave AI a million news articles to read
and asked it if it could automatically
figure out which newspaper wrote the article just
by reading the words.
And it was remarkably good.
And it put all of the newspapers on a two-dimensional plane.
There was the obvious left-to-right axis: if it's an article about abortion, it was pretty easy to tell. The AI discovered that if it talks about fetuses or unborn babies, it could make some guesses as to what views the journalist had about that matter.
But there was a whole separate axis also that the AI just found by itself, which is what we call the establishment bias.
Are you basically saying the same thing that everybody else is saying, or are you being the Galileo here, saying the unpopular thing?
It's often newspapers that are a little bit farther away from power that don't always
echo the big talking points of the corporations, and so on.
So there is an objective way of ultimately noticing this.
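A toy version of the kind of analysis described in the experiment above, not the actual MIT study: represent each outlet by how often it uses certain loaded phrases, then project onto two principal axes. The phrase list and counts are made up; the real work used on the order of a million articles and a much more careful method.

```python
# Sketch: place outlets on a 2-D plane from their word choices.
import numpy as np

outlets = ["Outlet A", "Outlet B", "Outlet C", "Outlet D"]
phrases = ["unborn baby", "fetus", "undocumented", "illegal alien", "official said"]

# Hypothetical phrase counts per 10,000 words for each outlet (rows).
counts = np.array([
    [8.0, 1.0, 1.0, 6.0, 9.0],
    [1.0, 7.0, 8.0, 0.5, 9.0],
    [6.0, 2.0, 2.0, 5.0, 2.0],
    [2.0, 6.0, 7.0, 1.0, 2.0],
])

# Center the data and take the top two principal components via SVD.
centered = counts - counts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # each outlet's position on the 2-D plane

for name, (x, y) in zip(outlets, coords):
    # What each axis means (e.g. left-right vs. establishment) has to be
    # read off from which phrases load on each component.
    print(f"{name}: axis1={x:+.2f}, axis2={y:+.2f}")
```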
And what Verity tries to do is just
make sure we sample from all the corners of this
for every article.
And in particular, whenever there's a topic,
you know what the controversy is going to be about it.
So you make sure you look at newspapers on both sides of it
and see what do they agree on, right?
Yeah.
So that the viewer can come in very quickly and say, okay, this is what happened, here's
what they say, here's what they say, and make their own mind up.
It's incredibly important to me that we never tell people what to think, but that we make
it easier for them to make up their own mind.
But what you just said is really super interesting, right? Because what if the AI is injecting
inherent bias? I mean, this dovetails perfectly, right?
Of course it is. And in two ways. I mean, one is just from its training data and so on. But also,
if you have a country and all the media is owned by one guy, the media will start writing nicer things
about that guy.
There was a study about what happened
to the views of the Washington Post
after Jeff Bezos bought it.
You will not be so surprised you fall off your chair
if I tell you they found that it started criticizing Amazon less.
And now we're in a situation where OpenAI, Google, et
cetera, et cetera, these companies
are controlling an ever larger fraction of the news
that people get through their algorithms, which gives them
an outsized power.
It's only the beginning of an ever more extreme power
concentration that I think we'll see
if we keep giving corporate handouts to these AI companies,
let them do whatever they want, that they will define the truth. And the truth will always include, oh, we tech companies are great, we should be allowed to do whatever we want.
One of the things that shocks me is that there are actually, in the literature of some of
these AI companies, some of them appear to be wanting to create
artificial general intelligence. So I'm going to get you to tell me what that is
for the benefit of our audience. And also, that is astonishing and almost like a kind of, to me,
like some sort of odd quasi-religious commitment, which scares the living daylights out of me.
So please comment.
So first of all, artificial intelligence is
just non-biological intelligence.
The idea being that anything that has to do with
information processing, you don't need to do
the information processing in
carbon atoms and neurons in brains.
We found you can also do it with silicon atoms in computers, right?
Intelligence itself in computer science is defined simply as the ability to accomplish
goals.
It doesn't matter whether the thing has any kind of subjective experience or emotions
or anything like that. We just
measure it. You know, can it beat you at chess? Can it drive a car better than you?
I mean, can it make more money than you at the stock market? That's how we
operationally define intelligence. So what's artificial general intelligence
then? Well, the AI we have so far is still pretty stupid in many ways. In many
cases it's also quite narrow, so that it can do some things and not others.
For example, most AI systems today are very passive.
What you have there in your iPad can answer questions, but it's not out doing anything in the world.
It's not making plans for what it's going to do next year.
Artificial general intelligence is defined simply
as AI that can do everything we can do, basically,
as well as we can or better.
And some of these companies have put on their website
that their goal is therefore to basically automate
all economically valuable work.
People who say there'll always be jobs
that humans can do better than machines
are simply predicting that there will never be AGI,
artificial general intelligence.
They're betting against these companies.
For many years, people thought that AGI was science fiction
and it was impossible.
We already talked about how people predicted even
that ChatGPT was impossible, or at least until 2050.
So we should eat humble pie.
And I think it's increasingly obvious to people
who work in the field that it's absolutely possible.
If we get to there, where machines can do all our jobs,
that also includes doing the job of making better AI.
So very quickly, the R&D timescale for making still better AI will shorten from the months or years it takes human researchers to maybe weeks or days.
Machines will keep just improving themselves until they bump up against the next bottleneck,
which is the laws of physics. But there is no law of physics saying you can't have something processing information vastly better than us. You're a very smart guy, but your brain
is limited by the size of your mom's birth canal, right?
And moreover, your brain runs on 20 watts of electricity,
like a light bulb.
And you can easily put millions of watts into your computer.
My laptop runs about 1 billion times faster than my brain
also.
So once machines start improving themselves, they can get as much smarter than us, I think
it's obvious, as we are smarter than snails.
And if you had the idea that some of our snails were going to control us, or they were going
to build us and then control us, you should laugh at this because it's pretty obviously
a tall order.
So what I describe now is what's known as super intelligence, which is what probably
will come not long after artificial general intelligence.
And this is exactly what Alan Turing warned about.
He was not worried that some sort of calculator was going to take over the world.
He was worried that we were going to make self-improving machines that would eventually
sort of take over.
And there's a really interesting historical analogy here, because I know you're quite
into history.
So Enrico Fermi, the famous physicist, in 1942 he built the first ever nuclear reactor
actually under the football stadium in Chicago.
It totally freaked out the world's top physicists when they learned about this.
Not because the reactor was dangerous in any way, it wasn't, you know, but because they
realized, oh my, this was the last big hurdle to building the nuclear bomb.
That was like the Turing test for nuclear bombs.
So now they knew maybe it'll take three years, four years, two years.
But it's gonna happen.
It's gonna happen. It's just a matter of engineering now.
In fact, it took three years there.
The Trinity test was in July of 1945, first bomb.
It's exactly the same with the Turing test. When that was passed now with things like GPT-4, it became clear that from here on out to building things we could actually lose control over, it's just engineering.
There's no fundamental obstacle anymore.
And instead, it's all going to come down to political will.
Are we going to want to do this or not?
Can I inject some optimism in this?
Because I'm just feeling a little gloomy in the direction we're going here.
We still have not actually had a nuclear war between the superpowers,
because there was a political will not to have one.
So we didn't have one,
even though there were some close calls.
It's absolutely not a foregone conclusion either
that we're just gonna hand over the keys to our future to some stupid alien machines because
frankly almost nobody wants it. There are actually a very small number of tech people who have actually gone on YouTube and places like that and said that we might need to be replaced by machines because they're smarter somehow and that's better.
I think of that as digital eugenics.
It's deeply nihilistic thought.
Like, I can't even imagine what these people
are actually thinking.
Like, that is, it's insane.
Isn't it by definition maybe?
I don't know.
Yeah. Yeah.
Yeah, I think they're sort of salivating over some sort
of digital master race.
I'm not a psychologist.
I'm not the one who can get into their minds exactly.
I believe in free speech.
I believe they should have the right to want what they want.
But I'm on Team Human.
I'm not on their team.
I want our children
to have the same rights, the long and meaningful life that we have. And frankly, I think you're
on my side on this.
I'm deeply on your side, but my concern—I mean, we're talking about this, right?—is
that I think the people who are actually working on this are more predisposed than average to the view that you described.
And that's the concern. And the other part is that with the nuclear bomb, there's this mass
physical devastation that happens when you launch one. This is somehow different. This is ethereal. Mass physical devastation doesn't happen until you've already sort of—ChatGPT
doesn't feel like it connects to the potential risk the way that a proto-nuclear bomb does.
The biggest difference of all is we have all seen videos of nuclear explosions.
Nobody has seen a video of anything about AGI or super intelligence because it's in the future.
So the uphill battle you and I have here, if we want to make other people pay attention to this,
is imagine we were warning about a nuclear winter in 1942.
People would be like, what are you even talking about?
And if we do get there, oh, there
can be plenty of devastation.
I mean, the most likely outcome is that it just
causes human extinction.
And we talked about suppression and memory-holing of inconvenient truths. One of the most memory-holed stories of 2023
was that the CEOs of all of these companies, Sam Altman,
Demis Hassabis, Dario Amodei, signed this statement saying,
hey, this could cause human extinction.
Let's be careful.
They said it.
It's kind of like if Oppenheimer and Einstein
had written a letter warning that nuclear weapons could
kill people, and then everybody just sort of forgot about it.
So there could be devastation, but it feels so abstract now
because it's not today's AI we're afraid of.
Today's AI can, of course, brainwash people,
manipulate people, cause power concentration, do censorship.
But it's not today's AI that we could lose control over.
It's the AI that we'll get in the future
if we let it happen.
The huge shift that's happened after we passed
the Turing test though is that this future
is no longer decades away, right?
Dario Amodei, who leads Anthropic,
one of the top American companies,
predicts that it'll happen by 2027, two years from now.
We'll have what he calls a country of geniuses in a data center, where these geniuses in the data center, these AIs, are better than Nobel laureates in physics and chemistry and math at their various tasks.
And so most likely, we'll get to the point
where this could happen during Trump's presidency.
Trump is the AI president.
It's going to be what he does during his watch, which determines how this goes.
I promised you some optimism.
So here's the optimism.
This dystopian future, almost nobody wants it except for some fringe nerds.
And that's the main thing, you know. There have been a lot of polls in the U.S.
The vast majority of Americans absolutely don't want this future.
There's not even any partisan divide on that.
Moreover, I think what's beginning to happen is people in the NatSec community
are beginning to also realize that this is a national security threat.
You know, if you're working in the US NatSec community, and your job is to keep tabs on other countries that could pose a NatSec threat, and then you
hear some guy talking about a country of geniuses in a data center, you'll think, well,
maybe I should put his country on my list
to pay attention to also.
Do I really want some San Francisco-based nerd who's had too much Red Bull to drink,
to make decisions about something that could overthrow the US government?
The painful truth that's really beginning to sink in is that we're much
closer to figuring out how to build this stuff than we are to figuring out how to control it.
But that's also the good news, because nobody in the NatSec community wants us to build
something uncontrollable.
Even if you are talking about weapons, you want controllable weapons.
We don't want uncontrollable weapons.
And this is the way the NatSec community will think in every country. Take China, for example.
I'm pretty sure Xi Jinping would like to not lose his control. So if some Chinese tech
company could overthrow him, what do you think he's going to do about that tech company?
One of the things that drives innovation is technological competition. I think that's
obvious. I think it's a major driver. You're
trying to get ahead of the other. I'm sure that's behind this whole rapid chatbot evolution that's happening right now. The difference is that the moment you let the genie out of the bottle, that thing can start building itself. We're rushing right now.
We're in an arms race. You mentioned communist China, Xi Jinping, a completely amoral
governing regime. There's no moral boundary to be crossed. You're right. The moral boundary is
control. Absolutely, the Chinese Communist Party wants to not relinquish an iota of control. That's true.
However, any technology that's outlawed or problematic, CRISPR babies would be a great
example, though technically the guy didn't go to jail for that. But you know all that stuff is
happening. Why? Because there is no moral boundary and there's a technological arms race in all these
areas that we can get ahead. And all these American companies have built AI centers,
you know, notably Microsoft and so forth. So you know that all that information has been siphoned
and used. And I'm convinced that the NatSec people or the military people are going to be like,
hey, we got to keep pushing the envelope here,
because otherwise those other guys are going to be ahead.
And then the genie comes out of the bottle,
and now it's doing its own thing.
How does that not happen?
Yeah, those are really profound questions.
So the key thing is to think about this envelope that you talk about, about what the edge of this envelope actually is. It's not so simple that with AI you either have less of it or you have more of it. Think of it as a space of different kinds of AI systems with different properties.
For example, you have a lot of intelligence in your iPad
there, but it's narrow intelligence, not very general.
For example, it can't drive.
And it doesn't have agency.
It doesn't have goals that it's out pursuing with a robotic body in the world.
At least I hope.
Yeah.
And the thing that we clearly don't know how to control actually only arises when you combine three separate capabilities: when you have not just very strong domain intelligence at what you're good at, like AlphaFold for curing cancer or whatever, but also very strong generality, so it knows how to manipulate people and produce new pandemics or whatever, and also agency, autonomy. So it's only when you put those three together that we don't know how to control it. It's also only when you put those three all together that it can do all the human jobs and maybe cause a lot of negative upheaval in our society.
But if you have only one or two of those traits, it's perfectly possible both control it
and also make sure that it just complements people
rather than replaces them.
You have autonomy and generality
in your intelligence, et cetera.
So you're not gonna be replaced by something
which lacks one of those.
So if you think of the Venn diagram,
I'm being a nerd now,
there's like a donut shape where you just pick one or two of those traits,
but not all three, that lets you kind of have the cake and eat it.
And I think that that's where we're going to end up in the donut, so to speak,
because if you want to make a lot of money on AI,
suppose you want to build the world's best self-driving car
that can drive better than any human driver. We don't have it now. What do you need for this? Of course you need autonomy, and you need domain intelligence; it better be really good at driving. But do you really need it to know how to make bioweapons? Do you need so much generality in it that your car can persuade you to vote for your least favorite politician? You probably wouldn't even want that. You'd just tell the car, hey car, can you just stick to driving, please?
And similarly, if you want to cure cancer, there's no reason to
teach a cancer-curing AI how to manipulate people or how to drive cars.
So this is my optimism: the race is going to shift towards these kinds of... towards tool AI, the kind of AI here in the donut where we know how to control it.
To me, a tool is what I can control.
I like to drive a powerful car, not an uncontrollable car, right?
Similarly, what military commander in any country is going to want an uncontrollable
drone swarm?
I think they would prefer the controllable kind, right?
So they're not going to put all this extra AGI stuff in it so it can persuade the commander
of something else.
We will see a competition of who can build the best tools, between companies, between countries. But there will be at some point a hard line drawn by the NatSec community, saying no one can make stuff that we don't know how to control.
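A toy encoding of the "three traits" framing above, purely illustrative: a system that combines strong capability, broad generality, and autonomy sits in the center of the Venn diagram that Tegmark says we don't know how to control; drop any one trait and it stays in the "donut" of tool AI. The trait flags are judgment calls, not measurable properties.

```python
# Sketch of the capability/generality/autonomy "donut" described above.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    strong_capability: bool   # very good at its domain task
    broad_generality: bool    # competent far outside that domain
    autonomy: bool            # pursues goals in the world on its own

def in_danger_zone(sys: AISystem) -> bool:
    """All three traits together = the center of the Venn diagram."""
    return sys.strong_capability and sys.broad_generality and sys.autonomy

examples = [
    AISystem("protein-folding model",  True,  False, False),
    AISystem("self-driving car",       True,  False, True),
    AISystem("general chatbot",        True,  True,  False),
    AISystem("hypothetical AGI agent", True,  True,  True),
]

for s in examples:
    zone = ("center of the Venn diagram: the zone we don't know how to control"
            if in_danger_zone(s) else "in the donut: controllable tool AI")
    print(f"{s.name}: {zone}")
```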
And you're right that people in different companies, different countries have very different views of things, but the Soviet Union-U.S. dynamic shows that you don't necessarily need to agree on everything to agree not to have a nuclear winter. You know, it's not like there was great trust between Ronald Reagan and Brezhnev,
right? But they still never nuked each other because they both realized that this was just
suicide. And they never even signed a treaty to that effect.
There was no treaty where Reagan hugged Brezhnev on a stage and said, I promise not to nuke
you.
Because the U.S. knew that the Russians viewed it as suicide, trying to nuke the U.S.
And the Russians knew that the converse was also true.
So you had this red line that nobody crossed. And I think building uncontrollable,
smarter-than-human machines is exactly
the same kind of red line.
Once it becomes clear to everybody
that that's just suicide, and no one is going to do it,
instead, what people will do is race with tools.
And then they'll come up with some basic safety standards just
to protect their own people.
Do you know that AI is currently the only industry in the US
that makes powerful stuff that has less regulation
than sandwiches?
Like, if someone has a sandwich shop in San Francisco,
before they can sell their first sandwich,
some health inspector checks out the kitchen.
And a great many other things, yes, correct.
Yeah, if someone in OpenAI has this crazy,
uncontrollable super intelligence
and wants to release it tomorrow, they can legally do so.
You know, it used to be that way also, for example, for medicines.
Have you heard of thalidomide?
Yes.
Yeah, so, you know, caused babies to be born without arms or legs, right?
If there were no FDA, I'm sure there would be all sorts of new thalidomide equivalents
released on a monthly basis in the U.S. and people wouldn't trust medicines as much and
it would be a horrible situation.
But because we've now just put some basic safety standards
in place, saying, hey, if you want to sell a new drug in the US,
first show me your clinical trial results.
And it can't just say in the application,
I feel good about this.
It has to be quantitative and nerdy.
This many percent of the people actually get the benefit.
This many people actually get this side effect,
that side effect.
The companies now have an incentive to innovate and race
to the top.
And within a couple of miles of my office,
we have one of the biggest biotech concentrations in the US.
Those companies spend so much money innovating on safety,
figuring out how to make their drugs safe.
They take it very seriously because
that's the way you get to market first, right?
If you just had even the most basic safety standards for AI, the same would happen with
the AI industry.
They would shift from spending 1% of their money on safety stuff to spending a lot more
so that they could be the first to go to market with their things.
And we would have vastly better products.
And I'm not talking about some sort of horrible innovation
stifling red tape, of which there has been much, of course.
I'm talking about some very basic stuff.
I think it would even benefit the AI industry itself.
The worst thing that could happen
to our American competitiveness
in AI is that some dudes with hubris ruin it for everyone else and cause a huge backlash.
Unfortunately, regular viewers of the show will understand that we've often criticized the regulatory systems for
development of medicines and so on. Although I do believe that there's an attempt at the FDA right now and at other health agencies to create the scenario that you just described.
Yeah.
I just want to be clear here.
Yeah.
I'm not some sort of starry-eyed person who thinks that regulation necessarily is all
great.
There's horrible examples of regulatory capture, massive overreach.
100%.
But there's also examples like thalidomide, where I don't know anyone of any political
persuasion who thinks it's good that people were allowed to sell the thalidomide.
So somewhere in the middle is the optimum amount of safety standards.
I was only saying this for the benefit of the people who are skeptical of our regulatory
system. Yeah. I'm just saying that having fewer regulations on a new species, AI that we have no idea how to control, than on sandwiches doesn't feel like we've gotten it quite right. Okay. I think we can absolutely agree on that.
Your approach is kind of: build AIs as tools, not aspire to sort of sentience and the three circles coalescing. Not too much generality in particular, I think, is the issue.
We should not aspire to build replacements for ourselves, you know, replacement humans; we should build tools that make our lives better.
And we talked about safety standards. A safety standard on a sandwich might be that it should not give you salmonella. For a safety standard for AI, the one thing I would insist goes on our list is that it's a tool.
Before you release it, your job is to persuade the FDA for AI or whatever it's called
that this is a tool that can be controlled.
And actually, it's even better.
Of course, if the thing has no agency,
if it's just a chat bot answering questions or telling
you how to cure cancer, it's very easy
to see how you can control it and have it be a tool.
But you can even have agency.
I don't lose any sleep over future self-driving cars taking over the world.
They have agency, autonomy. They really want to go from A to B safely, et cetera, et cetera.
But they lack the generality to pose a real threat, as long as we don't teach them again
how to manipulate people and make bioweapons
and all sorts of other stuff that they really
don't need to know.
And there is a lot of cool science happening, actually.
This is what we work on in my MIT research group, actually. How can you take an AI that knows too much stuff that's not necessary for the product and distill out of that an AI that just knows what it needs to know to really do the job?
That's really the question, because how do you know that you didn't inject code that is sort of quietly, outside of your purview, developing its sentience or something like this, right?
I can tell you, actually. The way you do it is,
let me make an analogy. Suppose you figure out on your own a new algorithm
for navigating a rocket to the International Space Station. How do I
know for sure that you're not secretly gonna instead crash it into the White
House or something? Well, I have no reason to doubt your motives, but the safer way to do it is, say, instead
of having you sit there and steer the rocket, I'll just ask you, okay, can you take your
new algorithm and just code it up, write a computer program in Python or some programming
language that steers a rocket and give it to me?
And then I can look at that and other people can look at it
to make sure that all it does is steer the rocket
to the space station and not into the White House.
I don't have to trust you anymore
to trust the tool you built me.
And the amazing thing is,
AIs now are getting very capable at writing code.
So what we work on in my group is,
if you have an AI that's figured out how to do your tool, you tell it to write some computer program
that is the tool and also write a proof that it's going to
actually do exactly what you want.
Now you have a guarantee, actually,
that you can run an automatic proof checker
and be fully assured that this thing actually is only
going to do what you want, even if you don't trust the maker of it.
It's a little bit like if you talk to someone who's a mathematician,
and you ask, why do you trust this theorem from this other mathematician who seems very untrustworthy, they'll answer, well, because I checked the proof.
I don't have to trust the one who wrote it.
So because AIs are getting so good at doing programming now
and even proving things, we are opening this new chapter
where we actually will have the ability
to make tools that we know are always going to remain tools.
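A toy illustration of the "check the artifact, not the maker" principle described above. His group works on AI-written programs that come with machine-checkable formal proofs; the stand-in below is much weaker, just a claimed factorization plus a tiny independent checker, but it shows the same asymmetry: the checker is far simpler to audit than the untrusted solver.

```python
# Sketch: don't trust the untrusted solver, validate the certificate it hands you.

def untrusted_solver(n: int) -> list[int]:
    """Stand-in for an AI-written component we don't want to blindly trust:
    returns the prime factors of n by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def check_certificate(n: int, factors: list[int]) -> bool:
    """The checker is much simpler than the solver, so it is easy to audit:
    multiply the claimed factors back together and compare. (A full
    factorization certificate would also check each factor is prime.)"""
    product = 1
    for f in factors:
        if f < 2:
            return False
        product *= f
    return product == n

n = 2_026_033
claimed = untrusted_solver(n)
print(claimed, "valid certificate" if check_certificate(n, claimed) else "rejected")
```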
So, many people tell me, I'm not worried about the future of AI because AI will always be a tool. That's based on hope. I say instead, let's not worry about AI, because we will just insist that AI be a tool. Companies will then innovate to always guarantee that, and then we actually create the tool AI future where we can all prosper. That's basically everything we want. When I talk to people from industry, people who work in hospitals, really anyone, people want our tools to figure out how to cure cancer, to figure out how to avoid the stupid traffic accidents, to make ourselves more strong and prosperous and so on. That's what we really want, not to build some sort of crazy digital
master race that's going to replace us.
I am absolutely convinced that this needs to be regulated. I'm not a fan of overregulation by any means, but obviously, there has to be rules
here. It's actually kind of interesting. There's a legislation that's being put into the so-called
Big Beautiful Bill that actually would prevent states from regulating AI.
That's odd.
It's something I've been thinking about
given our conversation here.
And I'm not even convinced that regulation is gonna work,
but I still think it obviously is needed
in order to at least try to keep them tools
and not this other thing that you described. Yeah, this so-called preemption legislation where they basically
usurp state rights and say for the next 10 years Texas isn't allowed to do anything at all,
even if they want to, to protect Texans against non-consensual sexual deepfakes.
But let's deal with…
It's clearly just a corporate boondoggle by the same...
We talked about power concentration before and how these companies are often trying to
shape public opinion in their favor.
So it's not surprising that they're also trying to just make sure that they can keep their
monopolies and be allowed to continue doing whatever they want without oversight.
It's not surprising that they're pushing it.
The idea of just taking away constitutional rights from states for 10 years on an important
issue I think would make some of our founding fathers just turn over in their graves.
And second, it's just so transparent that this is a corporate handout.
It's a corporate welfare boondoggle that they're asking for. If you pass something like this, the next
thing they're going to ask for is that AI companies should pay negative income tax.
But steelmanning it, what is the argument?
It's the same argument that the car companies made against seat belts and the maker of thalidomide made against why there
should be any clinical trials. Oh, you're going to destroy the innovation, destroy the
industry, etc. etc. etc. I see.
And then of course, the argument that works best in Washington also to get rid of any
regulation is to say, but China. Well, this is the arms race. And in some situations, it's compelling because you're thinking to
yourself, look, we definitely don't want them to be imposing anything here. It would be
a terrible world. Communist China, I'm talking about. But the key point is, first of all, that if Sam Altman wants to release the digital master race, and the state of Texas says, no, you can't do that, then saying that he has to release the digital master race so that Americans get destroyed by an American digital master race AI rather than a Chinese one is a pretty lame argument. We don't want
anyone to do this, including our own companies. There are, of course, very serious geopolitical
issues. Of course there are. But that does not mean that lobbyists aren't going to be
crafty and try to twist them into arguments that are very self-serving
for them. Does that make sense? It makes perfect sense. I mean, that's kind of what lobbyists do,
right? So I want to talk about something that you seem to be quite interested in,
and it's this idea of the Compton constant. Again, I guess we're going pessimistic again here. This is like a measure
of how likely we're going to get wiped out by AI, if I'm not mistaken.
This obviously has utility, but having been a modeler myself at one point,
I know that a model can only be as good as the
information you put in. How could we possibly know the variables to fit into something like this?
And does it even have any real utility because of that? I can't understand the dimensions that
would fit into it, but perhaps you can. Please tell me.
Yeah, so here's how I think about this.
The way to get a good future with AI
is just to treat the AI industry like every other industry,
with some sort of quantitative safety standards.
So if some company says we're going to release thalidomide,
and the FDA says, does it have any risks?
Does it cause birth defects?
And the company says, ah, we think the risks are small. The FDA is going to come back and say, how small exactly?
What's the percentage? Show us your clinical trial.
This is how we do it with airplanes also, nuclear reactors.
To open a nuclear reactor here in DC, you have to actually show a calculation that shows that the risk is less than one in
10,000 of meltdown per year, stuff like this.
So we would like to do the same thing for future very powerful
AI systems.
What's the probability that they're going to escape control?
And the reason we call this the Compton constant
in this very nerdy paper my MIT group released
is in honor of the physicist Compton, who
estimated before the first nuclear bomb test,
the Trinity test, that the probability of that
igniting the atmosphere and killing everyone on Earth
was less than 1 in 30,000.
And this was a very influential calculation he made.
Clearly.
It was because that number was estimated to be so small
that the US military decided to go ahead and do the Trinity
test in July of 1945.
I'm pretty sure that if you had calculated that instead
that it's 98% chance that it's going to ignite the atmosphere,
they would have postponed the test
and commissioned a much more careful calculation.
So our very modest proposal is that you should similarly
require the AI companies to actually make a calculation.
Yeah, they say the risk is small,
that they're going to lose control, but how small?
And we actually did this analysis.
We took one of the most popular approaches
pushed by the companies for how they're
going to control
something smarter than them, which
is that you have a very smart AI controlled
by a slightly dumber AI, controlled by a slightly dumber
AI, et cetera, kind of like Russian dolls
stacked inside of each other.
We found that the Compton constant was higher than 90%, a more than 90% chance that you'd just lose control over this thing. Maybe there are better methods they can come up with in the future.
But right now, I think it's fair to say
that we really have no clue how to control vastly
smarter than human machines.
And we should be honest about it.
And we should insist that if someone wants to release them into the
United States, they should first actually calculate the Compton constant, quantify it, just like
the pharma industry and the aircraft industry and everybody else has to do for their products.
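A back-of-the-envelope version of the nested-oversight ("Russian dolls") idea discussed above, not the MIT group's actual model: if each dumber-overseeing-smarter handoff only holds with some probability, the chance the whole stack holds shrinks geometrically with depth, so the loss-of-control probability can easily climb past 90%. The per-layer numbers below are made up.

```python
# Toy estimate of losing control through a stack of oversight layers.

def p_control(per_layer_success: float, layers: int) -> float:
    """If each handoff independently holds with probability p, the whole
    chain holds only if every layer does."""
    return per_layer_success ** layers

def compton_constant(per_layer_success: float, layers: int) -> float:
    """Probability of losing control = 1 minus probability every layer holds."""
    return 1.0 - p_control(per_layer_success, layers)

for p in (0.9, 0.7, 0.5):
    for k in (1, 3, 5):
        print(f"per-layer {p:.0%}, {k} layers -> "
              f"P(lose control) = {compton_constant(p, k):.0%}")
```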
I find it really interesting your portrayal of this AI, which we're trying to not create in a way. Aliens are a very
common topic for humanity for quite some time now. But you're saying, we're actually making that.
We're actually creating these things. And the way they think could very quickly become extremely unimaginable to us, how they exist, how they make decisions. Even the moral dimensions, is there even anything like that?
It's a very, very deep question. I'm going to be humble here and say that this is not
something I can give a glib answer to.
And I think if someone else does give a glib answer, I can explain why they're overconfident.
What we do know for sure is that the space of systems one could build to process information in
some way is vastly, vastly broader than the kind of information processing systems that exist in brains.
We humans have so much in common with each other.
Even fierce political opponents still love their children.
They'll have so many other very basic things in common
that they would, by default, not have in common with machines
at all.
You could imagine very alien minds that just don't give a damn about children and think of children the way they would think of another obstacle, a stone or something in the way. Maybe they don't have anything like human emotions and really just couldn't care one way or the other if all humans go extinct or not.
We have a very common heritage, all humans on this earth, that gives us this bond.
We're opening up a Pandora's box here of other kinds of minds where it would be a huge mistake
to think that there can be anything like us by default. What we see now, there
are sadly a large number of people who already have AI girlfriends, for example, which are
entirely fake. They have big googly eyes and say sexy things, but there's absolutely no
evidence that there's anyone home or that this is anything other than just total deception. And let's not fall for that.
The fact that there is such a vast space of possible AIs
we can make can also be used very much to our advantage.
We can actually figure out, let's figure out
how to make AI tools that will be loyal servants to us
and help us cure cancer and help us with all sorts
of other things.
This is not like talking about nuclear war where either something bad happens or nothing
happens.
If we get this right, it will be the most amazing positive development in the history
of humanity.
We spent most of our time on Earth here with a life expectancy of 29 years and died of famine and all sorts of
curable diseases. Even my grandfather died of a kidney infection that you could just
cure with antibiotics. So technology is empowering us to make life so much more wonderful and inspiring
for us, for future generations and so on.
And if we can successfully amplify our technology with tools, you know, that help us solve all
the problems that our ancestors have been stumped by, we can create a future that's
even more inspiring, I would say, than anything science fiction writers really
dreamt of.
Let's do that.
In the story of Icarus, he gets these wings
with which he could have done awesome stuff,
but instead he just kept obsessing
about flying to the sun.
Precisely.
Ended badly.
Artificial intelligence is giving us humans
these incredible intellectual wings with which to solve
all these problems if we just stop obsessing about trying to build AGI.
Of course, you have the Future of Life Institute and you have a whole series of proposals or
approaches and you have an action plan for AI. Give me some more
concrete thoughts about what you think should happen to keep the genie in the bottle, so to speak.
Our government is developing a new action plan for what to do about this,
which is really great. They're thinking, putting a lot of thought into it. Some very easy wins for starters would be to make sure that they get input not just
from a bunch of tech bros in San Francisco, but also from, for example, the faith community,
because I strongly believe we need a moral compass to guide us here.
And that has to be something more than the profit of some San Francisco company.
There has to be people in the room who are driven by their own inner North Star.
The conversation has to be not just about what we can do, but what we ought to do. And for that reason, I'm actually very excited that a group of evangelical leaders
has come out with this letter to the president saying that we need more leadership
here. We need to focus on what's actually good for Americans, not just for some tech CEO in San
Francisco.
We need to build these tools that make America strong and powerful and prosperous, not some
sort of dystopian replacement species.
So that's one thing.
A second thing, I think a very concrete thing, is let Texas and other states do what they
want.
Don't try to usurp their power and let some lobbyists here in DC decide everything.
A third thing is to just start treating the AI industry like all the other industries, again, by having just some basic safety standards that can start extremely basic. Like before you release something that you might lose control over, you know, calculate the... show us a calculation.
What's the chance that you're going to lose control over it and then let the government
decide whether it gets released or not?
And ultimately, I think it shouldn't be scientists like me who decide exactly what the safety
standard should be.
It should be decided by democratic means.
It takes a while to build up any kind of institutions for safety standards, so we can start with
that right now.
In other words, more regulation or—
I prefer saying safety standards because that's really what it is.
You set the bar.
Industry hates uncertainty and vague things, but if you say this is the bar, if you're
going to sell a new cancer drug, you better make sure it saves more lives than it takes.
If you're going to release new AI that's super powerful, you better show that you can control
it, that the benefits outweigh the harms. It's basic standards, and then how they get enforced exactly we can leave to the experts.
This has been an absolutely fascinating conversation.
I greatly enjoyed it.
Any final thoughts as we finish?
Just to end on a positive note again, you know, ultimately technology is not evil.
It's also not morally good. It's a tool. And I want us to put a lot of thought into how we
make sure we use this tool for good things, not for bad things. You know, we have solved that
problem successfully with tools like fire. You can use it for very good barbecues or for arson.
And we put some regulations in place and so on
so that the use of fire generally
is very positive in America.
And same for knives and same for so much else.
And we can totally do the same thing with AI.
And if we do that, we will actually be building the most inspiring future we can imagine, where seemingly incurable diseases get cured,
where we can have a future where people are healthy and wealthy
and prosperous and live incredibly inspiring lives.
It's this hope, not the fear,
which is really driving me. Max Tegmark, it's such a pleasure to have had you on.
Thank you so much. I really enjoyed this conversation.
Thank you all for joining Max Tegmark and me on this episode of American Thought Leaders.
I'm your host, Jan Jekielek.