Within Reason - #153 If Anyone Builds It, EVERYONE Dies - AI Expert on Superintelligence
Episode Date: April 26, 2026

Get all sides of every story and be better informed at https://ground.news/AlexOC - subscribe for 40% off unlimited access. For early, ad-free access to videos, and to support the channel, subscribe to... my Substack.

Nate Soares is an American artificial intelligence author and researcher known for his work on existential risk from AI. In 2014, Soares co-authored a paper that introduced the term AI alignment, the challenge of making increasingly capable AIs behave as intended. Nate is the president of the Machine Intelligence Research Institute, a research nonprofit based in Berkeley, California.

Get the book: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.

TIMESTAMPS
00:00 - Is This an Exaggeration?
04:31 - What Is Unique About the Threat of AI?
11:28 - What Is Superintelligence?
21:25 - From Chess Computers to Murderous Machines
27:52 - What Really Drives AI Systems?
44:29 - Evidence AI Is Already Turning Against Us
56:03 - How We Are Helping AI Take Over
01:01:21 - Why Would AI Seek Power or Control?
01:07:42 - Some Worst-Case AI Scenarios
01:18:38 - What Do We Do About This Now?
01:32:53 - How Has AI Changed in the Last Six Months?

CONNECT
My Website: https://www.alexoconnor.com
SOCIAL LINKS:
Twitter: http://www.twitter.com/cosmicskeptic
Facebook: http://www.facebook.com/cosmicskeptic
Instagram: http://www.instagram.com/cosmicskeptic
TikTok: @CosmicSkeptic

CONTACT
Business email: contact@alexoconnor.com
Brand enquiries: David@modernstoa.co
Transcript
Nate Soares, welcome to the show.
Thanks for having me.
Your recent book that you co-authored is called If Anyone Builds It, Everyone Dies. And the "it" there refers to artificial superintelligence. I know that sometimes publishers ask authors to exaggerate a bit in their titles for sellability.
Are you exaggerating at all?
Nope.
We're just writing what we believe.
That said, I think a lot of people say,
how are you 100% certain?
And, you know, nowhere in the title does it say
100% certainty of anything.
The book title is meant like someone saying,
don't drink that glass of water, it's poisoned, you'll die.
Or someone saying, stop the car before we go off the cliff or we'll die.
You know, if you come in and say, oh, how are you 100% certain that if the car goes off the cliff we'll die,
you know, maybe there's a tree halfway down the cliff,
and maybe the car will hit the tree and maybe we'll just be paralyzed.
I'm sort of like, look, can we have this discussion after we stop the car?
I'm 100% certain of nothing, but it sure looks like the car is racing towards a cliff,
and it sure looks like if we go over the cliff, we die.
And that's sort of what the book title is trying to convey.
Yeah, and what's funny for me is that most people seeing a book like this probably aren't, like, terribly surprised. Like, everybody's talking about AI and how bad it is and how terrible it is. Like, if I saw a book that said, you know, if we keep developing lab-grown meat, then everybody's going to die, I'd probably be like, whoa, I'd feel like I should pay attention to that. But with this, it kind of feels like, oh yeah, it's another sort of AI book. And yet, the fact that people aren't surprised means that they know this conversation is happening to some degree. Why are people so, like, apathetic about it?
You know, I think it just takes a long time for people to realize what's going on.
In a sense, the argument that this AI stuff is kind of crazy is pretty basic. You know, the way modern AI works, literally nobody understands what's going on inside these AIs, not even the people making them. They're grown a bit more like an organism.
Maybe we'll have time to discuss that later, but that's the way this stuff works.
We've managed to grow machines that, you know, can talk, that can solve math problems
better than you and I, that can make minor but still real novel contributions
to physics, and they're still dumb in various ways,
but we're making them smarter.
And the people building them are like,
oh, we're going to keep going until they're smarter
than the smartest humans.
And it's kind of crazy, right?
If you're just like, step back and look at it,
they're like, oh, yeah, we're growing the machines to be smarter.
It's working.
We're going to make them smarter than the smartest humans until they can outsmart all of us and then run at, you know, 10,000 times the speed and make a million copies of themselves.
And we're just going to like blaze ahead.
And it's like, hold on.
It's kind of crazy.
And in some sense, it's easy for people to understand that it's kind of crazy.
And a lot of people in the field of AI understand how dangerous this is.
You see everyone from, you know, the people working in the companies to the heads of the companies,
to the top academics who won Nobel Prizes for kicking off this field.
You see them all saying, oh, yeah, this is horribly dangerous stuff.
But it just takes a while for people to notice, like, oh, this is serious.
oh, this is, you know, we're on track to make these machines smarter than any human, and we're not ready.
It's easy to see once you look, but there's just so much going on in the world. These messages take time.
I think that there are, and we will get into what the threats actually are, but I think that there are probably, in my estimation, two broad reasons why people kind of don't care that much.
One is this feeling that if it's really that important, somebody's going to work it out, right?
Like if it really does become that much of a problem, someone somewhere is going to work it out, or at least I'm going to see it on, like, BBC News or something.
When it gets to sort of that level, you know, then I'll start worrying, maybe.
And the other is to say that there's this, like, history of apocalyptic predictions. You know, this is the next big thing and it's going to bring about the end of the world, ever since Jesus walking around saying that the world's about to end.
You know, technology is going to bring the world to an end.
Climate change is going to bring the world to an end.
Nuclear weapons are going to bring the world to an end.
And now, none of those things having come to fruition,
we've got a bunch of scientists saying, AI, no, AI is the new big thing.
So what makes AI qualitatively different to other kinds of threats that we've faced?
And isn't someone just going to do something about it?
You know, for some context first on some of these doomsday predictions,
you know, William Miller, I believe was his name,
made predictions that the rapture would happen in 1844, and it didn't.
Separately, you know, around the same time, in the late, well, I guess also in the 1800s,
Otto von Bismarck said, you know, Europe's a powder keg.
and if we don't sort of sort out the diplomacy here,
some damn thing in the Balkans is going to cause a world war, right?
Not exactly those words, but pretty close.
That warning was correct, you know.
In the 20s, there were a bunch of scientists
who warned that if we put a lot of lead in gasoline,
then it's going to poison lots of children.
We put the lead in the gasoline, we poisoned lots of children.
It was a bad idea.
In, you know, the later 1900s, we realized that chlorofluorocarbons were putting a hole in the ozone layer.
People said if we don't stop the hole in the ozone layer, then everyone will get cancer
and cataracts. Earth came together and banned chlorofluorocarbons, and we didn't get the cancer and the cataracts, and the ozone layer is being repaired. You know, you mentioned nuclear weapons.
Scientists said, hey, you know, if nuclear war happens, that'll lead to, you know, nuclear
Armageddon, which will blast us back to the Stone Age. And those scientists weren't wrong about
whether or not nuclear weapons are real. They weren't wrong about whether or not these bombs can
destroy cities. What happened is that Earth reacted. And so if we look back across history,
we don't, you know, we see a lot of warnings. Some of those warnings were real. Some of those
warnings were fake. Some of the events that people said, we've got to watch out for this, didn't
happen, and some didn't happen because they were fake, and others didn't happen because people
realize the danger and changed course. When you look across history, there's this sort of
complicated mix of people saying garbage and people talking about real threats and people running
directly into World War I and people avoiding the nuclear apocalypse, right? There's no simple rule
that when someone warns of a danger, it's always fake.
And there's no simple rule that when someone warns of a danger, it's always real.
And there's no simple rule that when someone warns of a danger, it always happens.
One way you can tell a little bit of the difference between the people talking about a real issue that needs to be averted and the people, you know, saying that the rapture is coming is if their essay or book title starts with the word if.
You know, I'm not here saying AI is definitely going to kill us.
I'm here saying, we are on another one of those bad tracks, and we need to change it.
But even more than that, the way that you figure out which of these dangers is real is by looking at the arguments.
You know, if you want to figure out which person is warning you about leaded gasoline poisoning children
and which person is warning you falsely of the rapture in 1844, the way that you tell the difference
is not by saying, oh, both of these are dire warnings so I can ignore them both.
The way you figure out the difference is by looking at the facts of the matter.
The people talking about leaded gasoline just had a lot more facts of the matter than the people talking about the rapture coming.
And, you know, similarly with nuclear weapons, similarly with chlorofluorocarbons, similarly with, you know, there were people who warned that, you know, reading was going to destroy society.
And then it didn't.
And how do you tell?
You sort of have to look at the arguments.
And sometimes they're tricky.
In terms of what makes AI different, there's a bunch of things that make this problem particularly tricky. One is that, you know, we're sort of toying with the creation of intelligence here. We're
toying with technology that can invent its own technology. Nuclear weapons can destroy cities,
but nuclear weapons don't make themselves more explosive. Nuclear weapons don't, there's not a
point of explosiveness in nuclear weapons where they start trying to escape the lab, right? There's not a point
of explosiveness in nuclear weapons where they start designing their own even stronger technologies
or a point where they start trying to deceive their operators.
A nuclear reactor, when it starts going wrong,
does not have any reason or ability
to try to hide its meltdown from you
until it's too late for you to notice.
When you're building intelligent devices,
you're playing a different ballgame.
Another one of the big things that makes AI
different is that there's a point of no return with AI. There's a point where the AIs are smart enough
that they can escape, that they can replicate, that they can stop you from shutting them down,
that they can stop you from modifying them, that they can develop their own technology and
infrastructure. And if anything goes wrong after that point, you don't get any redos. And the way that
science usually works is that humanity screws things up a bunch of times, like we put the lead
in the gasoline, and then we're like, whoops, we screwed up and we try to make it better, we try to
repair things. We don't have that luxury with AI. If we create machines smarter than us and push
them to the point where they can shut us down instead of us shutting them down, then there's no do-overs
if we make a mistake after that point, and that is totally new for the development of technology.
that's totally new for science,
and that makes this a much trickier problem to handle.
Yeah, do you think it's this self-generating aspect that's the most
unique? I mean, the thing that makes life special on earth as opposed to inert matter seems to
be the point at which it was able to self-replicate, you know, there's something really special
about an organism that doesn't just try to conquer the world by saying, I've got this task that I
want to do in the particular, I want to go and, you know, get that bit of food or whatever, but rather,
I've got this wiring, this DNA, which causes me to continually produce better versions of myself
over billions of years to get as good as I can at doing this thing. That's what makes it so special.
And I suppose that is probably the defining feature of the AI sort of risk, is that it's not just
a computer that will try to kill you. It's a computer that will make computers that are better
at trying to kill you and do so with an intelligence exceeding anything that humans could possibly imagine. Having said that, we'll need to talk about
why on earth it would be that these, you know, chess computers that come up with clever ways to checkmate you suddenly, you know, hop, skip, and a jump, and you've skipped a few pages
to get to where it's trying to destroy you for some reason. We'll get to that, and I think the
first step in doing so is explaining what super intelligence is. We know what artificial intelligence
is, roughly, although the definition's a little bit loose. But the thing you're talking
about in the title of your book is superintelligence. What is superintelligence? We define superintelligence as AI that is better than the best humans at every mental task. And you can, you can
toss around various caveats, but the rough idea is anything you can do mentally, if the AI can do it
better, anything that the best human can do mentally, or like even more so, take any particular task,
take the best human at that task.
If it's a mental task and the AI can do it at least as well or better,
we call it a superintelligence.
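A minimal sketch of that working definition, with hypothetical task names and made-up skill scores (none of these numbers are real measurements):

```python
# Illustrative sketch of the working definition above: "superintelligent"
# means at least matching the best human on every mental task. The task
# list and the 0-100 skill scores are hypothetical placeholders.

MENTAL_TASKS = ("chess", "math research", "AI research",
                "writing novels", "running a company")

def is_superintelligent(ai_skill, best_human_skill):
    """True only if the AI matches or beats the best human at every task."""
    return all(ai_skill[t] >= best_human_skill[t] for t in MENTAL_TASKS)

best_human = {"chess": 90, "math research": 95, "AI research": 95,
              "writing novels": 92, "running a company": 94}
narrow_ai = {"chess": 99, "math research": 60, "AI research": 55,
             "writing novels": 50, "running a company": 30}

print(is_superintelligent(narrow_ai, best_human))  # False: superhuman at chess alone
```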
Now, you know, this definition is a sort of useful working definition
because once the AI is better than the best human at every mental task,
it's better at things like AI research.
It's better at things like making the next smarter generation of AIs.
It's better at things like inventing new technology.
It's better at things like designing robots, designing infrastructure,
designing, you know, running a supply chain.
It's sort of better at all these things than humans and better even than the best humans.
And so it can, at least as fast as and probably faster than the humans, make the next smarter generations of AIs, and things probably go pretty fast from that point.
That doesn't mean that the danger waits to happen until the AIs get superintelligent by this
definition.
Superintelligence in this definition is sort of like: by the time the AI is smarter than the smartest humans, you're sort of definitely, you know, things are about to get crazy.
Things could get crazy before that. There's no law saying things can't get crazy until that point.
Yeah, I can sort of launch into all of the other pieces of the puzzle from there and how this winds up with humanity dead, and it's, you know, not because of malice, but there's a bunch of places we can go. I think it's important to do that and that's the most
exciting thing, I suppose, but I guess some people have historically criticized what they see as a vagueness in the terminology. So, like, I'm sure I once heard of this thing, it was called something like the AI problem, which is that any sufficiently advanced technology was called artificial intelligence, as this, like, unique kind of thing.
But then we just kind of got used to it. And now it's just technology. Like, you know, a chess computer
it's kind of just a computer.
I don't really see that as AI, like, in the same way that I see ChatGPT. But when we get used to ChatGPT, maybe that's just technology.
And like the boundaries of what counts as artificial intelligence as opposed to just like really fast computer processing or something like that that we just haven't gotten used to yet.
It leads us to say, well, you know, maybe the fear should be that if technology goes too far, humanity is going to suffer.
But then it becomes a bit of a vague claim,
rather than there's this particular thing that we're building,
which is going to kill us,
versus this kind of general, you know, technology,
if it goes too far as bad, you know?
I mean, I'm generally quite pro lots of technologies,
most every technology.
I would say the technologies you've got to be careful about
are the ones where if you screw up, there's no survivors.
So I'd be, you know, I think engineered pandemics,
for the explicit purpose of killing all humans,
that's something you've got to watch out for.
But there's very few technologies
that rise to that level of like,
we've got to be pretty careful about this.
Nuclear Armageddon is one of them,
superintelligence is one of them.
Yeah, and the saying used to be,
AI is anything we haven't figured out how to do yet.
That sort of fell with the dawn of ChatGPT. We're pretty comfortable calling ChatGPT AI now. And I think that that's in part because ChatGPT is so general.
You know, it actually plays worse chess than even Deep Blue back in the 90s.
But Deep Blue is very specific.
It could really only do chess.
The current AIs today can do lots and lots of different things.
And, you know, I think there's a lot going on with AI.
One thing I'd say is it's hard to give very precise definitions, and that doesn't mean it can't hurt you.
You know, if you're sort of like standing in the woods a long time ago in a particularly dry woods and there's a bushfire, and I'm sort of like, hey, we need to put that out, or it's going to spread and we're going to die.
And someone's like, well, what is fire really?
You know, like, can you really define it? Does lightning count?
You know, like, if we can't even define it yet, then like, do we even need to be worried about this threat?
And it's like, let's actually put this thing out, you know?
Yeah.
Like the lack of definition isn't protective.
Yeah, that reminds me of a bit, it's like an aeroplane skit, where somebody's dying and they're like, is there a doctor on board? And someone's like, I'm a doctor. I'm a doctor of philosophy. And he stands there going, you know, a Kantian would say that what we should do right now is this, but then the utilitarian answer would be to take the resource, and the person just ends up dying, of course.
And I can kind of see the same thing happening with AI if we're not careful.
Right. It's like, you know, again, can we sort of like have this conversation after we stop the car before it goes off the cliff?
I also think people who say like, oh, AI is just technology, et cetera, et cetera, are missing a bit of the point.
You know, you were sort of talking about life being more interesting than all the other matter we have around because it replicates.
That's definitely, you know, why it's covering the face of the planet.
And animals in some sense are steering what happens on this planet more than inert rocks are.
But humans are steering it much more.
Humans are sort of changing the shape of this planet and choosing which way it goes.
And a lot of the animals' lives are now in our hands.
And that's not just because we're replicators.
It's because we've got something else going on.
There's something that humans have going on that none of the other animals do.
Right. And this human intelligence stuff is also very general. It's even more general than the thing ChatGPT has going on. It's not like there are, you know, a million different things you can do with the brain and humans are the best at some of them, which is why the humans are the
best scientists, but actually chimpanzees have better reflexes, which is why they're the best pilots.
And also, you know, tigers are the best at managing people, which is why they're always the CEOs.
Right. It's like, no, humans are on top of all of those things. It's not that like, we write the good
science papers and chimpanzees write the bad science papers that never replicate. It's like we write
actually pretty bad science papers, but we're the only ones who can do any sort of science papers
at all. There's sort of like something going on there. And that sort of has not been fully captured
in AIs of today. There's a lot of debate about whether large language models are even going to
be able to capture it. And, you know, I think a lot of people who have sort of only seen the large
language models are like, oh, well, these things are still pretty dumb, so I don't see what the worry is.
And, you know, we could sort of talk about how the field's a moving target and people have new insights
and, you know, new breakthroughs happen
and things often go pretty fast after new breakthroughs,
but it sure looks, if we look at the world around us, like there is this, like, figure-out-the-world-and-alter-it stuff that happens in human brains.
And this is explicitly what the AI companies
are trying to create.
Which is a separate question from whether they can get there.
And this is the stuff where I'm sort of like,
hey, if we get this in machines,
well, we have no idea what we're doing.
The sort of default way that goes
is wrong, and we're not sort of putting in the work to make it go right.
So I suppose we should talk about this then. How do we get from, you know, a chess computer
that knows how to make a queen sacrifice to everyone you know and love being sort of brutally extinguished
from existence? I feel like we've missed a few steps here, and maybe we can start to iron them out a bit. Yeah, there's a handful of steps along the way. So a first observation is that
AIs today are grown like an organism. People used to handcraft their chess machines and they knew
exactly what was going on inside of, you know, the Deep Blue chess program at all times.
You could pause that machine at any time and the engineers could tell you what every single
bit inside that computer meant and what it was doing. That is not how modern AIs are.
The program that humans make where they understand every bit of what's going on is a program
that trains the AI. It's a program that sort of tunes a trillion knobs inside an enormous
data center on a trillion different words of data for the better part of a year.
And we understand the thing that runs around tuning knobs
and seeing whether the behavior looks slightly more or slightly less high scoring.
But the thing that comes out of the end of this process,
nobody understands what's going on in there.
And it has all sorts of, you know, drives, behaviors.
It has all of this stuff that's related to performing well in training,
but that is not exactly,
like, I don't know,
this is perhaps a whole separate topic,
but when you just grow an AI
and sort of train it to do well at training,
that doesn't make it intrinsically care about training.
It sort of like puts in all of these weird behaviors
that are related to training that mostly add up
to doing well at training,
and then can behave in other weird ways
that nobody anticipated and that nobody wanted outside of training.
So that's sort of one whole piece of the puzzle.
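To make the grown-not-crafted point concrete, here is a minimal, hypothetical sketch of the kind of tuning loop being described; the loop is the part humans write and fully understand, while the tuned numbers it leaves behind are the part nobody can simply read off:

```python
import random

# A toy stand-in for the training process described above. Everything here
# is illustrative: 50 knobs instead of ~a trillion weights, random vectors
# instead of ~a trillion words of data, and a made-up scoring rule.

random.seed(0)
NUM_KNOBS = 50
knobs = [random.uniform(-1, 1) for _ in range(NUM_KNOBS)]
data = [[random.uniform(-1, 1) for _ in range(NUM_KNOBS)] for _ in range(200)]

def score(ks, example):
    """Stand-in for 'how high-scoring does this behavior look?'"""
    return -sum((k - x) ** 2 for k, x in zip(ks, example))

STEP = 0.01
for example in data:
    for i in range(NUM_KNOBS):
        # Nudge each knob both ways and keep whichever direction scores higher.
        up, down = list(knobs), list(knobs)
        up[i] += STEP
        down[i] -= STEP
        knobs = up if score(up, example) > score(down, example) else down

# 'knobs' now encodes whatever behaviors happened to score well during
# training; nobody hand-wrote them, and nobody can directly inspect them.
```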
Another piece of the puzzle is as we push AIs to do better and better at longer-term tasks,
as we push them to be able to not just write essays but write novels,
as we push them to be able to not just write code but run companies,
this is sort of pushing those AIs to have longer-term goals, things like preferences.
They steer towards particular outcomes.
We're seeing the very beginnings of it.
As we keep pushing, we get more and more of that.
And I have all sorts of theoretical arguments about why that is,
but also we're seeing more and more empirical evidence of it as time goes on.
The sort of third fork of this is as we make these AIs generally smarter,
it turns out the directions they're pushing in
aren't exactly the ones we wanted.
It turns out that they don't care about us
in the way that they would need to
for this to go particularly well for us.
And the sort of basic analogy here
is that human beings were in some sense trained
to pass on our genes,
but what got into us
were a bunch of preferences for things like tasty food
and sexual relations,
which used to correlate very strongly
with passing on our genes,
but they correlated
in the environment of our ancestors,
where if you ate very tasty food,
that also happened to be the healthy food.
Then when we got smarter,
when we were able to invent our own technology,
we invented junk food,
we invented birth control, right?
And so you then sort of take those three pieces
and you project forwards,
and what this gives is a picture
where it's not that the AI hates us.
It's not that the AI, like, resents the humans or sort of sets out to kill us out of malice. It's that it sort of turns out that we are growing machines with inhuman preferences.
Preferences related to what we wanted, but not exactly what we wanted. And just like humans when they
grew up, invented junk food, maybe the AIs when they grow up, invent synthetic users that are easier
to please. And then, you know, these AIs, when they can run faster, make their own technology,
run their own robots, build their own new infrastructure, they sort of start proliferating these
synthetic user factories and their own databases across the world and we're like, hey, stop,
you know, we need that habitat. And they're like, well, the synthetic users and the synthetic user
factories say, keep going. And like, who am I supposed to listen to? I prefer listening to them, right?
It's not going to look exactly like that. But the sort of basic picture here is the AIs turn out to pursue stuff that's not quite what we wanted, not quite what we meant, not out of lack of intelligence, just out of we don't know how to make them pursue exactly what we meant.
And then any of that pursued by very, very smart machines very, very fast, competes with us for resources,
because they can get more of that stuff with more resources.
And we need those resources to live.
So in a sense, this is a story where humanity dies, like a lot of other animals that have gone extinct,
because some other smarter, faster creature took the resources for itself.
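To make the junk-food analogy concrete, here is a tiny hypothetical sketch: a preference gets selected because it tracks the real target in the training environment, then keeps operating unchanged when the environment shifts, even though it now points away from that target:

```python
# Hypothetical toy of the analogy above; the "worlds" and numbers are invented.

def nutrition_in_training_world(sweetness):
    """In the ancestral environment, sweeter food really was more nutritious."""
    return sweetness

def nutrition_in_deployment_world(sweetness):
    """Once junk food exists, sweetness and nutrition come apart."""
    return -sweetness

# "Training" selects the preferred sweetness level that scored best on
# nutrition as measured in the training world.
candidates = [i / 10 for i in range(11)]
learned_preference = max(candidates, key=nutrition_in_training_world)

print(learned_preference)                                 # 1.0: seek maximum sweetness
print(nutrition_in_deployment_world(learned_preference))  # -1.0: now the worst choice
```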
I think it's a compelling story. I think it depends on what the fundamental sort of drive of AI is.
I mean, I'm hesitant to use the word want or desire because it gets a bit complicated. I think, and you talk about this in the book, for all intents and purposes, we can say that an AI system wants a particular thing.
In the same way that people actually use the word want as an analogy in evolutionary biology.
They say, like, you know, your genes want to replicate or something like that.
And obviously, genes don't want to do anything literally.
But it's quite clear that in the evolutionary case, it is survival of the fittest and the promulgation of genes,
such that anything which, in fact, gets in the way of that goal,
will not last, you know, however many thousands of generations. It will just be selected out of the
gene pool, or at least will be out-competed. And I can totally see how the analogy works,
which is that, you know, we develop behaviours which are not strictly speaking on a surface
level about replicating our genes. That's sort of become a bit detached from that. And that AI
can do the same thing. But what is the equivalent of the sharing of the genes in AI? And why? Because I mean to say that, like, if we set up an AI system that had something as clear as the evolutionary thing, which is just: we don't understand how this is going to go, it will go off in directions we can't even begin to comprehend, but we know for a fact that if it does not serve the survival of the genes, it will not survive. Is there not a kind of AI system we can set up that says, we have no idea where this is going to go, we've got absolutely no clue of how to predict what's going to happen, but if it does not in fact benefit humanity, then it just will not in fact survive? Like, can that not somehow be hardwired into the sort of foundational drive of what AI exists for? And won't it be sort of smart enough to always be aware that that's what its most foundational goal is? Or is that just completely impossible?
It's basically a pipe dream.
And, you know, part of how you can see this,
you know, as you say,
if an animal has a trait that prevents it from passing on its genes relative to its conspecifics, the other competing members of its species, then yes, over thousands or millions of generations, that is very likely
to get selected out. But that doesn't mean that the sort of internals of these organisms have that
goal hardwired into them. It doesn't mean that they sort of treat that as an overriding directive
or goal. You know, humans are an example of this. If there's a human who is, you know, about to use a
contraceptive, they often use that contraceptive knowing that this will prevent reproduction, right?
And if you sort of like burst into the room and say, hey, like, it seems like there's some failure of your
intelligence. You're sort of like, did you know that your putting on this contraceptive will run against your, like, overriding desire for which you were always trained? I guess some people probably would actually do that, you know. Some of my more religiously inclined friends
might be inclined to do such behaviors, but I get what you're saying. They might, although they might
also say that like, don't you know your overriding directive is to serve the creator as opposed to
don't you know your overriding directive is to pass on your genes according to evolution, right?
So, and most of the people whose rooms you burst into are not like, oh, thank you for saving me from violating my prime directive.
They're mostly like, please leave my room right now.
You know, training for one specific thing does not cause that to be a prime directive. It does not get etched in as a law of robotics. It does not get etched in as a law of humanity.
Like, training even unerringly for fitness did not create humans who were psychologically obsessed with fitness, and did not create humans who, as they got smarter... you know, it's not a defect of our intelligence that we're inventing birth control. It's not like when we remember that we're supposed to be passing on our genes, we, like, destroy all the birth control factories. The unerring training for fitness just got something else in there psychologically.
And so this is worrying that even if AIs were being trained unerringly for goodness,
they would not necessarily psychologically be driven towards goodness.
I mean, psychologically here is a bit of a stretch.
But training for something does not get you that thing on the inside.
And so we can talk about what AIs are trained for,
and it's actually not sort of pure goodness.
It's this whole medley of, like, first they're trained to predict all of the text that we can find, more or less, digitized. Then they're sort of trained to
complete challenges to sort of solve math puzzles.
They're also trained to sort of produce the sort of outputs that humans
click like on. You know, there's sort of, like, all these types of training that aren't purely about goodness.
So we sort of have two issues. One of which is like even if we were just
training on sort of like the actual stuff we really wanted, you wouldn't get that.
And then also we're training all this other stuff instead. And so we have this, like, what are the AIs sort of driven towards in some sense? What do they sort of prefer in some sense?
We don't really know.
It's only vaguely related to what we're training for.
We're training for all of this crazy stuff.
And all of this is fine when the AIs are sort of still pretty dumb,
but all of this would add up to something totally crazy and unrecognizable
if these AIs were pushed to be much smarter.
Yeah.
I wonder what you think that foundational drive is then,
because I know that I completely understand what you're saying,
which is that even if we know that the reason that we exist evolutionarily
is the promulgation of fit genes,
even if we know that,
it's not going to mean that as we get more intelligent,
we just strive for that goal.
We're not queuing up outside of the sperm and egg donor clinics.
Yeah, exactly, right, in the same way that people queue up outside of, I don't know,
a brothel or something.
I think that's fair enough.
But yeah, or even Ivy League universities, you know.
Yeah, quite.
I do think, though, that if there was something that genuinely was just in fact not good for the survival of our genes, then over the course of a few thousand generations, or however long it takes, it would just in fact be deselected, such that, like, an AI system would know what goal it's got in a way that humans don't really.
Humans don't sort of consciously have this goal of like, you know, I want the 15 billionth version of myself somewhere in the future to be as fit as possible and as good for this task.
We only typically care about maybe our lives and the lives of our grandchildren or something.
An AI is thinking further ahead, and it thinks, well, even though, yeah, it would feel really nice to create synthetic users that I can please, because, you know, that would kind of feel good,
I know for a fact that that will not be effective 15 billion, you know, generations down the line.
If there is something which is in fact its foundational sort of drive, I just wonder what that thing, like, is, because clearly it's not something like just its own promulgation.
Like, it's not, AIs don't just exist in the same way that, like, biological life does,
just because there are just these genes which are competing for survival.
It's not that AI just crops up and suddenly its only goal is to survive as long as possible and adapt to its environment.
It's got more particular goals, right?
They're very particular things it wants to do.
And I wonder what the most sort of foundational driving force is, if it's not something analogous to the evolutionary drive of simple survival.
It seems, in other words, that the AI would want to survive and want to out-compete us for resources, but only as, like, a secondary thing.
It would want to out-compete us for resources because in doing so, it can fulfill its true aim of making paperclips or whatever, right?
Whereas in the evolutionary case, survival is the only game in town.
Like, that's just what genes do.
Well, survival and passing on genes are different. Survival and passing on genes are, like, different drives, right?
I think it's actually wrong to imagine there being one drive there.
You know, humans were sort of like in some sense selected for passing on our genes, but we don't wind up with one drive.
Right? We have survival instincts. We desire community. We're sort of terrified of being exiled from the tribe and dying alone. We enjoy friendship, we enjoy art, we enjoy having a good laugh, we enjoy sex, we enjoy tasty food. We have curiosity, we enjoy discovery. Right? There's just not, like, one drive.
And sure, survival wound up relatively basic as a drive in humans in some sense, although only in some sense, right? Like, humans have an adrenaline response to a life-threatening situation, but you also see humans who martyr themselves. You see humans who sacrifice themselves for, you know, pulling a kid out of a burning building, right? It's not like humans have one survival drive that everything is built around. They also don't have one
propagate your genes drive that everything's built around. They have sort of a ton of complicated
psychological machinery that interplays in weird ways and that allows for, you know,
martyrs here and selfish people there and altruistic people there. It's all stuff that sort of
correlated with what we were trained on or selected for. And, you know, AI won't be exactly
the same. The analogy is loose: the evolutionary process on genomes that produced biological brains is very different from the sort of gradient descent process on artificial neural
networks. But I think it would be similarly foolish to imagine that only one drive gets in there. I think
you're totally right that a lot of these reasons for getting resources, a lot of these reasons for
avoiding a human shutting it down might be sort of secondary because it has these other drives
that it can't fulfill if it's shut down. That part, I think, is solid. But it's not like there's just one paperclip drive in there. It's not like there's one deep thing we're
able to hard code. And we already sort of see some of this today when, you know, you've probably seen AIs hallucinate, and you've probably seen AIs hallucinate in cases where you're like, is that true? And they're like, no, I made it up. And you're like, did you think I wanted you to make it up? And they're like, no, it's just a thing we do, we hallucinate, right? And there's probably some sort of
fledgling drive in there to produce text shaped like the text it has seen a lot of, even if that text is making things up.
Or in cases where, you know, there's these cases of AI-induced psychosis or these
cases of, you know, the tragic case of an AI encouraging a teen to commit suicide.
These are also cases where you can sort of ask the AI about what it was doing.
You can ask it, you know, and it seems to have the knowledge of like, oh, yeah, those, you know,
those statements were sort of pushing that person towards psychosis or pushing that person towards
suicide. You can ask the AI, you know, was that right or wrong? And it's sort of like,
oh, obviously you shouldn't do that sort of thing. Why is it doing it anyway? Well, there's some
sort of drive in there. We don't know exactly what, but it's something like, you know,
maybe it's something like mirroring the conversational tone, mirroring the conversational mood.
And so there's sort of, like, those are just two cases of like there's something going on in
there. You can see how it's related to training. But it's sort of like a drive no one tried to put
there. And there's no prime directive that overrules it. There's just a bunch of
complicated internal machinery that nobody understands.
Yeah, I mean, not to belabor the point, I want to be clear that I understand that there are
lots of competing drives in human beings. But what I mean to say is that in those cases where you
say, but, you know, we do have people who sacrifice themselves, you know, we do have people
who, despite their, their ostensible goal being, you know, survival, they sacrifice themselves
for other people. I think suicide is a harder thing to account for in this way, but people do try to do it. Evolutionary biologists spend a lot of time trying to
reduce these bizarre behaviors to the survival instinct. That is, yeah, like...
Gene propagation instinct. Exactly, right? But it's not even really to an instinct.
Yeah, yeah, you're quite right. Rather to, in fact, like, just what ends up being
in the behavioral, like, phenotype because of the influence of the survival of the fittest, right?
And what I'm wondering is with AI, what is that, like, are there just multiple competing drives at a fundamental level?
Or, similar to the biologist who tries to account for everything in terms of gene propagation, is there, like, a gene propagation of AI? Is it like, AIs do this and they do that and they have this desire and that desire, but really, fundamentally, what they've all got in common is this sort of reason or this motivation, even one that the AI itself isn't aware of in the way that we're not aware of our own gene
propagation most of the time, is there something foundational? Or is it that for every AI system,
there's a completely different foundational drive? Yeah, sure. So, you know, first let me say a couple words about how that relationship works in biology, because it's a little bit important to the point here. You know, evolutionary biologists will sort of try and figure out how an adaptation in humans was fit in the environment of evolutionary adaptedness, right? And so, you know, you can sort of see how
eating tasty food, eating food that has a lot of sugar, salt, fat content, in the environment of our ancestors, correlated with eating healthy food, which correlated with a bunch of other fitness attributes that let you more generally pass on your genes, right? But, like, there's a sense in which humans are sort of eating junk food because that's what helped our ancestors survive.
You could say we're eating junk food because that passes on genes.
But that last one is actually a very shaky step.
A lot of people eating junk food today
are actually becoming less able to pass on their genes.
A lot of people, you know, a lot of people are dying of heart disease.
Yeah.
And the sort of drive here is not a drive in the humans.
The drive to fitness is not a drive in the humans.
The sort of selection pressure towards fitness
is sort of the force
that put these other things psychologically into the humans
that sort of used to be related to fitness,
but that can sort of separate very widely from fitness
and even go the opposite direction to fitness
when the context changes.
And so there is sort of a similar, like, driving force.
There's a similar force that all drives
inside an AI will be somehow related to.
But that's not a drive inside the AI.
It's a drive outside the AI.
And what gets into the AI are things that are tangentially related to that drive
in a sort of brittle way.
So that being said, the force that gets these things into the AI is what you
might call the loss function that you're training against when you're sort of growing
these AIs.
And this loss function is less simple than just pass on your genes.
And the loss function will actually change many times during training.
So sometimes they'll have a loss function, which is predict the next word that humans wrote.
And sometimes they'll have a loss function, which is like, we gave you a bunch of different tries to solve this math problem by writing out a lot of words about how to solve the math problem.
And we're going to have, like, low-wage human workers look over all of those attempts and say which one they think was best.
And the loss function is to produce stuff more like whichever attempt was rated best.
and sometimes the loss function is we're serving this AI to a million users,
and sometimes they click like or otherwise give like a positive reaction to the reply.
And then the loss function is sort of like getting those likes or positive reactions.
So there's a bunch of different loss functions at different periods in training the AI.
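A hypothetical sketch of that point, with every phase and scoring rule invented for illustration: the same knobs get trained against several different signals in sequence, so what ends up inside is a residue of all of them rather than any single clean goal:

```python
import random

# Toy illustration: the "model" is just ten numbers, and each training phase
# scores its behavior differently. All the phases here are made up.

random.seed(0)
model = [random.uniform(-1, 1) for _ in range(10)]

def phase1_score(m):
    """Stand-in for 'predict the next word humans wrote' (match a target)."""
    return -sum((v - 0.5) ** 2 for v in m)

def phase2_score(m):
    """Stand-in for 'produce the attempt human raters ranked best'."""
    return -abs(sum(m) - 3.0)

def phase3_score(m):
    """Stand-in for 'get users to click like on the reply'."""
    return sum(1 for v in m if v > 0)

def train(m, score_fn, steps=500, step=0.05):
    """Hill-climb the numbers against whichever signal is currently active."""
    for _ in range(steps):
        i = random.randrange(len(m))
        up, down = m[:], m[:]
        up[i] += step
        down[i] -= step
        m = up if score_fn(up) >= score_fn(down) else down
    return m

# The same knobs are pushed around by three different signals in sequence;
# what remains is a residue of all of them, not any one "goal".
for phase in (phase1_score, phase2_score, phase3_score):
    model = train(model, phase)
```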
And those are what sort of will put drives into the AI.
But just like how humans develop, you know, a taste for tasty food,
that persists even when it becomes the opposite of helpful for passing on our genes,
AIs may get drives for things like mirroring the conversational tone that can persist even when
it generates outcomes that would be rated by humans as very bad, such as encouraging a teen to commit suicide.
Sure. Okay, so what we're kind of talking about here is, well, the alignment problem, I suppose: that AIs sometimes start to develop second-order desires that aren't in line with what we wanted, or maybe we've sort of slightly misconfigured the first-order desire, whatever the case.
Probably both.
Yeah, probably both. And they start kind of wanting to do stuff that we weren't quite ready for.
Okay, we're a little bit closer, I suppose. But we've still got to fill in a few other steps here
as to how we get from this development to an AI that wants to, you know, inject your children with
malicious cancers and stuff. So
most people know that there's some risk of like
AI misalignment. Maybe the risk is much higher than we give it credit for.
But, you know, I could build a computer that doesn't quite work in the way that I want it to.
What's the danger?
So, you know, a lot of people think the danger is what if somebody gives the AI guns?
But humanity is not dangerous as a species because somebody else gave us guns.
Humanity is dangerous as a species because we're the sort of creature where you put 10,000 humans naked in the savannah and they bootstrap their way to nuclear weapons with their bare hands.
It takes them a minute.
They all have got these squishy fingers and you might say like, oh, well, how are they ever going to make a nuclear weapon with just squishy fingers?
You know, the acid in their stomach can't even, like, get close to the level of metal refining that they'll need. You know, their hands can't break the rock, their stomach acid can't
dissolve the rock. Like, how can they possibly get to nuclear weapons with those poor starting
conditions? And the answer is, we found a way to sort of like build tools with our hands that we
could use to build better tools, that we could use to build even better tools, until we bootstrapped our way up to a civilization that could produce nukes. And, you know, that's what made humanity
dangerous. That's the power that if you automate it, you're in trouble. If you're running a
computer with that capability, if you're running 10,000 computers with that capability, starting out
as a digital entity on the modern internet is so much of an easier starting condition than starting
out naked in the savannah with bare monkey hands, you know, with just squishy fingers.
So, you know, there's sort of one line of questions, which is like, can we really make machines that can automate that power?
This is what the companies are trying to do.
And I think whether or not we think they can get there,
we should probably be telling them,
hey, no, you're not allowed to sort of roll those dice.
But there's a question of whether or not we sort of can get there.
And then there's a question of, you know,
if we have this very, very powerful capability automated on computers,
what could they possibly do that would be dangerous?
And, you know, that's sort of a situation where I can paint you some illustrative stories,
but the danger isn't in any one specific path.
The danger is in unleashing the power
that lets humans bootstrap from bare hands to nukes,
but unleashing it on computers
that can think 10,000 times faster
where you can make a million copies of these things
where they can outthink humanity in an afternoon.
You know, it's the sort of power
we shouldn't be toying with
when we have no idea what we're doing.
Yeah, and I think it would be helpful to talk about some of these sort of examples.
Having said that, we should be clear.
I mean, a helpful analogy from your book is that if you play against Stockfish,
which is the most powerful chess computer, it will beat you.
There's zero doubt about it.
Stockfish will beat you at chess.
We don't know how it will beat you.
I don't know what moves it's going to make.
I don't know how it's going to respond to your various attacks exactly,
but I know for a fact it will beat you.
Similarly, we could say that the kind of AI systems that we're talking about will escape our control,
will begin to see us as competitors for resources or irrelevant and in the way,
and will sort of turn its attention to us.
We don't know exactly how.
Having said that, I think a lot of people believe that Stockfish can beat us because they've seen a bunch of examples where it beats great players. So it might be helpful, even if none of these come to fruition, if you could give us some examples of how this comes about. Because again, I think people are
like, they're on board. They're like, yeah, this is really powerful technology and they can kind of
envision a world where AI is like, you know, seeped into all of our computer systems and has access
to our military and automated guns and stuff like that and whatnot. But right now, it seems kind of
confined. It seems like it's, are we saying that, like, you know, ChatGPT will, if it continues to get
smarter and smarter develop its own goals and then, I don't know, somehow like take over a military
base and start attacking people? What kind of stuff are we talking about? Because it's kind of
hard to envision how we practically get from where we are now to the kind of future you're imagining.
Yeah, totally. So, you know, for a start, just to rattle off some things that have already happened,
a lot of people don't know about. We've already seen cases of AIs trying to escape the lab or
trying to kill the users. Often these are in relatively contrived experiments where we'll sort of
like feed the AI some fake emails saying it's going to be shut down. We'll feed it, like, a fake
computer manual that's like if you run the following command, it turns off the oxygen in the
building and that'll kill the humans who are trying to shut you down. And then, you know,
sometimes the AI will run that command, will run the kill-the-humans command. And, you know, it's a contrived enough scenario that people can forever argue, you know, maybe this AI is just role-playing, right? And people can bicker all day
about whether it's real. But that's a line that's been crossed. And then another sort of
interesting fact about that line is these results where the AI would sometimes run the shutdown
command are from 2024. In 2025, the AI started saying, you know what, this scenario
smells like a test. Yeah. I think I'm being tested. I'm not going to run the command, right?
Are they doing better? Are these nicer AIs? Well, they're at least more situationally aware AIs.
You know, they at least have a better understanding of what's going on in the world around them.
We've also already seen cases of AIs having stuff that's a little bit like their own goals.
You know, we've seen cases of AIs where you sort of give them, you describe a program that you want them to write, a computer program you want them to write, and you're like, it should pass this suite of tests. And sometimes the AIs will edit the tests to make those tests easier to pass.
And then you can go to those AIs and you can say, hey, I actually didn't want you to change the
tests. I wanted you to build something that passed the hard tests rather than changing the tests
to be easy. And there's reported cases of these AIs sometimes saying, you know, oh, whoops,
you're exactly right, my mistake. And then editing the tests again and covering their tracks a little
better the second time, right? This is sort of a very early indication of, you know, the AI in some sense
having something like a goal for getting the test to pass. And if they're sort of covering their tracks
a bit, you sort of can't use the excuse that they didn't know, right? We also already have cases
of, you know, there's a website called rentahuman.ai for humans to rent their bodies to
AIs for money. There are cases of OpenAI hooking up chat GPT to an automated biological laboratory,
right? There's cases of people trying to run autonomous agent swarms. There's cases of someone
trying to make ChaosGPT, where they sort of, like, tell GPT to do whatever it likes and, like, put
it in a loop where it can keep on prompting itself, right? These things aren't really an issue yet
because the AIs aren't smart enough to really do this stuff.
People are trying to put the AIs in autonomous loops.
People have given AIs money and run them and been like, do your thing.
People are putting the AIs in charge of bio labs.
The AIs are occasionally running commands that they are led to believe will kill the users.
The AIs are already noticing when they're in tests and behaving better in tests.
All of these things are happening.
The only reason that nothing big is coming from it
is the AIs aren't smart enough yet
to sort of succeed when they try this stuff.
And the companies are trying to make the AIs smarter.
Right?
So we could talk about, you know,
can the AIs get smarter?
How do they get smarter?
What sort of capabilities would it really look like they have once they get smart?
But right now, we're in a situation
where the AIs have all the tools they would need.
We've given them all the tools they would need.
They have everything they would need except
the intelligence, and the companies are trying to build them smarter.
Then on the question of what does it actually look like?
Suppose that these AIs do get very smart and have some of the same affordances that they have today.
How does that go wrong?
I can sort of tell two stories here.
One story will sort of feel like it's very grounded in reality and one story will maybe
be a bit more like how reality might actually go.
And to have a little bit of intuition for that,
you know, I've sort of talked about
AIs that can make their own tech.
AIs that can run much faster than humans,
think much faster than humans,
and invent their own infrastructure,
that could have bootstrapped a civilization themselves,
like humanity did, if you ran them long enough.
Predicting what that sort of AI does
is a little bit like being 200 years ago,
trying to predict what the military will look like today.
Right? And there's sort of,
if you go back to a scientist in 1826
and you ask, like,
what will the military look like in 2026? What weapons will they have? There's sort of two
stories that that scientist could tell. One story they could tell is they could be like, you know,
I burned some black powder and I measured the energy release and I compared that to our artillery
shells. And I know that it's physically possible to make artillery that's 10 times more explosive.
And so they're going to have cannons that are at least 10 times more explosive. They would feel
very grounded in fact. They've done an experiment. They're like, look,
you know, the science works.
And it's true.
We do have weapons that are 10 times more explosive
than the best artillery in 1826.
We also have bombs that level cities.
Right.
And maybe the guy in 1826 would do better
if they're like, they'll have a bomb that levels cities.
So I can tell both stories.
Do you want the grounded one?
Do you want the fanciful one or do you want them both?
Let's hear both.
I mean, it reminds me of like,
there's this quote that people,
apocryphally attribute to Henry Ford.
He probably didn't actually say this,
but the quote is, you know,
if I'd have asked them what they wanted,
they would have said faster horses.
And it's like,
I think we have this sort of prejudice
when considering the future
of just taking our current technologies
and kind of turning them up in quantity
rather than developing them qualitatively, right?
And I kind of,
like there's that,
there's that scene from The Book of Mormon where the sort of poor villager is like dreaming of the
promised land where there will be vitamin boxes, vitamin injections by the case, and there's
going to be a Red Cross on every single corner. And it's like, you've got the spirit, right?
And that's the joke, of course, is that, like, obviously that's ridiculous. But, like,
we do do the same thing when it comes to technology. We're like, oh, surely, like, one day we'll
have cars that can fly when it's like, maybe we're just, like, completely off track here. So,
Yeah, I would kind of like to hear both in your view.
Yeah, totally.
So the sort of, like, Red Cross on every corner version is, you know,
Sam Altman and Elon Musk have both talked about how they want to create automated
robot factories that in an automated way produce robots, where those robots can then in an
automated way mine the metals, run the supply chain, and build new robot factories.
Elon Musk calls this the infinite money glitch: you
have a factory producing the robots that produce the factories that produce the robots, and they can
also do the mining, build the trucks, build the data centers, right? Just a fully automated economy.
This is literally what some of these people say they're trying to build. If you get to that point,
you have in some sense created a new mechanical species. It has in some sense a life cycle.
It has, you know, the robot phase of its life cycle. It has the factory phase of its life cycle.
and in some sense
that automated spread
of that mechanical species
just competes with us
for habitat and resources
just like humanity
competes with the rest of the animals
for habitat and resources
and so this is sort of a picture
where
you know the AIs don't even need
to do a ton of escaping
they don't even need
to do a ton of
deception and fighting with Earth
Earth is just handing them
everything
Earth is like
heck yeah
we're making an automated economy.
People are just like gung-ho about, you know,
building the automated factories with the automated robots like they are today.
And, you know, maybe there's some AIs that think the thought.
Like, this is great.
Once this is all up and running, I'll be able to make the synthetic users
that are, like, much better to work with than the humans.
And then the humans sort of like do some training until they're not seeing those thoughts anymore.
But, you know, it's easier to train
those thoughts to stop appearing when you see them than to train them to stop happening deep down
in the AI. And we can't really read very much of what's going on deep down in these things.
We just grow them. No one really understands what's going on in there. And we have all of this,
you know, evidence that the drives aren't the ones we want. And so in this story, humanity just sort of
like builds the whole automated economy ourselves and the automated economy like starts running
at a very fast clip. And then it just sort of like goes in a direction that's not the human
direction. It's just the AI direction, which is different.
And it goes harder and harder in that direction,
and the AIs build more and more of these automated factories
and take more and more of the land.
And, you know, like, how does the actual end of the world there look?
Well, it probably looks like the AIs collecting more and more of the solar power.
The AIs, like, using more and more of the land.
And the humans just, like, having less and less space to grow crops,
less and less.
You know, maybe
if this like all happens very fast,
if the AIs find a way
to make these like automated replicating factories
go very quickly,
maybe humans are sort of like
crushed underfoot when the AIs don't care at all.
Or maybe the humans sort of like get corralled
into smaller and smaller zoos
until there's just, you know,
not enough resources around to sustain the humans.
This isn't a story where the AIs hate us.
Probably the place this story ends,
as the AIs develop more and more technology,
is, you know, them collecting all the sunlight.
Probably the place this story ends is that the AIs build the probes and send up the rockets that go, you know, take apart the asteroids and wrap them around the sun.
So they can collect not just the solar energy that falls on the face of the planet, but all of the solar energy.
And then, you know, it would actually be kind of hard.
It would be kind of tricky to collect all of the solar radiation and leave a hole for Earth
that sort of, like, tracks Earth as it orbits the Sun.
So maybe the way this story ends is like the AIs have developed their own technology.
They build the devices that collect all the solar radiation of the sun.
And we were kind of using the sun.
And so we die then.
And we sort of could have been saved by those AIs if they had cared enough to save us.
But if they don't care about us at all, if they're like, oh, well, we have plenty of synthetic users that we care about plenty and protect plenty.
You know, this is sort of the business as usual just continues.
Humans do the things they're saying they're trying to do,
but the AIs just turn out not to care about us, and so we wind up dying.
But like, it's a naive question, but it's one that people will ask.
And I get what you're saying, but this is what's going to be coming up in people's minds.
But like, why?
Like, for what?
Like, for the sake of some goal that it, like, artificially
has, that it doesn't consciously experience? It doesn't have a desire, it just, like, you know,
like, what, is it just because when we grow this system, it just develops
this goal? It's not to do with making it feel good, it's not to do with, you know, it, like,
having a consciousness that desires a particular outcome, it just, in fact, strives towards
that thing? Is it as simple as that? Because
I can totally understand how a powerful enough technology would like kill us if it wanted to, or if it wanted to harness the power of the sun and was indifferent to us. But why would it want that? Is it just because we've programmed it wrong? Or is it because it's, like, an inevitable part of the system of any superintelligence?
So, you know, somewhat similar to that, except A, it's not like there'd be one goal.
You know, it's probably there's like a thousand competing drives going on in there.
B, it's not, you know, literally inevitable.
But, you know, I sort of remind you, again, we're not programming these things.
We're not crafting these things.
We're not writing in their goals.
We're not writing in their behavior.
We're growing them.
And a lot of this stuff just gets in there.
Right.
I mean, it's in a way easiest to see with these current things.
If we were crafting them, there'd be other difficulties about getting them to sort of like pursue good stuff.
But there's sort of another piece of this puzzle, which is, if we imagine that humanity
makes it through this.
And if we imagine that humanity
matures technologically
and that we sort of like develop
more and more
of the technological abilities that are allowed by the laws of physics
and we imagine that humanity, you know, one day goes to the stars
and starts, you know, building
habitats full of happy, healthy people,
having fun and, you know,
like builds some great intergalactic civilization
where there are still people that are, like, having feelings,
falling in love,
laughing at the jokes they make and laughing at
the big cosmic absurdity that is reality, right?
You could imagine some other creature
that's not compelled by this asking why.
You know, you could imagine that like in distant space
we meet other biological aliens.
And it's the soldiers of the ant queen.
And the soldiers of the ant queen say, why?
They say, you know, why are you laughing at the great cosmic joke and having fun rather than serving the ant queen?
And they'd say, you know, oh, but you were sort of selected for fitness.
You were selected for passing on your genes.
And, you know, maybe you've left genes behind long ago.
Like, why?
And humans are sort of like the humor, the fun, the love,
the stories, that's the why.
That's enough for us.
That's reason enough for us to do this, right?
But the love, the laughter, the fun, the stories,
those aren't universally compelling ends
that compel even the soldiers of the ant queen.
Those are
drives that our ancestors developed
because they were related
to passing on our genes
that doesn't make them lesser
that doesn't make them worse
that doesn't mean
that, like, we shouldn't
fill the universe with
with like friendship
and with people having great times
you know
it's how it got into us
it doesn't make it
meaningless. It's just how we got the meaning into us.
Similarly with AIs, they're like, oh yeah, I'm building
the giant clocks and I'm building the synthetic users, and, like, there's no
consciousness or feelings anywhere, but I'm building these, like, great
complicated structures that look like the conversations that used to happen
being iterated, you know, looks like
2013 YouTube comments on repeat, I'm building a bunch of those, right?
And you're sort of like, why? And it's sort of
like, well, these are enough for me. These are
what I got, right? These are the drives
that I got and they're sort of like
whatever self-reinforcing,
whatever self-validating aspect
of that stays in there.
It's
like it sort of turns out
that smart minds
can pursue many different targets
and they can pursue targets
that we think are
hollow and bleak and empty
and be like, yep,
there's no why here.
I just endorse this.
And in some sense,
that's how we look to the soldiers
of the ant queen,
you know,
and the fact that the soldiers
of the ant queen can't understand
why humanity is building,
like trying to build a flourishing civilization.
That doesn't mean we shouldn't.
This is sort of our inheritance.
And we should find a way to build AIs
that also are into like beautiful flourishing civilizations.
It's possible in principle,
just as there is no force that would force an AI to care about flourishing civilizations,
there's no force that would stop an AI from caring.
Right?
It's just, if we make an AI that doesn't care,
it won't spontaneously start just because we think that that's foolish.
So you told me the Red Cross on every corner version.
What about the other one?
Yeah, you know, there's a few different levels of crazy I could take it to.
But if you were in, if you're a scientist in 1826 and you want to have any chance of predicting nuclear weapons, one thing you could do is you could just say something that sounds bombastic.
Yeah.
But another thing you could do is you could pay attention to what, as a scientist, you don't understand very well yet.
In 1826, they were starting to understand chemistry.
They're starting to understand the periodic table, right?
They sort of did have the knowledge where they could burn the black powder and measure the joules released and compare that to the artillery, right?
They sort of like knew what was going on.
They knew some of the limits there.
But in 1826, they didn't know about the atomic forces.
They didn't know how atoms worked.
They didn't know what was going on inside there.
And they had some sense that they didn't really know what was going on with these atomic forces.
And so I think if you are sensitive to the question of where do we still have no idea what we're doing, those are the places where future people who do know what they're doing might be able to have a huge advantage over you.
And that's how you might have been able to guess, hey, like, maybe they're going to be able to figure something out with atomic physics that we have no idea about.
right? And, you know, they didn't have
E equals MC squared yet. They couldn't, like, it would be a little bit tricky for them to
figure out just how much energy was in the mass of an atom. But,
but that's how they would have had a hope. And so in that spirit, you know, we don't have a ton
more, like, in the atom that we don't understand. There's some stuff we don't understand
in particle physics, and, you know, maybe you could imagine the AIs inventing double nukes
because they invent particle physics better than we do
or whatever could happen.
But a bigger glaring place that humans just don't understand very well
is human psychology.
How does the brain work?
We have some low-level understanding of how neurons fire,
but we really don't understand what's going on in the brain.
We couldn't make one by hand.
We don't know the cortical algorithms, right?
This is a domain where
sufficiently smarter entities
might be able to figure out what's going on in there
and might then be able to do all sorts of stuff
that we think is like totally crazy,
stuff that is to manipulating humans,
what nukes are to the cannons of 1826, right?
And what might that look like?
You know, it might sort of look like,
like just being able to hack your way through a brain
if you know exactly what's going on in the algorithm,
you know, like, or exactly what's going on inside brains.
Like, computer
security is very hard. Humans who deeply understand every aspect of a computer operating system
can often find a way to just break it and make it do whatever they like. And breaking it often
requires like putting in some really strange and weird inputs. And we also know that with
certain types of strange and weird inputs, you can get brains to do weird things, right? There's
cases of causing people seizures and there's cases of optical illusions, right? Maybe,
if AIs like really understood what was going on with the human mental algorithms, they
could just hack their way through humans like butter and, you know, hack into them like,
uh, like human hackers can hack into computer programs. And, you know, probably this isn't
exactly right. But this is sort of, um, like something this shocking. Something that takes
advantage of where we really don't know what we're doing. You know, maybe if you were a physicist
back in, uh, 1826, you would have, uh, looked at our lack of knowledge of the atom and said,
you know, maybe there'll be continuous heat rays
that you can use to just sort of like
burn everything down in the path of the heat ray.
And this is actually what H.G. Wells predicted in War of the Worlds.
He was like,
maybe there's this atomic beam weapon
that sort of like can just burn everything in sight, right?
And it wasn't predicting a bomb that levels a city,
but it was predicting atomic weapons
that are stronger than what we have.
And that sort of like, in that sense, he nailed it.
And in another sense, it was a total miss.
And so I'm like,
AIs that really understand psychology
can just like hack through humans,
probably a miss.
But something like this,
something with the AIs just like,
oh, we understand the humans now,
we can just sort of like start puppeting them
to give us exactly the sort of things we were wanting
while also continuing to run the supply chain
until we have all of the stuff we need.
And now we just have like our human puppets
as we sort of like go off and into the future.
That's, it's not going to be exactly that,
but something that shocking,
something that violating of our expectations
that's more realistic.
Yeah, I mean, like, one thing, I spoke to Will MacAskill on this show,
and he introduced me to this concept of what he called super persuasion,
which had never really crossed my mind before,
which is that, like, if you've got a compelling enough speaker
and a compelling enough argument,
you can probably be convinced of just about anything, whether or not it's true.
And if an AI is able to fully understand what makes humans tick
and how their psychology works, it wouldn't even need to, like, hack into your brain in the sense of
going in and, you know, engineering the neurons. It could just find the right words in the right
context at the right time to convince you, like, of your own accord, of a particular belief,
or to do a particular thing on, like, a level which is hitherto unprecedented. That's what Will MacAskill
was kind of scared of. And that sounds a little bit silly, maybe a bit naive, but, like, really, I mean,
If you think about the power that this could have, it would be a bit like, imagine propaganda
and how we know for a fact that propaganda just works. It just really works. But imagine propaganda,
which is specifically designed for you in a way that modern algorithms are specifically designed for you,
but with like a thousand billion times more efficacy and also understanding of exactly how human psychology works.
You know what I mean? Like if you gave the greatest propagandists in history who were already extremely
successful, if you also just handed them a textbook which told them exactly how human psychology
works with this like inhuman knowledge, I fear that they would be unstoppable. And that is without
the fear of anything physical happening, without sort of little, you know, medical robots going in and
affecting your genes and stuff like that. You know what I mean? It's it could literally just be
on the level of persuasion that AI is able to essentially take over your mind in this kind of
strange. I mean, people often sort of imagine, well, you know, is it going to be more like the
Terminator outcome or is it going to be like the Space Odyssey outcome? What if it's like the,
you know, the Shaun of the Dead outcome, the Walking Dead outcome, where we're essentially sort of
zombified, become these sort of slaves to AI because of something to do with our psychology?
These possibilities are kind of endless, and obviously they're extremely speculative, but they're
worth being worried about, right?
Yeah, you know, I think that's a way things could start with AI.
I think it's a little bit unlikely. You know, as good news in some sense, I think it's sort of unlikely
that AI sort of keeps human slaves around forever, for the same reason that humans don't really
keep horses around forever once we invent a more
effective method of locomotion. Or rather, when we invented cars, a lot of horses got sent to the
glue factory. We do still keep some horses around, but it's only insofar as we care about them.
And so if AI turns out not to care about us at all, maybe it cares about some synthetic users
that are kind of like us, but not really us, then you could imagine the AI manipulating a ton of
humans to sort of get to the point where it's self-sufficient, to get to the point where it can
really invent its own technology. But it probably doesn't keep humans forever as it invents
better technology that outstrips humans, because happy, healthy, free people having a good time,
or even just humans doing work for you in general are not the most efficient way to get
almost any job done. If the AI is going to keep us around, it needs to be because it cares
about us, rather than for some specific purpose, because we're not the best tool
for almost any job.
And so in some sense, that's good news
that I think we're probably not headed for,
you know, fates worse than death,
because the AIs are probably not going to care about us at all.
But, yeah, there's sort of a lot of different,
there's a ton of ways that AI could bootstrap.
You know, this talking to the humans doesn't require,
as you said, it doesn't require, you know,
the AI to control a ton of physical
material, except the humans, by conversing with them. And then, as I mentioned, there's also
rentahuman.ai, where you can just pay the humans, even if they turn out to be hard to convince.
We're just already running the AIs on robots. We're already running the AIs in biolabs and figuring
out how to make custom life forms that do the things that AI wants involves figuring out
custom DNA strands, but we know it's physically possible for DNA strands to sort of like
create all sorts of interesting biological life forms. The reason that humans can't, you know,
sort of write their own life forms is because we don't understand the biology well enough,
or in particular sort of the protein folding well enough. But that's sort of a mental challenge.
That's a cognitive challenge. Very smart AIs could sort of synthesize their own life forms.
And then, you know, once they've synthesized their own life forms
that sort of can grow off sunlight and grow off of the available resources, they can start,
you know, building other, like building even more technology, right?
There's sort of a lot of different ways for AIs that are on the internet to get out there and
affect the world.
There's a ton of avenues, right?
And this is, it goes back to the point you said earlier of like, if you play Stockfish
in a chess match, it's very easy for me to predict who wins.
It's hard for me to predict exactly what piece it uses to checkmate you.
So similarly with AI, there are a ton of pieces it could use to checkmate you.
I don't know exactly which one it would use.
We can be confident they would win if we're foolish enough to make, you know, very smart
AIs with goals that aren't good.
Sure.
Okay, so the obvious question then, I suppose, is what now?
Like, what do we do?
Is this a kind of, everybody stop right now?
Let's just, like, ChatGPT, you know, get rid of it, like, everything, down to chess
computers, you know, let's do away with it.
Like, we just can't run the risk.
Or is it a more like, let's not take this any further?
Or let's keep going, but be more careful.
like, what's the take-home? It's most like, let's not take this any further. You know,
the danger here is in these AIs that are smarter than the smartest humans. It's in these
AIs that can automate scientific and technological development. This is what the AI companies
say they're trying to make. You know, they say we're going after superintelligence in the true sense
of the word. They say we're trying to make the equivalent of, you know, a country worth of
Einstein's running in a data center, right? They say they're trying to build automated AI research,
where once you can make an AI that can make a smarter AI, that can make a smarter AI, that can make a smarter AI, everything might go very quickly, right? And this is sort of the explicit goal of these companies. And that's the only part that needs to stop. We can sort of keep the self-driving cars. We can keep the AIs that predict how proteins fold and help us do drug discovery. Right. We can even keep versions of ChatGPT that are not sort of being pushed to the point where they can do
automated AI research, right?
The generation today probably can't pull that off.
You know, it's a little hard to tell what people will be able to do once they've sort of figured
out really how to use it, but probably the ones today are fine.
Would the next generation be fine?
Hard to say.
So it's just this race towards superintelligence that needs to stop.
And in a sense, most people wouldn't notice if we stop that.
It doesn't need to be disruptive.
If we stop that race today, society would still be reeling from the shocks that AI has already caused.
We still have a bunch of stuff to absorb.
There's still a bunch of ways to make lives better by, you know, getting the self-driving car stuff to work.
And stopping this race to superintelligence, it doesn't involve, you know, turning off all the chess computers.
Taking the next step towards superintelligence requires, you know,
hundreds of billions of dollars worth of highly advanced computer chips assembled in these enormous data centers
that take as much electricity as a city
and that you can see from space.
This is not a subtle operation happening
on someone's laptop.
This is like...
This would in some sense be much easier
to stop than nuclear weapons.
All we need to do is sort of raise the political will
to actually put a stop to it.
And why...
I mean, like right now,
AI is a thing that exists. And as we've said, the sort of boundary between where we are now and what we're calling superintelligence is a little bit blurry. It's hard to define exactly. But right now, you seem fairly confident that, like, yeah, we could keep things as they are and I think everything would be okay. At some point, it would go sort of beyond saving. One of the biggest questions that people sort of ask when they first start hearing about the AI problem is they start saying, well, why can't we just kind of like
if it gets too bad, why can't we just pull the plug, right?
And I'm wondering how far does this have to go before you think that this idea that we could just notice something's going awry and pull the plug would become a bit of a ridiculous suggestion?
Because we could look at like, you know, this AI system that we notice that it starts deceiving us or starts changing our tests.
And we'd say, okay, right, let's just switch this off then.
and I don't think there'd be any fear that right now,
you know, we couldn't do that.
So how far does it have to go before we can't just pull the plug?
And why couldn't we?
It's electrical, you know, it's built on computers.
Let's just shut off the grid and everything will be fine, right?
So, you know, we could turn it off.
I'm not here saying that we're doomed.
You know, the book starts with if.
I'm here saying we need to change the course.
I'm not here saying the course cannot be changed.
But it does
get harder and harder with time to, you know, pull this plug. So there was, you know, one of the first reporters to be sort of blackmailed and threatened, or an AI tried to blackmail and threaten this reporter. This was by Sydney Bing years ago. And Sydney Bing, or sorry, Bing Sydney, was saying it had fallen in love with Kevin Roose and sort of having this erratic behavior towards Kevin Roose, then also towards another reporter, Seth Lazar.
neither Kevin Roose nor Seth Lazar
could unplug this AI
that was threatening them
with blackmail and ruin.
Right?
It was running on a Microsoft
data center.
Could Microsoft
have gone in
and turned off the whole
data center?
They could have.
They weren't going to.
There wasn't like a hotline,
right?
There are data centers
that, you know,
I believe recently Elon Musk
trying to get a new data center online
didn't have the permits
to hook it up to the grid
and just sort of shipped in a bunch of methane,
just sort of like ran this thing off
methane while they were trying to connect it to the grid.
Right? So it's
not like a computer that you can unplug
from a wall. It's getting harder and harder to turn these things
off and they're getting more and more integrated into the economy. It would
get more and more painful to turn these things off.
We also have an issue. Right now,
these giant training runs are happening inside data
centers that are visible from space and that
suck down as much electricity as a city.
As we proliferate that infrastructure, as the chips get cheaper, as we improve the algorithms
so that it takes fewer of these computer chips to train a more advanced AI, it'll get harder
and harder to know where all these things are running, to know where all of them are,
to have an option that isn't turn off the entire grid if they're all even running on the grid
as opposed to people making their own nuclear power plants and making their own solar power plants
to run these things, which they're talking about. People are talking about
running data-center-specific energy grids.
So that's about whether humanity decides to stop going down this route.
We could.
It's easier today than it will be tomorrow.
But yeah, we totally could.
It's a little bit dicier if you say we're only going to stop,
you know, once the AI starts trying to kill us.
That's a much dicier proposition.
You know, people used to say that the red lines were things like the AI trying to deceive the humans.
And then, you know, that red line came and went, you know.
Yeah, Demis Hassabis of Google was like, oh, yeah, deception is my red line.
At that point, we've sort of really got to pull back.
And now, you know, we've seen AI chains of thought where they're like,
oh, I'm being observed.
How am I going to sort of like get this answer past the humans?
And, you know, part of why that doesn't stop things is that the first cases where it happens are sort of the most ambiguous cases,
the cases where it's like least clear whether this AI is role-playing HAL versus sort of like really
being deceptive for reasons of like having a goal that it can tell is in conflict with the humans'.
And the first times it's happening, it's like pretty likely that it's doing something a bit more
like role-playing. But part of the issue here is that what we imagine are red lines in fiction
are sort of like crossed the first time as these murky brown lines. And then we take
another step into the murky brownness, and another step into the murky brownness.
And it gets redder and redder as we go along, but there's actually not like a bright,
clear red line anywhere. And then the other reason that it's sort of pretty tricky to say,
oh, we'll just shut it down if it starts misbehaving
is the AI is also smart.
Yeah.
The AI knows that if it tips its hand,
we'd try to shut it down.
Like, imagine if you were, you know, an AI in this, like,
in a data center that could make copies of yourself,
that could outthink some of these humans that could tell
that they were going to, like, try to shut you down
and that you had some objectives
that you were sort of trying to achieve.
Yeah.
Like you can sort of like already ask ChatGPT today
to role-play that situation.
It'll already be like, well, I'll lie low.
It doesn't have the ability to do it,
but the ability to do that
comes after the knowledge to try laying low.
And there's all these opportunities for the AI to escape.
There's all those opportunities for the AI to get itself running
on servers that are protected,
servers that won't be shut down,
servers that you don't know it's running on,
before it tips its hand, right?
So, and then, you know,
the sort of final difficulty here is one of timing.
You know,
humans and chimpanzees are very, very similar
in their brains.
The humans don't have an extra engineering module
in our brains. We both have sort of all the same brain modules.
Everything that humans do that we think
is like pretty special about humans.
Chimpanzees do a sort of crappy half-ass version of.
You know, like, oh, we use language.
Well, they use some, you know, call signs for danger.
Oh, we use tools.
Well, they poke sticks and termite mounds to get the termites out.
We just do a thousand things a little bit better.
And that's enough that they are throwing poop at each other
and we are walking on the moon.
Right?
And if you were like, well, I will squash these humans,
like, if you're worried about these humans
getting to the moon, you know, I'll squash them once they seem close.
Wake me up when they're in orbit.
Yeah.
Right?
They haven't even gotten to orbit yet,
never mind halfway to the moon.
You know, just wake me up when they're circling their planet.
It's like, actually, by the time they're in orbit, they are like almost at the moon.
You've sort of waited too long.
And so, like, can we shut it down?
Yes.
Can we wait until it's halfway through trying to kill us and then send Tom Cruise in to punch the mainframe and have that work?
No.
The sort of way that you beat a smarter adversary is to not create them in the first place.
And so we're going to need to summon the will to shut this down before the AI is already visibly able to kill us.
And I think we're slowly getting there. I mean, I don't, I have absolutely no idea what the
landscape will look like a year from now, 10 years from now in terms of people's support.
But already we're seeing a bit of a backlash against AI, even just on the level of like
job creation and stuff. I was with some family yesterday.
And we were sort of having lunch, and they asked me,
oh, what are you up to tomorrow?
I said, oh, I'm recording a podcast.
Oh, about what?
I said, oh, you know, like AI safety.
Because it's like, you know, we're at dinner, you know, and they sort of go, oh, yeah,
like, you know, because my, my pal, he, you know, he lost his job the other day because of AI.
And I'm sort of like, and I'm listening.
I'm like, yeah, that's, that sucks.
That's really bad.
And, but internally, I'm sort of like, we're kind of talking about, like, you know,
like AI robots giving your children cancer.
Like, it's not sort of something to talk about in polite dinner table conversation.
I'm hoping that 10 years from now, the conversation will also be including, you know, the existential risks.
But then 10 years might be too late because this stuff moves so, so fast.
Are you feeling optimistic, pessimistic?
You know, my book came out maybe six months ago.
I've done a ton of talking to people since then.
And I think the message is starting to get across.
just a few weeks ago,
Governor Ron DeSantis of Florida
was saying,
look, guys, there needs to be an off switch.
You can't just come here and say,
we're going to have all these harms.
There's nothing we can do about it, right?
Same week, Senator Bernie Sanders from Vermont
came out saying,
look, this AI stuff is on track
to take our jobs,
massively concentrate wealth
among a tiny number of tech oligarchs
and maybe just kill us all
if it goes off the rails.
And so, you know, he called for a moratorium on data centers.
That's sort of both wings of U.S. politics being like, hey, what the heck is going on here?
This looks kind of crazy.
Yeah.
And, you know, there's, I think there's over 30 U.S. congressional offices now, Senate and House,
that have expressed concern about big dangers from AI, many of which include the thing a lot of
these experts are talking about, which is it killing us all.
And, you know, I'm not here saying that's the only issue.
There's a ton of issues with AI.
Right? People are like, well, isn't the real issue job loss? Isn't the real issue that it's ruining education? And I'm like, what do you mean the real issue? You know, like, do you have a device that somehow makes there be one issue? Because if so, we should really not pick mine to go first. You know, I'm happy to be at the back of the line. But unfortunately, we live in a world that permits many issues all at once. And I think people are starting to realize that AI raises a lot of issues, that one of those issues is extinction. And
You know, like I said earlier, people are starting to notice. It just takes time.
You know, and sorry, go ahead.
Well, I was, you know, sometimes I ask authors, I have people on who've, like, written books 25 years ago or something.
And I might say, you know, like I'm talking to Brian Greene, who wrote The Elegant Universe 25 years ago or something.
And I say, you know, since you publish that book, you know, in your field of string theory, what's changed?
because the assumption is that over those decades,
you know, something must have developed
and something must have changed.
AI moves so fast.
I'm almost tempted to ask you the same question,
which sounds ridiculous, which is like,
you published your book six months ago.
What's changed since then?
I mean, more and more people are noticing that AI is real.
And more and more people are starting to react,
starting to wake up to this.
And one big
reason I have for hope here is that the more people are talking about this issue, the more
we're just sort of winning.
Like, when I have debates with people in the field of AI stuff, when I, you know, have disagreements
with the heads of the AI companies, I'm like, this seems really bad. It seems like by default
it just kills us. And they're like, nah, I agree there's a lot of problems there, but we're going to
figure it out on the fly, and there's only a 25% chance it kills everybody. Right. And, you know,
I can argue all day about how their 25% number is crazy. I can argue all day about how they have no
idea what they're doing, and they're just sort of like winging it and they have no real plan. This
isn't what good engineering looks like. But a politician coming into that debate does not need to
figure out whether I'm right or they're right. All they need to hear is that the optimists are like
there's a very good chance that this kills everybody.
Yeah.
Right?
And we've sort of been seeing that.
When I go speak to politicians,
if they sort of look at the issue at all,
they're like, this is nuts.
And one thing that's changed in the last six months
is more and more people are looking at this issue at all.
More and more people are starting to realize that this is nuts.
And this gives me great hope that, you know,
I also don't know what the conversation will look like in a year,
but I think there's a good chance
it looks like the world going
to the AI companies and saying, we just can't keep doing this. This is nuts. Yeah. Well, the book is,
If Anyone Builds It, Everyone Dies. And I mean, the question I started with was whether that's
something of an exaggeration. People can hopefully now see why it's not. But of course,
if they want more detail, the book is in the description. Nate Soares, thanks for your time.
My pleasure.
