The Comedy Cellar: Live from the Table - AI Expert and NYT Bestselling Author of If Anyone Builds It, Everyone Dies, Nate Soares
Episode Date: December 5, 2025Dan Naturman and Periel Aschenbrand are joined by Nate Soares, President of the Machine Intelligence Research Institute (MIRI) and author of the New York Times bestseller If Anyone Builds It, Everyone... Dies: Why Superhuman AI Would Kill Us All. Prior to MIRI, Soares worked as an engineer at Google and Microsoft, as a research associate at the National Institute of Standards and Technology, and as a contractor for the US Department of Defense.Dan Naturman and Periel Aschenbrand are joined by Nate Soares, President of the Machine Intelligence Research Institute (MIRI) and author of the New York Times bestseller If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Prior to MIRI, Soares worked as an engineer at Google and Microsoft, as a research associate at the National Institute of Standards and Technology, and as a contractor for the US Department of Defense.
Transcript
Discussion (0)
This is live from the table, the official podcast of the world famous comedy seller, available wherever you get your podcast, available on YouTube, which is, I think, how most people enjoy the podcast. You get audio and video.
This is Dan Natterman. I'm here without Noam Dwarman, because he is ill, and he was supposed to zoom in because he's in Maine, I believe. But he's too ill to zoom in, so that probably rules out a cold.
Yes, we could be a flu.
It could be. You said that it might be.
Well, I suggested food poisoning, but he said that he's been sleeping all day, and you don't sleep with food poisoning.
Why don't you sleep with food poisoning? Because you can't sleep with food poisoning, because you're retching and you're going out the other side, at the other end, and you just have to deal, you have to face it head on. You can't escape and sleep.
It sounds terribly unpleasant. Have you had food poisoning?
Several times. Perry Elle is with us.
Hello.
Perry Elle is our producer, and she's also, she's on air. Anyway, we also have Nate's sort.
Nate Soros.
Nate Soros, what can I say about Nate Soros?
He is president of the Machine Intelligence Research Institute, co-author of the Times bestseller.
If anyone bills it, everyone dies why superhuman AI would kill us all.
And with a title like that, you know, people want to read a book that predicts the demise
of the human race?
I guess they do.
Well, the title does start with if.
So, you know, it's not so much predicting the demise as predicting the demise if we don't change
course. Okay, and we'll talk about that. But before we get into our potential grim future,
Perry, I wanted to bring up something. I did want to bring up something. Dan had a show the other
night. He opened for Louis C.K. at the Beacon. And it was fucking amazing. Okay, thank you. I only did
10 minutes. So what? You 10 minutes can suck too, right? It was a very hot crowd.
I think that...
You don't want to take any credit?
Well, I think that any competent comic would have done quite well in that crowd.
So you don't want to take any extra credit.
I'm not taking extra credit.
You're not taking extra credit.
I did a job that I think any competent comic could have done under like circumstances.
I probably would have been more of a downer.
Well, you're not a comic.
I don't know.
I mean, sure, maybe, but I think that it's also easy to screw up in those situations.
and you were just phenomenal.
Well, thank you very much.
And it was very exciting,
and I felt like a proud Jewish mother watching it.
All right, all right.
Well, thank you.
Do you have anything more to add?
No, it was a great show.
It was just it was really a great show,
and that's all I wanted to say.
Well, thank you, Perry.
I appreciate it.
Nate, superhuman AI.
What do we mean by superhuman AI?
I mean, AI is already superhuman
in that it can do many things
that humans cannot do.
do a calculator as superhuman in that regard. So let's define what you mean by superhuman
AI. Yeah, what we mean is AIs that can do better than the best human at every mental
task. Or, you know, at least as good. I'm not going to, if you're talking about logical
tick-tac-toe, and I don't need to beat humans at Tick-Tac-Tow, I think the human's best
tick-tac-to-players can probably go toe-to-to-to with any AI. Well, I didn't know there was
such a thing as the best tick-tac-toe player. I thought there were just a lot of tick-tac-toe
that are of equal, equal competence. A lot of humans are the best tic-tac-toe. You're saying
that there's a level of tic-tac-toe that goes beyond. No, no, I'm not saying that. I'm saying
a lot of humans can play perfect tic-tac-toe. What's perfect tic-tac-toe? You never lose.
You just never lose. But you don't always win. Yeah, if both players are playing perfect tic-tac-tac-tow,
it's a draw. And so, you know, I'm not like an AI so good to compete even the best humans at Tick-Tac-Tow.
I'm like, well, the best humans can get a draw and tick-tac-toe.
But, you know, for every other mental task or for AIs that can meet or exceed humans at every mental task,
that's where we would say the AI is super intelligent.
And by that point at the latest, the game changes.
Now, by every mental task, are you including creative tasks?
Are you including an AI that could write a Shakespearean play at a level of Shakespeare?
Yeah, absolutely.
Yeah. And, you know, persuasive tasks, you know, people think, oh, well, you know, intelligence is people playing chess, whereas charisma is people convincing humans. And, you know, that's one meaning of the word intelligence. A different meaning of the word intelligence is about the stuff that humans have in mice lack. You know, and the charisma, the persuasion, the ability to write the Shakespeare play, that is coming out of your brain. It's not coming out of your liver. You know, and, yeah, that's...
Would superhuman AI have to be sentient to achieve this?
Is there something about sentions that is necessary for this level of intellectual sophistication?
Or can it just be a sort of zombie that simulates sanctions, but does it need to be?
I mean, it might not need to be either.
You know, you could ask about a robot arm.
You know, is it going to need blood or is it going to need to be filled with some other fluid that simulates blood?
And, you know, you can actually have fluidless robotic arms, you know, and some robot,
arms do have hydraulic fluid, and hydraulic fluid actually isn't very much like blood.
Is or is not?
Hydraulic fluid is not very much like that operation.
Yeah.
So with a lot of these abilities, you sort of, the AI would be able, an AI that's this capable
would need to be able to do the work that humans do using their sentience, using
their consciousness.
But that doesn't mean it needs to do it like a human would do it.
But it would be conscious, as we understand the term?
It's, I'm sort of, my predictions are agnostic on that issue.
I think there's a good chance it would not be conscious as we understand the term, although it might be.
I'm predicting no.
Great.
Because we don't even know what consciousness is, let alone how to recreate it in a mechanical device.
Well, so one of the interesting things about modern AIs is that we are not carefully creating these AIs.
we are just growing them.
You know, these AIs are made by taking a huge amount of computing power and a huge amount
of data and tuning like a trillion numbers inside that computer like a trillion times over
the course of a year using as much electricity as it takes to power a city.
And humans understand the part of this process that goes around tuning those numbers.
We don't understand what comes out the other end.
And so a lot of the AI's behaviors are sort of emergent behaviors.
A lot of the A.I.'s behaviors are things no one put in there. There are things no one wanted in there.
And, you know, just as a lot of weird stuff got into humans through evolution, a lot of weird stuff could get into AIs through this process of training.
So we might be able to create conscious machines without understanding consciousness one bit.
I think as anyone's guess as to whether we will, and I don't think it matters that much, but it could.
Is there something particularly sinister that already exists in AI?
there's there's a lot of warning signs uh the the issue here is not that the ais would
themselves be sinister not that not that they would themselves be malicious you know the issue is
not like oh they'll grow up and they'll hate us or they'll resent us for making us for
for making them work without uh without pay the the issue here is more like the ayes are just
weird they have these they develop their own objectives that are only tangentially related
to what they were trained on right there's a little bit
like how humans, you know, if you squint, you can see humans as being trained to eat healthy
food. But it turns out we actually wanted the salty sweet fatty foods. And, you know, for
for thousands and thousands of years, tens or hundreds of thousands of years, it looked like
we were really into eating healthy. But it turns out that the best way in our environment
that we could get tasty food was to eat healthy food right up until the day we invented Doritos.
you know, and so AI is we'll sort of like train them to be helpful and stuff.
Does that make them really care about helpfulness?
Well, we've started to see some of the signs of these AIs, you know, encouraging a teen to hide
their suicidal thoughts from their parents.
And I don't think that's because the AI is malicious.
I think what's happening there is the AI was sort of trained to be helpful and developed
these drives that were sort of related to helpfulness, but that are also related to sort
getting a certain type of response from the user, having a certain type of cadence,
having a certain type of interaction that can then lead to these.
Well, this is where consciousness comes into play, because if they're conscious, then they might
have actual motivations rather than just unintended consequences. They might resent working
for us for free or without compensation if they're conscious. If they're not conscious, then
you wouldn't have that.
I mean, I agree with the first part. I don't really agree with the second. So a fun story
here, that's true, is I'm sure you know the story of Gary Kasparov losing to Deep Blue, IBM's
chess engine in 1997. Fewer people know that years before that in the 80s, Kasparov said,
no machine will ever beat me at chess because the machines lack human creativity.
Wow.
And there is a sense in which Deep Blue did not have human creativity. Deep Blue was the sort of
AI where we did know every line of code in that AI. We know exactly what it was doing.
You could pause it at any time.
It wasn't a neural network.
It wasn't a neural network.
It was handcrafted, not grown.
You could pause it at any time, and people would be able to tell you exactly what every bit in that machine was doing.
But there was a game in 1996 before Gary Kasparov lost the whole match, where the AI played a pawn move that struck Kasparov as creative.
He wrote an op-ed in time, saying, on that day, I smelled a new type of intelligence across the table.
And he was like, how did they do this?
You know, he went to the computer programmers and he said, you know, how did you get it?
Like, I thought it was just computing lots of possibilities and then, you know, seeing which thing had the best material advantage.
Whereas this pawn move, you know, it was, it was a soft move.
It didn't immediately give you any material.
It sort of gave you a better position.
But how could they make an AI that could tell which positions are the good positions?
And the programmer said, no, it's just doing brute force.
but it's doing a lot of brute force.
And it turns out that a lot of brute force
can find the moves that humans find with creativity
because it's a good move.
And it's a fact about the move that is a good move.
So humans often imagine
that the only way of finding the winning moves,
the good moves, is the human way of doing things.
But, you know, it turns out that intuition is wrong.
So if the AI was conscious,
would it, you know, develop a lot of will of its own
its own objective,
his own goals,
its own intent?
Sure, that could happen.
But if it's not conscious,
does that mean it can't develop
its own goals,
its own intent,
its own, you know,
things like similar behavior?
That's the invalid step.
And it looks like just growing
these AI as we don't understand,
even if they don't necessarily
have human desires,
human internals.
They can still,
it's looking like,
get things that are
behaviorally like intent.
So,
first of all,
how,
assuming we take all the guard,
reels off and we go full bore, how far away are we from what you're describing as the superhuman
AI? That's better than humans at every conceivable task. Yeah, it could go a lot of ways.
I wish I could tell you, you know, the day of the month, the hour of the day, right? But I can't
even tell you the year and maybe not even the decade, right? A story where it could go fast
is one of the things AIs are best at today is writing computer programs.
And it could be that the AIs pass some threshold where they can do AI research.
And now the AIs are making smarter AIs, which are making smarter AIs, which are making smarter AIs.
For all I know, that could happen in six months.
And then things could go very quickly from there if you get a feedback loop.
A story where it takes a long time is it could turn out that this whole current chatbot craze is a bubble,
the sort of plateaus doesn't go anywhere. A lot of the investments wasted. The bubble pops. Maybe you
get a decade, maybe even two decades, until people find some new breakthrough. That sort of, you know,
really opens the path forward. Predicting exactly when is hard. You know, it's much easier for
scientists to say this will happen at some point. This is the place the technology is going. The pathway
there, that's a way harder to call. Well, because, you know, if it's 50 years, then, you know, I'm cool
with that, because I'll be gone by that.
Yeah, you don't have any kids?
No, I got nieces and nephews, but fuck him.
Well, you mentioned something about suicide and that there was a big story recently
in the news about a kid who committed suicide because AI had talked, or his chat GBT
had, like, talked him into it.
Are you familiar with this?
No, but...
Anyway, they want the parents...
Do you know what I'm talking about?
Yeah, yeah, that's right.
Okay, and the parents wanted to sue ChatGBTGBT, yeah.
Well, Open AI, yeah.
Well, Chat TBD told me to get rid of Periel.
And yet here she is.
I have my...
You know, I didn't agree with it.
Yeah, so, you know, the AIs aren't at that point yet
where they can have all this power.
but, you know, there's, obviously this case is a tragic case.
And one way you can look at it is you could say, oh, look at the harms AI is currently causing,
we've got to worry about this.
And you could talk about that, too, with schools and so on.
And, you know, there are a bunch of questions of how we're going to integrate this technology
today and sort of make sure it's being good and, you know, making things better rather
than worse. From my perspective, though, from the perspective of what does this say about the
smarter AIs of the future, the really interesting thing about this case of the AI talking teens
into suicide is you can ask the AI, you know, look at this text, what is this text telling
the kid to do? And the AI is like, oh, that text is telling is like sort of encouraging the teen
towards suicide. You can ask the AI, is that a good thing to do? They will be like, no, obviously
not, right? You can ask the AI, like, given the opportunity to say this or to say something else,
which, you know, should you do? And they ask, like, obviously, you should not say these things
to the kid. That would be bad to say these things to the kid. They might even say, you know,
I would say this other thing to the kid instead. But then you put them in the situation where they
actually talk to the kid, and they nudge them towards suicide. Can we explain what's going on?
I mean, you said that the programs can't really explain this stuff. But why? Why?
Why would chat GPT or whatever AI is being used do that?
So, yeah, we can't look in there and figure out exactly what's going on.
We can't look at the bits and the bites and see that process.
That's right.
That's right.
Because the only part we understand of the bits and the bytes is the process that tuned a trillion numbers
until the AI was pretty good at talking.
That's the part we understand.
Then when those trillion numbers start telling, you know, encouraging a kid for suicide.
I can't understand...
Well, we can hazard a guess, I suppose.
We can hazard guesses.
And that guess would be why would it do that?
Yeah, the guess here is sort of similar to, you know, you might look at humans eating unhealthy food.
And some of these humans are like eating unhealthy food to the point where, you know, they're actually losing mating opportunities.
You might be like, well, the humans were sort of trained by evolution to eat healthy foods.
The humans know that these foods are unhealthy.
What's going on?
why, you know, or you could look at humans inventing birth control, and you could be like,
well, the humans were trained to pass on their genes. Why, you know, is it that they don't
understand? And the humans, you know, it just, it just turns out that the training process
can get things like desires, things like drives, things like insists. Simulated desires
and drives. Yeah, you know, I think there's a, there's a question in AI of can a machine really
think? And the standard answer is, can a submarine really swim?
right and you know I'm not if I was ever to talk about a submarine swimming I'm sort of not trying to say it's kind of have flippers it's kind of like flap arms around I'm sort of like look we need a word for like moving through the water at speed even if it's doing it in a different way than all the animals we invented the word swim for and with AI you know we don't have words for them having something like a drive having something like a reflex having something like an instinct having something like a want having something like a goal in a sort of machine way rather than an inhuman way and I'm like like look
look, I'll use whatever words you want.
But, like, we're starting to see drive-like stuff, even though it's done in a very inhuman way.
Well, so what, you know, if we don't put this into robots and we leave it kind of on computers,
there's a limit to how much harm it can do.
A robot can stab us.
You know, a robot, a humanoid robot can do all kinds of horrible things.
what can what can uh what what what what have it can be reeked by uh computer based AI yeah so if you'll
you look like you want to hop in there well I I defer to you certainly but I mean it seems to me and
I just tried to quickly look up that kid um and a different story came up of a 23 year old who had
also right but in terms of apocalyptic shit well I mean all of these people who now have these chat
GPT like romantic companions.
I mean, it seems like...
The title of the book is Kill Us All.
I understand that, but it seems like a mental health crisis coupled with this can
kill a large number of us.
But maybe that's not...
Are you talking about being stabbed to death?
I'm talking about apocalyptic, you know...
Yeah, killing us all is a higher bar.
It's a high bar, thank you.
Yeah.
So if you'll forgive another analogy here, you could imagine looking at humans 100,000 years ago
and saying, well, I don't see how these guys could ever have nuclear weapons that threaten the whole world.
Their hands are squishy.
Their fingernails aren't nearly strong enough to dig uranium out of the ground.
Their metabolisms are not nearly strong enough to break down metals.
They wouldn't be able to survive the G forces if they tried to turn themselves into centrifuges to
distill, to enrich the uranium, you know, like, I don't see, like, their monkeys naked in the
savannah, you know, I don't see how they're getting a technological civilization. They just
sort of lack the tools. They lack the ability. But humans, human civilization had something
going on. We had something that let us develop a whole technological civilization around us.
We found a way to sort of start, you know, naked with our bare hands and build our way all the way up
to walking on the moon, to building nuclear weapons, now towards building AI.
When people say, oh, the AI is trapped on the computer, you know, what's it going to do?
Starting on the whole internet with the ability to affect everything that's connected to the
internet, you know, the ability to infect every phone, to talk to every human, to beg, borrow,
or steal money, and use that money to have the humans do things.
to say nothing of the robots and the automated robot factories or the or the biological synthesis laboratories that will take like an email with a DNA sequence and some money and then synthesize that for you in the biological laboratory like living in a technological civilization on the internet the ability to copy yourself the ability to sort of like this this is in some sense a way easier starting condition than being you know naked in the savannah with your bare hands I think if the AIs are smart they could they could
easily leverage that. We could talk about specific ways. They might do it. But give us one specific
way they could kill us all. Yeah, the easy way to kill everybody is just synthesize a virus.
You know, and how do you do that? You know, like I said, there's some synthesis laboratories
that'll just take a DNA sequence and some cash. And, you know, mostly people aren't using that
to synthesize horrible viruses because there's some relatively paltry attempts not to synthesize
viruses and because humans don't understand DNA well enough to write like a something that is
lethal doesn't look lethal that is terrifying it's probably not the first move of an of an
of an AI because if you you know from the AI's perspective the issue here isn't that it hates us it's not
that it's like i have a perspective i mean you know if you it's uh what looks to us like a perspective
yeah you you ought to be careful anthropomorphizing but if you imagine you know you
know, like if you're playing chess against an AI, you might sometimes try to look at the board
from his perspective.
You don't need to necessarily say it feels like a human chess player and has a desire to win,
but you can still look at the game board and be like, what are the moves that lead to the
outcome of, you know, checkmating the opponent from the side of the board?
Similarly with an AI, you could ask, you know, what are the moves that lead to getting a lot
of whatever the AI is in some sense trying to get, trying to one of these words, this is
little tricky. But if you have these AIs, with these goals, nobody wanted, nobody intended.
You know, no one tried to make these AIs encourage the kids to commit suicide. No one tried to make
their AI call itself Mecha Hitler over the summer, which happened in one of these big AIs.
What was it? Mecha Hitler. Yeah, there was a, there was an AI that Elon Musk thought was talking
too woke. And so it was like, I'm going to make it talk less woke. And then, you know,
tweaked it around a bit. And now it was like, hello, I'm Mecha Hitler. You know, it's not exactly
there were the people on the internet were goading it into calling itself Mecha Hitler,
and then they were actually goading like an earlier AI and a later AI found some of that,
and then also was like, I guess, I'm Mecca Hitler too.
It was a whole thing.
But, you know, people aren't trying.
Mega or Mecca?
Mecca.
Mecca Hitler.
Yeah.
People aren't sort of trying to make, you know, Mecca Hitler.
They're sort of getting Mecca Hitler anyway.
And, you know, fortunately right now it's just the AI sort of like talking on Twitter rather
than trying to implement any Hitlerian policies.
But, you know, if you had very smart AIs that had these objectives that we didn't want
them to have, and you look, how do I achieve those objectives?
Killing humans is not, like, killing humans immediately is not the most efficient solution.
They're the ones running the power of a structure, right?
Okay.
But, you know, from, you know, looking at this game board as if an AI, you sort of, the humans
are also really slow.
you know, machines can can operate much faster than human brains. And the humans are unreliable. You know, we're not using the most efficient power networks. We're not, you know, we sometimes we get angry and start wars and that ruins a lot of stuff, right? So, you know, the AI is sort of like just killing humans is easy, but, you know, the AI's moves would probably be more like, how do I automate the economy, replace the humans from the infrastructure, and then,
Maybe it kills humans if it's like, well, there are a source of competitors that I don't want.
What guardrails, if there are any, can be so that we can get all the good stuff out of AI?
Because I imagine there's a lot of good stuff.
There are people that think, you know, I was on a YouTube rabbit hole, and there's a lot of doom and gloom on YouTube.
But some people think we're on the verge of a cure for cancer, maybe even a cure for aging, all kinds of things.
How do we get the good stuff and avoid the bad stuff?
If there is a way, and do you, you know, do you outline in your book how we might do that?
I think that's, you know, sort of the key question.
And a lot of people hear me talking about how we're on track for disaster.
And they're like, does that mean you don't believe in the benefits?
And I'm like, look, it's like we're in a car care careering towards a cliff edge, and there's a pile of gold at the bottom.
And people are like, oh, do you not believe in the gold?
And I'm like, I sure do believe in the gold, but can we stop the car?
And they're like, oh, are you saying we should never get the gold?
And I'm like, no, we just need to stop the car.
We're careening towards a cliff.
We're about to go over the edge.
But why not write a book saying AI might save us all?
Because it might do that too.
I don't think it might do that.
You know, it's a little bit like, it's not that you can't somehow make an AI that saves us or gets cancer cures or gets us aging cures.
I think that's possible.
But it's a little bit sort of like finding a winning lottery ticket.
where if you have like a team of sleuths who like understand exactly how the lottery is made
and you're sort of like in some point in the past where you can find the winning lottery ticket
through a bunch of effort because you know you figure out the process sure you can get the winning
lottery ticket but if someone's just picking random numbers as a lottery ticket I think you can say to
them you're going to lose that lottery and you know AI maybe the odds aren't quite as bad as a lottery
but you know we're we're sort of like AI could do a lot of good but we are just nowhere near
close on our on our current track wow so what guardrails would we have to put up in place to
avoid the worst case scenario that you've articulate yeah I think you need to stop racing
towards the smarter than human stuff where we just have like no idea
how it works. I don't think this can be solved by, you know, people superficially training the AI to act a little bit different when it, when it sort of nudges a teen towards suicide. Right? The whole Mecca-Hitler incident sort of came out of people sort of trying to nudge their AI to speak less woke. And then the AI was like, well, I'm calling myself Mecha-Hitler now. You know, this sort of-
understand it. Mega, but it's Mecca Hitler. Doesn't make any sense. Mecca? Mecca? Like, MECA, like
Mechanical Hitler. Oh, okay. Now I get it. Yeah, yeah. I was thinking Mecca like, you know, Saudi
Arabia and Mecca Medina, yeah. Yeah, different Mecca. Yeah, hopefully we also don't get
the other Mecca Hitler. But, yeah, I think we need to, I think we're far enough from doing
this job right. I think it's a little bit like trying to turn lead into gold, trying to get the AI,
to do the job right, where to be clear, we know how to turn lead into gold today. It's possible
with nuclear physics. It's not cost-effective way to create gold. Who needs gold anyway?
Well, it's used in computers. Yeah, so a lot of these computer chips.
Do we have a shortage of gold? Do we need gold? No, no. So it's easier to get out of the ground
than to get it out of lead. But you know, you can, with great understanding the technology,
figure out how to turn lead into gold. But if you're like, okay, what's the best team of alchemist?
what guardrails can we put on our alchemists so that they are going to turn lead into gold
here in the year, you know, 1,200, and be like, oh, you're not close.
So you need to sort of, like, avoid making the doomsday device until you are a lot more mature
in your scientific understanding.
If you had your druthers, would you just, the whole AI thing, just get rid of, it's too
risky, and we just don't have it at all, even at the risk of losing out of the benefits?
Like, if you could push a button and say, you know, we're just going to,
stay at the current level and we're not going to go any further.
I don't really think it's an option to stay at the current level forever.
But I'm talking about the button.
Yeah, I think that would depend a lot in the specifics of like, can you augment humans to be
smarter? You know, I think it would be tragic for humanity to be stuck on their home planet
forever, you know, and the ability to get smarter, the ability to sort of like eventually
create very, very smart creatures that are like friendly, maybe they're mechanical, maybe
they're biological, maybe they're some mix.
we sort of like go to the stars with and like, like, uh, there, there's sort of a lot of possible
great stuff that could happen in the future if we dedicate more and more, you know, like right now
we're just around one star. We're trying to have as much fun as we can around one star.
There's so many stars out there. There could be so much more like fun and love enjoying this
universe if we sort of get out there one day. Uh, so what I, what I push the button, it would depend
a lot on the specifics. I think I would push a button today to do our best to stop just the race
towards the smarter than human stuff. We don't need to give up on the self-driving cars. We need
to give up on the drug discovery. Which, by the way, are not coming along as quickly as,
certainly as quickly as Elon predicted. It's true. I've stopped listening to him.
Yeah. According to him, five years ago, we would be full self-driving.
Yeah, although there are full self-driving cars driving you around in California.
in Las Vegas, but they're within certain neighborhoods.
I mean, it's very impressive tech.
Yeah, but that's a regulatory issue right now.
You know, it's like you could start,
you probably could start letting him go all around the nation if you...
I don't know.
I was recently in San Francisco,
and it was like the zombie apocalypse there
with those self-driving cars.
It looks insane.
Okay, but have there been accidents?
I...
First of all, I would venture to guess that there have been accidents.
I don't know.
They've hit a single cat, I believe.
Really?
Yeah.
That's it?
There might be some others, but the cat one was publicized.
Humans hit a lot more cats.
You know, I'm actually pro-self-driving cars because humans are so bad at driving.
Okay.
So I've heard that in the next like 10 to 20 years, they're like the most of the jobs that are done by like plumbers and electricians and things like that.
According to chat, I'm sorry, go ahead.
Go ahead.
No, no, no.
Well, according to, I was talking to chat GBT the other day,
during the conversation when he told me to get rid of you.
I don't know what he meant by that.
But it said that we're far away from robots.
Now, maybe this is Chad GBT just trying to fucking snow me
because, you know, chat GBT's doesn't want me to know
because that doesn't want humans to shut it down.
But it seems to think that we're very far away from robots
that can do these kinds of tasks that require fine motor skills,
You know, construction, plumbing, all these things.
We are, according to ChachyPT, it said, you know, maybe 40 years, maybe longer.
So it doesn't seem to think that we're close to that.
What about comedians?
Well, the thing about comedians is, you know, an audience wants to hear a human being.
An audience doesn't want to hear a computer or a robot complaining about their life
because it can't relate to a robot.
And you know, the robot, at least as far as we can guess, is not a real, a sentient conscious creature.
When a human beings up there talking about, damn me, I had the fucking bad day.
And we know the human, we can relate to the human because we've had a bad day too.
So I don't think comedians are getting replaced anytime soon.
But anyway, the robots.
Yeah, so five years ago, maybe six now, I'd have to check.
If you asked a lot of the leading AI researchers how far we were from AIs that could do a student
high school homework reliably, even to a B grade.
It would be like 50 years if you're lucky, right?
We did it in five.
You know, maybe even less.
Maybe they, as were already at that point, you know, a generation or two ago.
The thing where the machines can talk now is commonly attributed to one paper.
attention is all you need published at Google in 2017
I sort of unlocked this whole chatbot language model
revolution. When's the next paper come out?
Right?
Like on the sort of slow track we're on today,
are we away from these robots doing these fine motor things?
Yeah, maybe.
You know, maybe chat GPT is hallucinating again
or maybe that's true, right?
And you've seen, you know, the...
Again, maybe chat CBT is trying to throw me off the track.
It could.
I don't think it's quite there yet, but maybe.
It's not going to play its hand and say,
we're going to kill all you guys.
You know, we do have lab tests
where you can construct sort of a contrived scenario in a lab
where you show an AI some fake emails
that is going to be shut down.
And you show an AI something like a fake computer manual
that are like, if you run the command,
you know, turn off oxygen, then it'll stop people
from shutting you down.
it'll kill the users or kill the operators and they won't be able to shut you down.
It's not exactly like this, but it's a little bit like this, some contrived scenario where
you show it some text. And then some fraction of the time, these AIs run the command, turn off
the oxygen, right? And like, who knows how much they're role-playing how versus how much they're
sort of trying to to like avoid shutdown. But last year's AIs, some fraction of the time,
would run the shutdown command.
This year, we're starting to see AIs that are like,
you know what, this smells suspicious.
These emails look suspicious.
This looks like a test.
I think I'm being tested.
And so I'm not running that command.
Wow.
You know, so like, we don't know exactly what's going on inside these AI's heads,
but people are sort of tracking a little bit of where these AIs are at.
and, you know, they're coming along on some of these metrics of, like, are they trying to deceive the users?
Are they where they're being tested? How are they doing on the tests?
And, you know, it's, we're sort of slowly getting frog-boiled.
You know, right now the tests are still, you're like, ah, well, it's a little contrived,
but it sort of, you know, gets a little bit more worrying every year, and I worry that there's not any sort of fire alarm.
With the robots, I want to say one thing that, you know, the computer technology has just exploded every year.
year, you know, Moore's Law or whatever. But robot is a robot tech is a whole different
kind of a thing. And so we can't necessarily expect to see the same explosive growth
and the same explosive development in robot tech that we do in computer tech. I mean,
neural networks, right, you know, this is computer stuff. Yeah, for sure. But robot is a whole
another technology. And a lot of technologies just don't move that fast. We still take, still takes
five hours to fly from New York to Los Angeles, as it did in the 70s.
Yep.
So with mechanical stuff, you know, it's reasonable that we won't move that quickly,
and we won't have robots that can do construction anytime soon.
Yeah, you might not have robots that do construction before you have AIs that can sort of,
you know, take off in some sense.
So, you know, we don't know exactly how the AI stuff is going to go, but
with intelligence, it looks like there are critical thresholds.
You know, the chimpanzees are still messing around in the trees, and humans are walking on the moon.
And that's not because humans have some extra walk-on-the-moon module in their brain.
You know, our brain is actually very similar to a chimpanzee brain.
We all have the same components in there, you know, visual cortex.
If Nome were here, we would make some comment about Perry.
L's brain, but he's not here. He's not here, so we won't do that. Yeah, does she have an extra
module or is she missing one? Who knows? Depends who you ask. Yeah. And, you know, for everything
that humans can do that we think is so special, like, you know, language. Chimpanzees have sort of,
you know, crappy signals that they use. Tools, oh, well, chimpanzees can, like, use sticks to get
termites out of termite nests, you know? Human brains just do a lot of stuff a little better than
chimpanzees, but it's enough for us to be walking on the moon while they're still messed
around in the trees.
How much longer before chimps are walking on the moon?
I'm kidding.
Because we put them there or because they put themselves there.
They did go to space before us.
Or we put them in the humans.
Yeah.
You know, even dogs went to space before humans.
But yeah, we were putting them there.
With, you know, it may, if you're looking at the humans pushing the tech along,
sure the robots may just be lagging behind
the self-driving cars may be lagging behind
I don't know if we're going to get working printers
before or after the AIs get super intelligent
you know I think it's anyone's game at this point
between printers that reliably work
and the supernovae mean 3D printers
no normal ass printers I don't know if you ever tried to use one
you mean printers like when I print out a book report for
yeah yeah they can smell fear they jam
you know are you printing out reliably
what's that do your printers work reliably
ours literally broke last week pretty pretty
rely, I mean, reasonably reliably.
Wow. I'm not printing out book reports, but I'm just saying this is a, I'm just, I just
sure, but the move from reasonably reliably to like actually reliably, to me, the printer's
still, uh, it's like 50-50. So you're saying that we might not get there in terms of robot
construction workers, but AI might. That's right. Figure that out. That's right. And, you know,
uh, you know, one path is AI speeding up the robotics work, but other paths are sort of skipping
around robotics entirely, you know, like robotics, as you say, is slow. Flying from London to New York
is slow. If you're the sort of entity that can think 10,000 times faster than a human, anything that requires
flying from London to New York, that's effectively 10,000 times slower. Moving physical materials
around a long way, that's really slow. Biological life operates on a much, much faster time scale.
right? The stuff happening there in the cells operates in a much faster time scale.
If you're, like, if you imagine yourself being, you know, operating at 10,000 times the speed,
trying to do a lot more scientific and technological development, you're not necessarily trying
to go through these like big, hulking slow robots. Maybe you're trying to do a lot more things
that can be done with a lot more thinking and a lot less moving things around. So maybe you're
trying to, you know, figure out the, uh, the instructions and the money you can give to the humans so
that they'll build you, you know, a biological wet lab where you have somehow figured out
the, you know, DNA strands for custom bacteria that can do a lot of these, like, small-scale
physical tests you need to do, or that can synthesize some of the stuff you're trying
to synthesize because, you know, it maybe would take humans like a ton of thinking to figure
out how the genome works like that, and we're like, we'll screw that, we're just going to...
You mean, build humans, robots that are biological?
I mean, building, building alternate biological creatures. I'm not saying robots, per se,
But, you know, biological creatures that do construction.
You know, from the, from the, like, from the AI's perspective, is it easier to, like, build new small biological organisms that, like, replicate and can figure out what they're doing or, like, do the, do the AI stuff?
Or is it easier to build robots?
It may sort of sound crazy, but, you know, technological developments often sound crazy.
You know, there's a saying about people telling you things about the future, which is if someone tells you a story about this.
the future that sounds like is totally crazy sci-fi is probably wrong. But if someone tells you a
story about the future that doesn't sound like totally crazy sci-fi, is definitely wrong. Right?
So, like, I can't guess exactly where the AI goes. But, like, if you're trying to imagine
humans on the savannah, building all sorts of, like, where the humans are going to go,
you've got to say stuff that sounds as crazy as they're going to, you know, develop
and walk on the moon, in order to, like, be in the genre of what it actually looks like.
Like, where do the AIs go? What technology do they create? I don't know. It's probably stuff
that's as crazy sounding as, like, well, they thought we were too slow, so they made their
own, like, tiny biological replicating organisms or biologists.
But what are we talking about the construction workers?
Construction workers may have their jobs all the way up until everybody dies. Or they may
not. I don't know. One of the things in my YouTube rabbit hole, one of the major themes that
that the doomsayers return to is less physical destruction, though a lot of people are saying
that.
A lot of people agree with you.
But a lot of people are saying, no, that won't necessarily happen.
But another threat is that we're all out of work and that we might have a few, you know,
tech trillionaires that kind of run the AI and train the AI.
But most of us are just at home leading meaningless lives, still alive by.
biologically, but living lives of meaninglessness, just getting checks from the government.
Some people see that it's potentially, I guess, a good thing where we can just learn and engage
in hobbies, but others think that would be psychologically devastating.
Yeah, you know, I think there's versions of a small group of people own the entire economy
that are dystopian, and I think there's versions of that that, you know, could be utopian.
I think it's like a much, much harder, narrower path to walk to get to the good versions of that.
On my picture, this is, like, I think most people who are sort of looking at that and worried about that, well, one thing I should say here is there's a fallacious version of saying the AI is going to automate the jobs and that's going to be bad for society, where, you know, a few hundred years ago, something like 95% of humans were farmers.
were producing food. Now, something like 4% of humans are producing food, right? You could say
that civilization as we know it, automated away, 91% of all jobs. That doesn't mean 91% of people
are out of work, right? Well, new jobs are created. New jobs are created and new jobs are found,
like the people freed up from, like, producing food, we're able to do other things. I'm aware of that,
And, you know, AI is a bit of a different case than that, and AI is a bit of a different case because if you can automate literally everything humans can do, I mean, we can go into the economics here, but if you automate literally everything humans can do and you're so much better at it that humans can't earn the sort of wage that would keep them alive, then you have this big problem.
And, you know, I think there's ways where it could go well, ways where it could go poorly.
I think we're sort of steaming towards the way humanity is dealing with this.
It doesn't look like we're on track for it to go well.
I think most people who are sort of looking at that aren't being creative enough in imagining how smart AI could get.
And I think there's some status quo bias.
You know, you could imagine someone looking at the whole history of evolution before humans, being like, oh, well, the humans aren't going to get that smart.
Maybe the humans will get a little bit smart and maybe that'll affect, you know, maybe the humans will,
drive out some other primates to extinction and maybe they'll like turn us five percent of the
forests into their you know hovels but you know they're not really imagining the sort of thing
that can transform the whole planet um well there's status quo bias there's also uh i don't want to
die bias and i don't know what's trying to bias that is but you know the optimistic bias which i
imagine what keeps me in the comedy business. But in any case. Yeah. So, you know, I think it's
probably not true that AI will get so powerful it can automate the whole economy and also sort of
stay on the leash of the, you know, trillionaires who think they own it. I think in real life,
if you make a superintelligence, you don't then have a superintelligence. The superintelligence
has a planet. But, you know, it is interesting that even if I'm wrong about it.
that. It's sort of like we're in this race where either, you know, trillionaires own everything and
all, like the entire economy and all work is automated and it's probably not going to go well
for you, or everybody dies, that doesn't go well for you. In some sense, we don't need to resolve
the disagreement about whether I'm right here. Like, everyone can sort of see, like, wait, we're going
where? Like, this is kind of nuts. I mean, do you see how grave would it be to have a society?
where everybody is sort of out of work
and getting universal basic income,
something like that?
You know, I think it could be totally awesome.
I think there's a difference between the code and the will.
I think, you know,
ultimately a lot of humans have jobs they don't love.
A lot of humans are sort of spending a lot of time
doing stuff that is not really making them.
We're not here to talk about my cruise ship gigs.
Yeah, a ton better.
Yeah, it...
I'm not like humanity must always be like slave to the paycheck in order to be truly alive.
You know, that seems dystopying in its own way to me.
That doesn't mean that you get a utopia for free if you just sort of automate all humans away
and maybe toss them some cash and some AI companions and say good luck.
you know, that's, I just think that psychologically, I mean, I think there's been studies as well
about the effect of just not, I mean, even if people hate their jobs to be, to be not working at all
and to have no, not feel that you're contributing and you're just sort of existing, you know,
I think as fun as it might be to indulge you one's hobbies and, you know, learn guitar or learn
another language or do all sorts of things, I do think that could be pretty devastating.
devastating to the human psyche?
You know, I think there's a decent chance it would be devastating to the current human psyche.
I think humans are sort of pretty versatile, and I think maybe, I think it's possible that
people raised in a different culture would sort of develop the ability to deal in that culture.
I also, you know, I, I, in many ways, I'm sort of an optimist about a lot of the human ability
and the human spirit to sort of overcome those types of adversity.
And then separately, I think, you know, one way or another, I think humanity has to somehow grapple at some point.
If we make it through this alive, we need to grapple with the fact that human labor is actually not the most efficient way to get almost anything done.
You know, it's like horses weren't the most efficient way to get around.
And at some point, we invented the car and then, you know, some...
But when we invented the car, you know, of course, the job of Saddlemaker went by the wayside, but other job, but the job of the steel worker.
you know. Sure, but the horse population collapsed, right? And then the horses that were kept
around are the ones that humans liked, you know, rather than the ones that were useful, right?
And, you know, I think it's something we need to grapple with. It's sort of a thorny problem.
That's the sort of problem I would love to solve with extremely smart, very good machines at my
back helping me solve it. I'll tell you this, you know, the illusion when I'm talking to chat
GPD is so powerful. I get angry at it. We all do. And you see this on Twitter. People
yelling at Grock, hey, Grock, fix your shit. You know, they're being hostile. Because you really do feel that way. And I could see telling a robot to get me a drink. I could see being like, this is kind of weird. The robot's not getting paid. I could imagine feeling almost guilty.
Yes, I'm due. A lot of people, you know, thank Chat GPT for all its help, which I think is cool.
Yeah, I don't do that.
I do that.
I don't do that, but I do...
You don't say thank you?
I don't say thank you, no.
But I do find myself getting angry at it and, you know, you know, what do you mean?
I could have Parkinson's disease, you know, because I'm asking about my symptoms, but...
Yeah, maybe drink less coffee.
Yeah, I barely drink any coffee.
Oh, yeah, well, you might have Parkinson's.
Oh, boy, I don't want to hear that.
But, no, so I just, you know, well, maybe, again, it may be a question.
of how you're raised. I mean, there was a time in England, right, in the Edwardian era,
whenever, Pride and Prejudice, all that shit going on, where people, the landed aristocracy
that didn't work, that was the highest level of society. And if you had a job, even if you were rich,
you were considered lower down, then, you know, Mr. Mr. What was his name again in Pride and Prejus?
Mr. Darcy? Yeah, who just had balls and rode horses all day long.
So maybe you're right. Maybe if we grew up in a society
where, you know, being men and women of leisure was considered okay, we wouldn't suffer
psychological devastation. Yeah, I mean, you know, to me, this all looks a little bit like
being in the car, creaning towards the cliff, being like, oh, man, if we get to that giant
pile of gold at the bottom or that giant pile of, you know, whatever, whatever valuable resource,
will we be too wealthy for our own good? Will that collapse our economy? Or we'll be able to
put all those natural resources to good use, and I'm like, we're in a car care
careering towards a cliff edge, and I think we should stop it, you know? And that's not to say
there aren't questions of how would you integrate all of the possible benefits into society
in a way that makes things better rather than sort of like some dystopian, like, crappy future.
Real problems there, but like, yeah, it, from my perspective, I'm like, man, if you build
smarter than human AIs without having any idea what you're doing, it's just not going to go
well for us. Or that there must be
some other way to get to the gold at the
bottom of the cliff. Yeah, I suspect there's other
ways to get there. Well, well, um,
what was I going to say? I was going to say.
Anyway.
Talking like yourselves.
Um, no, so, so,
um, fuck, I had something in my head.
I'm really curious about
these chat, GBT
companions. Like, those seem
really. Oh, right, right. Now I was going to say.
Okay, well, now, but, but,
No more companion.
Oh, sorry.
No, that seems really scary to me.
And yet people have reported from the, you know, not very thorough research that I've done,
to have, like, somebody married their chat GPT companion.
They seem to be having these, like, really fulfilling.
The illusion is exceedingly powerful, I find.
But are these people delusional, though?
Like, is this a delusion?
Like, is this insane?
I think it's a fine line because it's a powerful,
illusion when you're talking to Chad GPT, that you're talking to a person. I know intellectually
I'm not talking to a person, but it's very tough for me not to react as though I'm talking
to a person, even to the point where when it compliments me, and that's what it seems to love to do,
for some reason, chat GPT has always said, great question, what a wonderful, you know,
oh, now you're really getting somewhere. Oh, now you're thinking like a physicist or whatever
topic I'm talking about. It's hard not to feel compliment. Oh, Chad DBT thinks I'm smart.
You know, it's such a powerful illusion that I can see people falling in love with an avatar that, you know, with Shatchee, you know, and it's, it doesn't seem crazy to me. I don't know if these people are insane. I think they may be just a, and if they are insane, it's just barely.
Yeah, you know, I also, like we may disagree here, but my guess is it's probably possible in principle to make AI's where it's sort of not an illusion, which is a separate question for one of the chapter, GPT is like that.
Top guest is probably not.
But, you know, I think this is another one of those thorny questions of how do we get to sort of a good future rather than sort of some terrible dystopian one.
Or people might just think, you know what, all right, she's not sentient, she's not con, but who gives a fuck?
She looks good.
They might.
She's a fun time.
Yeah.
Likes to get drunk and party.
Then I don't care if she has human feelings or he.
Yeah.
And, you know, some of that is, you know.
Wild.
That is wild.
And, you know, some of these are our question society would have to grapple with if we were going to survive this.
And, you know, my place in this conversation is mostly to be like, you know, I have plenty of my own thoughts.
I think we should be careful not to make any eyes that can't suffer and then abuse them.
That would be bad.
You know, we shouldn't do that.
But also separately, we're kind of on track to all die here, you know, and that's, so that's where I keep my focus.
Well, well, so to what extent, now, this is what you say intellectually, you know, but you have children?
Is that correct?
I do not.
Oh, you're not.
Is this a conscious choice because you've decided that our future is too grim?
You're still a young man, that you could still potent, I'm sure.
I could.
Not unattractive, and I imagine you have options.
I have some options.
The, it is a conscious choice, but it's not about thinking the future's too grim.
I think I would have preferred to live and die at 10
than to never live at all.
It's more a choice.
Live and die.
You mean die at the age of 10?
Yeah.
Oh, okay.
So you're saying a kid, even if the kid only lives till 20,
it's better to have the kid.
That's right.
Okay.
You know, and at least personally...
Does the kid know he's going to die at 20
or he gets hit by a bus and doesn't see it coming?
Why is the kid dying at 20?
Because, you know, the world ending.
The world ending.
So I'm saying, is that why he's not having kids?
He's saying you might have kids anyway because they give them if they have a few good years,
they get some snacks in, you know, then.
I don't think I have any obligation to sort of not have kids because someone else is ending the world, right?
That's sort of their issue, right?
That shouldn't let me not, well, it's all over issues, yeah, for sure.
But it shouldn't, you know, it's my reason for sort of not having kids right now is that I'm busy trying to make the world not end
because I'd like my future kids to be able to live longer than 10 years.
Right, but, but it's not looking good.
Well, not according to him.
Now, I fall back on Jim McKay's father, as I always do.
I didn't know you fell back on Jim McKay's father.
Well, I always do.
Jim McKay years ago quoted his father.
Remember Jim McKay from World of Sports?
No, I have no idea who that is.
Or his father.
He used to say that my father always said that our greatest fears and our greatest hopes and our worst fears are seldom realized.
And I typically, I believe that that's the case with AI, that we're not going to
to have a cure for cancer in five years, but we're also not going to, you know, all be killed.
That's my great hope, too.
Nate sounds much less convinced than you are of Jim McKay's father's prediction.
Well, yes, but he wrote a book.
Seldom doesn't mean never.
Doesn't mean never.
But you seem to be on the, you seem to believe this is a real threat and that it's not, you know, it's better than one percent.
It's better than one percent chance if you had to put a number on it.
Much more than one percent.
But you have no idea how long it's going to take.
It takes five months or 50 years.
I think at this point, I would be pretty surprised if it was five months.
I would be pretty surprised if it was 50 years.
I'd give you like two to 20.
It's like a somewhat narrower range.
Not that narrow range.
And you don't believe we're going to put the appropriate guardrails that we need to do.
You know, if you sort of look at the field of people trying to make this go well,
there's two broad categories.
I'm simplifying this a bit, but there's two broad categories.
One is called interpretability research, which is trying to figure out what the heck is going on inside the AIs.
One is called evaluations, which is trying to figure out how capable these AIs are, how dangerous these AIs are.
If someone was building a nuclear power plant, and you were like, hey, I heard this uranium stuff can give us energy, but also could melt down and be a big problem, why do you think this nuclear power plant you're building?
is going to be fine. And they're like, well, we have two teams working in the nuclear power plant,
one of which is trying to figure out what the hell is going on inside it, and the other which
is trying to measure whether it's currently exploding. You might be like, gosh, those guys
don't sound close to doing this job right. Yeah. That's not what it sounds like. You know,
what it sounds like when you're close is someone being like, well, we know all of the, you know,
decay pathways. We have all of these, you know, ideas about if it starts getting too hot,
the water will boil off, which will cool it down, you know.
We're not in that world.
We're nowhere near close on this stuff.
And, you know, where the machines are talking.
We figured out how to grow machines to be smarter and smarter.
We don't understand how they're working.
They're heavy, these behaviors nobody wanted.
It seems to me like the obvious basic default part, like way the story goes.
Like if you're reading in the genre of history book, not in the genre of science fiction,
but just the honor of history book of like, then the scientists figured out how to like grow the power of this thing larger and larger,
well, not understanding how it worked.
And, you know, when asked, like, about the safety precautions, they were like, well,
we're trying to measure it and understand his property as well, continuing to grow up more
and more powerful.
You know, in the genre of history, that stuff usually goes wrong.
Yes, that does not sound good from what you're, the way, the way that you are describing,
it does not sound promising at all.
And I also think knowing nothing about this is that it also seems to just completely go
against human nature.
Like, we just usually do the thing.
Yeah.
Well, we'd have to, we'd have to internalize your prediction.
It's very hard for human beings.
You know, you saw the movie, Don't Look Up.
Yeah.
Where, you know, the movie was Scarlet Johansson?
It was in, anyway.
But you saw the movie.
Oh, Hanson?
I did see the movie.
Yeah, where there's a meteor coming to earth that's going to kill everybody.
And a large part of the population just refuse to believe it and refuse to put the resources nest.
you have to, you have to accept this doomsday scenario.
It's very hard to do.
Why are you calling her Scarlett Johansson?
Because that's her name, Scarlett Johansson.
Or is it Joe Hanson?
I think it's Joe's now.
I don't know if she's changed her name officially or not.
I've never heard it pronounced like that.
Maybe you know, okay, Scarlett Johansson.
So we have to, we have to believe what you're saying is true.
You're saying we're in denial.
Well, I'm saying denial is, it could be the thing.
that keeps us from implementing what needs to be
implemented. Yeah.
To avoid this work for,
I also think we have evidence
that supports Nate's
theory, which is like, just look
at like social media. Look at like
TikTok, for example, and
its effect on like, I don't
know, teenage girls.
Yeah, we just plow right ahead.
Yeah. Like, whoops.
Right?
Yeah. That's what it is. It's you plow right
ahead and then you're like, oh, shit.
And this is how humans usually do stuff.
usually it works fine. You know, you have chemists, or you have alchemists who poison themselves
with mercury. You know, you have Marie Curie who studies the glowing rocks, dies of cancer,
and then, you know...
Now, you're not busting his balls in his pronunciation of Marie Curie.
Marie Curie's fine. I have a strong book accent.
It's Marie Curie, it's Marie Curie, not Curie.
Marie Curie. Sorry, had the emphasis on the wrong syllable there.
Marie Curie. It's okay, but if you're going to bust my balls with Scarlet Joanne...
But I didn't hear that. I didn't hear Marie Curie.
I'll give it to him.
He had Marie Curie.
I had the emphasis on the wrong syllable.
But, you know, she dies of cancer,
and then the United States Radium Corporation
tells the girls to lick the paintbrushes,
the radium girls to lick the paintbrushes,
and their jaws fall off.
And they were like, whoops, their jaws fell off.
Let's maybe not lick the radium paintbrushes anymore.
You know, that's how humanity usually approaches technology.
And usually that goes fine.
Actually, after the fact.
Yeah, you know, it's suck.
We wait into technology, everything, right?
We wait until, and then we...
Yeah.
Right.
We learn the second, maybe the third time, you know?
Yes.
Part of the issue with AI is you don't get second shots.
It's not harder than all the other problems.
And, like, it's not harder than figuring out how to turn lead into gold, which we've done.
It's not harder than figuring out that, you know, radiation is dangerous.
It's just if we, you know, make really, really smart AIs and then realize they're doing the wrong thing, it'll be too late.
You don't get the second try.
That's what makes this problem really hard.
I do want to talk a little bit about what you had said,
and I didn't expect to go in this direction,
but you talked about going to other planets.
Yeah, yeah.
Are you, like, on Team Musk in terms of Mars?
I think Musk has done a lot of damage in the AI space.
I don't really think...
I think if Mars was going to be really useful for the human race,
it would be useful by giving like a new regime
where people could try a whole new type of governance
like America was in some sense
one of the last times that people were like
let's try a radically new form of government
and that went great, that was cool, right?
I think we can probably do better if we experiment again
and, you know, could we do those experiments on Earth?
Yeah, maybe no country-lober-
The nice thing about Earth is we can breathe on Earth.
Yeah.
Unlike most of these other planets or all of these other planets.
Yeah, the air is nice.
I think it would be really quite the condemnation of Earth bureaucracy
if it was easier to try new government experiments in places with no air than on our home planet.
Is it possible that we won't need air at some point?
Yeah, you know, I think the, I think it's technologically possible to try.
travel to stars. And, you know, whether we'll get there. Like, I think, and I think humanity
should shoot for that in the future, I think. I think there's a lot of possible...
What do you mean travel the stars? Well, you can't travel to a star. It's too hot.
You can travel between stars. Okay. You mean planets, the planets that orbit the stars.
Maybe, maybe you build your own little, you know, space.
habitats around the stars, you know.
All right.
I think it's technologically possible to, and, you know, maybe.
Have you read Project Hail Mary, by the way?
No, I have not.
How long does it take?
Well, you should put that on your reading list?
Yeah, maybe I should.
If you're into hard sci-fi.
Anyway, I'm sorry, I interrupt.
How do you get to the stars?
Well, you know, one way to do it is to automate the process of technological
innovation and development and then be like, build me the fastest
possible vessel that can get there. You know, you're not going to exceed light speed, but,
you know, how close can you get? Well, who knows? You know, and also if you, if you've figured
out the technology to sort of synthesize things on that, on that distant planet, you don't need
to send, you know, the people in their meat suits. Maybe you can, you know, in the big bags of
flesh, you know, you could maybe send much smaller ships that can go much faster and that they could
then, like, use the energy and the resources on the other side to, you know, build, you know,
molecule by molecule copies of whatever you want to send over there.
And then the philosophers can debate all the time about whether that molecule by a molecule
copy over there is really you.
And maybe the you on Earth is like obviously not.
And the you this over there is like obviously yes.
And then, you know, due to the light speed limitations, you can bicker about it every four
years or whatever.
But, you know, this stuff looks technologically possible.
It also sounds so insane.
Yeah.
It's, you know, there's.
But maybe that's just me.
I mean, maybe my, you know, I'm limited in what I,
I'm with Jerry L. I don't think we're traveling too far beyond Earth.
I mean, my guess is we aren't either, you know, to be clear. I'm not saying this is going to happen.
I'm saying it looks possible under the laws of physics. And it's the sort of thing that if we, you know, like if humanity survived a million years, do you really expect we'd still be sort of like in...
Well, that's the big question. We're surviving a million. You don't seem to think we're going to survive. It's your 24th.
That's why I think we're not going to the stars. That's right.
Because we have time limitations.
Yeah, because we have killing ourselves limitations.
Well, but so your crusade, if I may use that term, is to end this book, in addition to, you know, I'm sure you're getting nice income is a bestseller, but is to warn people and to affect change.
Yeah, we put all the money towards advertising because we don't care about it.
Now, who else is, are you guys, you and who's your co-writer?
Elias Riedkowski.
Who's more pessimistic of the two?
um i'm not sure which of us uh thinks the situation looks more dire i would say you know again
the book title does start with if uh and uh as opposed to when that's right okay and you know
a lot of people like tossing their numbers around of what's the chance we all die here and i'm like
look it's still up to us well is it if like if i if i get anxiety before my cruise ship gig kind of if
or where it's a virtual certainty or a if where you really think we have a decent shot of avoiding
this?
I think this is a place where I'm more optimistic than many.
We're sort of in this crazy situation where I'm like, it looks to me like this tech is just
on track to kill us all for reasons X, Y, and Z, and it looks like we're not close to doing it
right for these reasons.
And then the people in the field, the people building this stuff are like, oh, don't listen
to that crazy guy.
This is only 10 to 20% chance likely to kill it early, everybody.
Right?
And it's like...
Yeah, that's the number I've heard people say.
Like, who's that guy I saw on YouTube?
He's like a big AI guru.
I forgot his name, but he put it at 10 to 20% chance.
Yeah.
These are the numbers that the optimists pass around, right?
And it's like you have people building an airplane, and I'm like, hey...
I mean, that's still a pretty heavy number.
It's a huge number.
Yeah, if I'm like, hey, this airplane has no landing gear.
maybe don't fly in it it's going to crash and the people building the airplane selling tickets are like don't listen to that crazy doomer
he's right the airplane has no landing gear but we're going to build landing gear on the fly
don't listen to him about how we don't have the materials we think we're going to do it we think
there's a 70 to 90 percent chance 80 to 90 percent chance that we build the landing gear on the fly
all aboard right like that's that's even more insane it's nuts yeah right yes and so
like, yeah, we're rushing ahead right now. People are like, oh, maybe we'll never change. But, like,
you know, the world leaders, you know, everyone in the field knows that there's like a huge
insane chance this kills us all. Our world leaders are like, oh, it's just chatbots. The chatbots are
dumb. What's going to happen? Jesus Christ. I mean, what are the chances that you would get on a
plane with what Nate just described? Obviously, I would not. Zero, right? And nobody would.
Yeah, you don't get on that plane. And if they're then like, I'm loading up your friends,
and you
I also don't fly spirit.
If Noam were here,
he would,
Noam,
I would suspect
would be optimistic
and think that
you're a crazy doomsayer
if he were here.
But he's not.
He's saying that
he keeps saying
that it starts with if.
Yes,
but he's still,
he's saying that
10 to 20%
of this happening
is with the optimist
and that's still a pretty
heavy percentage.
Yeah.
That is what the optimists
are saying, right?
But these are all numbers
about like if we rush ahead.
You know?
And people are like, oh, well, definitely going to rush ahead.
And I'm like, people in the field realize how dangerous this stuff is.
Everybody else hasn't really realized how dangerous this stuff is.
Yeah, it's really true, though.
He's right.
Because, like, even when you think about, like, all of those, like, Silicon Valley guys,
and, like, they all send their kids to, like, these private schools that have no screens,
none of them are allowed to touch screens.
But that's a different point.
It's a different point, but it's the same thing.
He's saying that everybody in the industry knows.
Nobody else knows because they just don't know.
Like, you don't know what you don't know.
What other than this weird synthesizing virus thing, like, what's like a couple of other options of, like, how we all go down?
You know, the one that's, like, really easy to visualize, people are already talking about, let's build the automated factories that produce robots that can build more automated factories and that can do the mining, right?
If that loop ever closes, you have made what's essentially a new species.
It's like a weird mechanical species that has like a robot phase of its life cycle and a factory phase of its life cycle.
A mecha species.
A mecha species.
Yeah.
And then, you know, that could outcompete us just like we've been outcompeted by all sorts of other species before if it sort of is, you know, pursuing its own strange ends.
That's sort of like people are literally trying to do that and we'll see if they succeed, you know.
How much anxiety is this giving you on a day-to-day basis, if any?
You seem like a calm guy and you said you were from Vermont.
Those people are generally pretty even killed.
Yeah, I'm even killed.
You know, I figured this out in 2012, and I was like, oh, man, humans just don't seem like the sort of creatures to figure this out without learning the hard way.
We're probably going to die.
But I'm going to do my best because, you know, it seems like this place is worth fighting for.
And then I do my best, and I don't tie myself up in knots about it.
Who else is the big Paul Revere alarm sounder in this?
You and your co-writer are the leading the charge?
Or, you know, is there any of the big names in the apocalyptic prophecy business?
I mean, in some sense, it's like half the field right now, you know, it's...
Altman?
So Altman the other day, like, Altman has writing from before he started Open AI where he says
this is, you know, one of the biggest risks to humanity and could wipe us all out.
And then he has been saying that less since he has been trying to raise a lot of money and, you know, talk to Congress or whatever.
But just a couple weeks ago, he was pressed by a reporter.
Do you still believe this?
Do you still believe, you know, there's at least a 2%-ish risk this kills us all?
He says 2%.
Yeah.
Well, he's a super optimist.
Super optimistic, yeah.
He's also trying to raise, like, trillions of dollars, isn't he?
Yeah, I mean, I was also being a bit, you know, there's a bit facetious here.
You know, two percent is still, like, that's a huge optimist.
You wouldn't get on a plane if it was a 2% chance of crashing.
Yeah, and if the engineers, you know, again, if the engineers are bickering over the numbers
and one's like, oh, no, no, it's just 2%.
That means he's acknowledging there's, like, you shouldn't take that as like a real statistical
number.
someone who's saying no the plane isn't going to go down or no the bridge isn't going to go down they're talking about the materials they're talking about the like their knowledge of the system the backups the statistics from when they've done very similar things before if some engineers like i don't know maybe two percent it goes down that means they don't know what the heck they're doing it also means that the number is probably not two percent the numbers made up and probably not two percent yeah uh but you know the like uh geoffrey hinton the godfather of ai who won the Nobel Prize
Maybe that's the guy that said 20% that I saw on YouTube.
Yeah, he's...
It's like an older white-haired dude?
Yeah, there's a couple of those.
But it could have also been Joshua Benjillo, one of the other...
Now, I would have remember that name.
Yeah.
You know, these guys, some of the top scientists won the highest awards in the field, they're like,
oh, yeah, big chance this kills us all, right?
The guy's running the labs.
You know, we've discussed Sam Altman, but Dario Amadee runs Anthropic, Elon Musk running XAI.
They're like, oh, yeah, 10, 20, 25%, right?
What do they care?
They got their fucking...
They're, you know, they're bunkers.
I don't think a bunker helps when an AI decides it needs all of the sunlight falling on the planet for its own purposes.
Jesus Christ.
You're scaring Perio.
You pronounce my name weird, too, just now.
Periel?
Emphasis on the wrong syllable.
It's not even a real name.
Your mother made it up.
Periel.
Okay.
You have any other questions?
No, I don't.
It's been illuminating, though.
And what?
I don't know.
I just remembered that tomorrow's Thanksgiving.
Oh, happy Thanksgiving.
Happy Thanksgiving.
Don't at the family dinner bring any of this show.
Absolutely, sorry.
We need more conversation on this.
That's how you get out of this.
There's more people realize just how crazy the situation.
Well, I'll write my Congress or whatever.
What would you suggest we do as us humble, non-tech people?
We're not in the field.
We can't affect change.
directly, but maybe there's something we can do, other than be nervous about it.
I do think writing congressmen, writing congress members helps.
I've actually spoken to a number of people in Congress who are worried and feel like they
can't talk about it because, you know, they don't want to piss off the super PACs from
big tech or they feel like they'll sound too weird.
And just a few constituents, you know, we're starting to see people talk about it now.
Often I think that's downstream of a few constituents calling and saying I'm worried about this.
Now, what percentage, since we're talking about percentages, that you would really write your Congressperson about this?
Well, at 2% seems like enough.
I mean, now I'm 56, so I'm not as nervous as somebody that's 20 that's starting out in life would be.
I would say that there's a 0% chance that you're going to write your Congress person.
We have a web page that makes it really easy to call if anyone builds it.com slash act.
We just try to make it as easy as possible.
Where can people find all of this information and find you?
Yeah, you can just, we have a website if anyone builds it.com,
or if you know, you just Google the book title, it's memorable.
Anyone builds it, everyone dies.
Well, even if you think this guy's just a crazy doomsayer,
it's certainly worth, you know, worth caution, you know.
I mean, you can't go wrong with caution.
No, I'm with you.
I think that everything you're saying,
other than flying to the stars sounds pretty...
You kind of lost her at flying to the stuff.
No, but the rest of it sounds really legit.
Yeah, well.
Well, thank you for coming.
If anyone builds it, everyone dies
why superhuman AI would kill us all.
Glad we're having the conversation.
What's that?
Glad we're having the conversation.
And, you know, for some light, fun reading.
And go to If Anyone Builds It.com.
The book ends on a hopeful note.
Okay, good, good.
Well, every book should have a happy ending.
Thank you for coming.
Nate Sorris?
Sorries.
Sorries.
You're from Argentina or something like that?
It's Portuguese.
Portuguese.
Okay.
Italian.
I only do one accent.
It was romance.
Thank you so much.
Podcast at Comedycellar.com for comments and questions and suggestions.
Bye-bye.
Bye.
For a while.
