Modern Wisdom - #650 - Geoffrey Miller - How Dangerous Is The Threat Of AI To Humanity?
Episode Date: July 6, 2023

Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author. Artificial Intelligence possesses the capability to process information thousands of times faster than humans. It's opened up massive possibilities. But it's also opened up huge debate about the safety of creating a machine which is more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI? Expect to learn the key risks that AI poses to humanity, the 3 biggest existential risks that will determine the future of civilisation, whether Large Language Models can actually become conscious simply by being more powerful, whether making an Artificial General Intelligence will be like creating a god or a demon, the influence that AI will have on the future of our lives and much more...

Sponsors: Get 10% discount on all Gymshark's products at https://bit.ly/sharkwisdom (use code: MW10) Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM) Get 15% discount on Craftd London's jewellery at https://craftd.com/modernwisdom (use code MW15)

Extra Stuff: Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello everybody, welcome back to the show. My guest today is Geoffrey Miller. He's a professor
of evolutionary psychology at the University of New Mexico, a researcher, and an author.
Artificial intelligence possesses the capability to process information thousands of times faster
than humans. It's opened up massive possibilities, but it's also opened up huge debate about the
safety of creating a machine which is more intelligent and powerful than we are.
Just how legitimate are the concerns about the future of AI?
Expect to learn the key risks that AI poses to humanity, the three biggest existential
risks that will determine the future of civilization, whether large language models can actually
become conscious simply by being more powerful, whether making an artificial general intelligence
will be like creating a god or a demon,
the influence that AI will have on the future of our lives,
and much more.
Don't forget, this Monday, Chris Bumstead,
four time Mr. Olympia Classic Physique Champion
is on Modern Wisdom, and the episode is beautiful,
and heartfelt, and awesome, and inspiring.
It's great, and you don't want to miss it, so press the subscribe button. Thank you. In other news, this episode is brought to you by
Gymshark. Gymshark Studio shorts are the best men's training shorts ever created. The ones in Dusty Maroon are what I'm actually wearing on the training vlog, if you saw me training with Mr. Bumstead, Mr. Olympia himself, earlier this week on his YouTube channel. Their Crest hoodie in light gray marl as well is what I wore to travel in, and their Geo Seamless t-shirt is what I was wearing to go from the hotel to the airport. So everything that I wear at the moment is pretty much Gymshark. The fits are phenomenal, they're super lightweight, everything about it I absolutely adore. Plus they've got a 30-day money-back guarantee with free returns, and they just updated my product page on the website with all of my brand new selections for men and women. So you can go and check out everything that I use and recommend for guys and girls by heading to bit.ly slash shark wisdom, and if you use the code MW10 at checkout, you will get 10% off everything site-wide. That's bit.ly slash shark wisdom and MW10 at checkout.
In other news, this episode is brought to you by Myprotein. They're the number one sports supplement company worldwide. They make my favorite product, and the best protein on the planet is Myprotein's Clear Whey. It is light and fruity, it looks and tastes like juice. It's got as much protein in as a normal protein shake, but it doesn't give you any digestive discomfort. It comes in really refreshing flavors because it is like juice. They can do things like apple, and pineapple, and orange and mango, which is my favorite. It is so good that you can sip it during a workout. And if you've been struggling with digestive discomfort, or just generally getting bored of your protein, this is the product that you should switch to. It changed my life when I started using it a couple of years ago, and really made me fall back in love with taking protein powder again. And I highly, highly recommend it. On top of that, they've got the best layered protein bars in the game. They've got protein cookies, they've got crisps with additional protein in, they've got accessories, clothing, shakers, bags, bottles, whatever it is that you're after, everything is available with worldwide shipping. Plus you can get up to 37% off everything site-wide, and all that you need to do is go to bit.ly slash protein wisdom and use the code MODERNWISDOM at checkout. That's b-i-t dot l-y slash protein wisdom and MODERNWISDOM at checkout.
And in final news, this episode is brought to you by Craftd London. If you see the episode that I do with Chris this Monday coming, I'm wearing Craftd. It is the only jewelry that I wear now. They have just nailed the styling for men, and it's really hard to find good men's jewelry that doesn't look too gaudy or look too weird. I absolutely love it. All of their necklaces, chains, pendants, bracelets, rings, and earrings are all very cool. They're in gold and silver, custom designs, they're sweatproof, waterproof, heatproof, and gym proof. And best of all, they come with a lifetime guarantee, so if you break your piece for any reason during the entire life of the product, they will send you a new one for free. Also, you can get 15% off everything site-wide from Craftd London with worldwide shipping by going to bit.ly slash cdwisdom and using the code MW15 at checkout. That's bit.ly slash letter C, letter D, wisdom, and MW15 at checkout.
But now, ladies and gentlemen, please welcome...
Geoffrey Miller. Why are you, as an evolutionary psychologist and researcher, talking about AI?
I just love getting into trouble and making trouble on Twitter.
No, actually, look, long story short: when I started my PhD program at Stanford way back in 1987, right, I was studying cognitive psychology
and cognitive science.
And pretty quickly, we got one of the leading neural networks
researchers at Stanford Psychology,
a guy named David Rumelhart, who worked very extensively with Geoffrey Hinton
and lots of other people.
And so I started in grad school doing a lot of work on neural networks
and machine learning and genetic algorithms and so forth.
And then I spent most of my postdoctoral years at University of Sussex,
also in the cognitive
science department, doing autonomous robots and sort of applying genetic algorithms to
evolve neural networks.
So long time ago, I was sort of an early adopter of machine learning, and then I got sidetracked
into this evolutionary psychology thing, you know, for about 30 years. But recently, since about 2016, I've become concerned and fascinated by rapid progress
in AI, particularly deep learning and the large language models.
And I sort of fell in with this gaggle of effective altruists, right, this movement, who are quite concerned about existential risks, risks to all of humanity, and a lot of them were very concerned about how AI could play out. So, the last few years, I've been reading a lot about this. Since last summer, I've been publishing a bunch of essays on the Effective Altruism forum and gotten pretty active on Twitter the last few months about AI x-risk.
So to the casual observer, they might go, who was this psychology dude, suddenly interested in AI?
Well, honestly, I've been fascinated by AI ever since I was a high schooler reading science fiction,
and ever since grad school learning about cognitive science. So that's my long story short that wasn't actually very short.
Is it right to say that most of the researchers in the existential risk world see AI as one of, if not the, premier risk that we're facing?
Yeah, absolutely. There's a great book by Toby Ord, O-R-D,
who's an Oxford moral philosopher, but he has worked a lot on existential risks
So he did a book called The Precipice, right, and he actually tries to quantify the different extinction risks that we face. Some of them are really, really low probability, but really hard to fight.
Like if there's a local gamma ray burst or a supernova, very, very low likelihood, very
hard to defend against. Other stuff like asteroids, which get a lot of attention, very,
very low likelihood. There's probably less than a one in a million chance we're going to get a dangerous asteroid in the next century. If it comes, that could be bad, but you know, there's stuff we can do about it.
Whereas Toby Ord estimates that the risk of extinction, human extinction through AI,
in this century is about one in six.
And I think that's sort of in line with a lot of the estimates that many experts
give.
The other big risks are basically nuclear war, which is still an issue, right, after,
you know, 70 years of thermonuclear weapons being around.
So nuclear war, possibly genetically engineered bio weapons, could be really bad.
It would be like COVID on steroids that could wipe out a lot of people.
But the other, the other one is AI.
So those seem to be the big three: AI, nukes, and supergerms.
Yeah, it's an interesting one man.
I remember that sheet, that chart that he has, it's burned into my memory.
In my reading list that I've sent to a million people, it's one of the five books that I think everybody should read.
If you want a primer on x-risk, The Precipice is the place to start.
And yeah, one in six chance that humanity goes extinct
within the next 100 years due to AI.
And I think that the word "the precipice" is about a squeezed bottleneck, a treacherous path beyond which there could be this sort of glorious future. But right now there is a very particular, very important forcing function. We don't know how long it could continue, but we're definitely not far off it at the moment.
Yeah, the way I visualize it, I like the precipice metaphor that it's this narrow path,
sort of up a mountain. But if you've ever seen the movie Free Solo about what's his name,
Alex Honnold, climbing up Half Dome, right? I almost think of it as like we've been doing a walk
through the woods as humans for like the last 200,000 years and the level of risk we face is relatively low.
And suddenly we're climbing up Half Dome without ropes or pitons or any safety gear.
And if we can just kind of make it to the top,
then I think we'll be at a relatively lower risk state,
hopefully in several decades.
But a lot of people in the effective altruism movement think this is what they call a key century, a time of particularly high, elevated risk
when humanity has to be extra, extra careful
and smart and risk averse
and very, very self-aware about what we're doing.
Okay, so lay out the landscape of AI risk.
Why is it something that we should be concerned about?
How have we got ourselves here? What's changed over time?
I think the basic intuition that lots of people are developing now is that there are gradations of intelligence in the natural world.
Squirrels are smarter than squid.
Monkeys are smarter than squirrels. We're smarter than monkeys. But we are not the ultimate level of intelligence.
AI can easily surpass human reasoning and planning abilities in lots of ways. And in fact, right, everybody on their smartphone already has apps that are better at doing certain narrow things than we ever could. Like Google Maps is better at figuring out where to go than I would be with a paper map.
Face recognition, right, is way better than I am at face recognition. I've got a little bit of prosopagnosia, so recognizing people, you know, like at the Human Behavior and Evolution Society conference we were both at two weeks ago, is a little challenging for me.
Computer vision systems have gotten very good at it.
So one issue is we are approaching a point
where AI systems are getting more and more general purpose
and smarter and smarter across many, many different domains.
And that represents a kind of major evolutionary transition in intelligence, where we could
be outclassed.
A second thing that I worry a lot more about than some people seem to, is just the raw speed
issue, right?
If anybody's played around with chat GPT, right? And you've asked it to write
a little essay or an outline or any kind of language. You know, oh my God, it is so fast,
is a way faster at writing material than humans could be. And it's not even particularly optimized
to be fast in that way. So what we're facing is potentially AI systems that are smarter than any human
at doing a wide range of things,
but also that are potentially a hundred, maybe a thousand times faster than humans.
And the way I like to think about that
is you might be familiar with the speedster superheroes,
like the Flash, right, or Quicksilver, where once they do their kind of super speed and they're running around, it's as if everybody else is just frozen in place. I think when we confront very fast, powerful general-purpose AI systems, that's the kind of situation we're going to be in. We're going to be outclassed,
not just in terms of intelligence, but also reaction speed. You could potentially have an AI
trading bot that's trading equities, or crypto, or whatever, just way, way faster than any human
can follow. You could have military AI applications that can kind of simulate scenarios in terms
of tactical applications or firefights where they can kind of run through like every possible
way that they could engage with an enemy and kind of spin out these simulations and
then just completely outclass an opponent
in terms of tactics and strategy.
So I think these two things, right, AIs being smarter, potentially, and also extremely fast.
It takes a while for the full implications of that
to sink in, but I think they're kind of worrying.
Well, let's think about a JCB digger, right?
It's larger and stronger than a human,
but it is a tool that is under our command.
So why should we be concerned about an AI,
which is faster and smarter than us?
It just makes us do things quicker and better
than we would have known how to do it, surely.
Yeah, if we're just outsourcing, you know, suggestions to the AI, like with Google Maps, we're saying, how do I get from A to B, right? And Google Maps is not actually taking control of our Tesla. It's not driving for us. We're not outsourcing our agency, right?
That's relatively safe, right? The AI could still manipulate us in lots of ways.
It could nudge our decisions in certain directions for its own ends or for the interest of somebody
else, like the people who developed the AI. But once you give it agency, once you outsource
decision-making powers, that's where I think the real danger starts to come in.
So for example, there's a big difference between like a military AI that has the ability to suggest certain courses of action, and one that can actually decide, okay, it's time to launch a bunch of F-35s to go bomb this country.
The trouble is, because of the huge speed advantage of AI, there will be very, very strong incentives to give it agency, right, to make it able to make decisions in a sort of perception-decision-action loop that's like way faster than a human could do.
There'll be commercial incentives to do that if you're in any kind of competitive environment
like finance or military applications or even market research so that you can do
optimization of, you know, anything you want to optimize better by outsourcing some agency today.
That was a little vague, but if that made sense, let me know.
It does, but as of yet you haven't talked about artificial general intelligence, or some Terminator, Arnold Schwarzenegger apocalypse, or it becoming self-aware, recursive self-improvement, machine extrapolated volition. I haven't heard any of that. So presumably, in between now and a concern that I think was very prevalent maybe a decade ago, around when Nick Bostrom's Superintelligence book came out, which was that once you reach the singularity there's going to be all of these problems, it seems like there's some pretty dangerous gradations between now and there.
So what's happened in the world of AI risk concern over the last decade in terms of how that
was a problem, it's changed, and then what are the gradations between now and where we
could get to in future?
Yeah, so when Nick Bostrom, an Oxford philosopher, published Superintelligence, which is a great
book, another must read, it's a little technical, it's challenging.
Audible it.
Audible it, don't try and read it.
Okay, yeah. That was 2013.
Bostrom emphasized very heavily the notion of like a self-improving takeoff where like
an AI could get pretty smart and then it starts optimizing its own software and potentially
hardware and then you get a kind of explosion of capabilities that could potentially happen very, very quickly.
Like the AI could sort of go from human level intelligence to superintelligence that's smarter
than all humans who have ever lived in a matter of perhaps days or weeks or months.
Now that's a super scary scenario.
I think it's legit to worry about it. I take
a slightly different view, and I think a lot of people are also taking a different view, that even
fairly narrow AI, that's not even as smart as humans in many ways, could still be extremely
dangerous. For example, if you have a narrow AI that is very, very good at designing bio weapons, right,
at doing kind of gain of function research through computer simulating, here's how a virus
could spread through the world. You could have like a bio weapon narrow AI that actually
invents an extremely dangerous pandemic, right. And you might think nobody's saying would want to do that.
Well, there are lots of insane people in the world. There are lots of nihilists and terrorists
and, you know, de-grotherers and people who don't like human civilization who might be motivated
to use that kind of narrow AI. You can also have narrow AI applications that could be very
politically destabilizing. You know, you could have extremely good video deep fake technology that makes it look
as if Vladimir Putin or Xi Jinping or Joe Biden has declared war on some other major nation
state and that provokes an extremely panicked military response that could lead to like global
thermonuclear war. And we're not really very ready for that kind of thing. And
that stuff could happen, you know, within a couple of years, like people
debate when will we get artificial general intelligence. It could be five
years, 10 years, 50 years,
who knows.
But these narrow AI applications are coming down the pike very, very quickly, and they
could be pretty destabilizing.
What has happened with the development of neural networks and large language models that maybe caught a lot of AI x-risk, safety, and alignment researchers unawares, and how has that changed the viewpoint of concerns around AI?
What we're basically seeing is very rapid advances in hardware memory size and speed.
So when I was doing neural network research back in the late 80s,
we were playing around with networks that had maybe a few dozen,
maybe a few hundred units, and maybe a few hundred or a few thousand parameters,
meaning the weights in the network that kind of connect the little simulated neurons together.
So a few thousand parameters back then.
Now it's like trillions of parameters
in the large language models,
because the hardware is just better, right?
We can make GPU chips that are much, much more powerful.
And once you get to a certain size of network
with enough hidden layers learning,
using the deep learning methods, you get kind of
powerful emergent properties popping out that nobody was really quite prepared for. And that's where we've been blindsided by ChatGPT.
What, like?
Just like, people kind of thought, well, if you make these large language models kind of predict the next language token, and you feed them a lot of information about the whole content of the internet, then maybe they might be able to answer some basic questions, right, kind of like a Google search. But lo and behold, they're
Way more powerful than people expected.
Like you can ask ChatGPT to do all kinds of things that it really wasn't designed to do, that people didn't kind of grow it to do, like write an outline for a screenplay, or figure out how to write a summary of AI existential risks. GPT is pretty good at that, in a kind of self-recursive way.
It can do math, it can do computer programming,
it can do an awful lot of capabilities that people really,
they thought like maybe that's 10 or 20 years away,
and boom, here it is in 2023,
available to everybody, available to the 100 million
users of ChatGPT. So that's where the shock has come, that once you get to a multi-trillion
parameter large language model, man, it's looking pretty close to artificial general intelligence. It's not quite there yet, and it's still fallible, and there's still a lot it can't do.
But, you know, if you had a time machine,
and you took ChatGPT back in a laptop
to 10 years ago in 2013, and you ask people,
does this look like artificial general intelligence?
They go, holy moly, yeah, that's pretty effing close.
That's way more advanced than we expected it would be by 2023.
So that's the concern.
Like, the fact that you can be so surprised by the pace of development means maybe the next step, whatever that is, is going to blindside us as much or even more.
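(As an aside on the parameter counts Miller mentions: "parameters" just means the trainable weights and biases connecting the simulated neurons. A minimal sketch in plain Python, with made-up layer sizes that are purely illustrative and not any real model's, shows why the count explodes as networks get wider and deeper.)

```python
# Rough illustration of what "parameters" means here: the weights (and biases)
# connecting simulated neurons between layers of a fully connected network.
# Layer sizes below are made up for illustration, not any real model's.

def count_params(layer_sizes):
    """Count weights + biases of a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out  # weight matrix between the two layers
        total += n_out         # bias vector for the output layer
    return total

# A late-1980s-scale toy network: a few hundred units, a few thousand weights.
print(count_params([100, 50, 10]))                      # 5,560 parameters

# Scaling width and depth up pushes the count toward the millions, billions,
# and eventually the trillions range Miller is gesturing at.
print(count_params([10_000, 10_000, 10_000, 10_000]))   # ~300 million
```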
And this is the same group of people who said it's one in six
as the chance that humanity goes extinct due to AI
within the next century.
I don't think, by anybody's standard, that would be considered a conservative estimate. It sounds crazy, you know, you're rolling a die and on one side is a button that destroys humanity. So, you know, again, that's quite a stark claim, I guess.
Yeah, one in six is literally Russian roulette.
So if you're playing Russian roulette and the head is not like a single individual's head, but your whole species, that's what you're doing.
So what is the definition of AGI, and why doesn't ChatGPT breach it yet?
AGI means artificial general intelligence, and the way that the research groups themselves,
like DeepMind and OpenAI and Anthropic, the way they define it, is basically an AI system
that can do just about everything a human can do in terms of cognition and perception,
to about the standard that a professional would do it in a job.
Right? So, with an AGI, you should be able to train it,
to be as good at medical diagnosis as a doctor,
or as good at teaching a class as I am as a professor,
as good at playing chess, as a chess
grandmaster, as good at trading equities as a good Wall Street trader, and everything
else, right? And the real power comes from the fact that once you have an AGI that can
do all of that, that's fairly general purpose, you can copy it, right? You can copy it and you can make one copy of it do one human job and another copy do another human job. And then they could potentially even trade information about how you do this. So, you know, the explicit goal of OpenAI and its CEO, Sam Altman, is to create AGI as fast as possible and make it so it can automate most human jobs as fast as possible.
So that raises lots of issues about unemployment. But it also raises lots of issues about existential risk because among the
many jobs that an AGI could learn to do would be like be a really good terrorist and making
bombs, be a really good military strategist in terms of overcoming Russia or China or
America, be a really good spy, be a really good propagandist who can shape the outcome
of elections, etc.
So the point of AGI is it should be able to do anything people can do about as well as
people can do it.
And that includes not just all the good stuff, right?
Like being a good veterinarian who fixes dogs or being a good nurse who takes care
of people with psychiatric disorders. It also means all the bad stuff that bad people can do.
What do you think of Sam Altman? I think he's a brilliant guy and I think he has
what to him is a compelling vision of the future. And I
think he genuinely believes that developing AGI will lead to a kind of awesome human utopia.
He talks as if he understands the extinction risks. But I don't, I think there's some deep,
deep cognitive dissonance, right?
Because on the one hand, if he really took
the extinction risks seriously, I think,
he would shut down open AI,
they would no longer do research.
They would say, this is radioactive, this is toxic,
this is crazy, this is Russian roulette,
let's not do this.
And he's not doing that, he's not shutting it down.
He's sort of pushing more or less full steam ahead
and he's making some little noises
about we need to be careful and we need to be,
you know, we need to get good regulation blah, blah, blah.
But I don't think it's really a heartfelt appreciation
of the kind of Toby Ord point that, dude, you're talking about at least a one in six risk of human extinction this century.
To most people in the world, that sounds absolutely insane and reckless.
And it's something we did not consent to.
What do you say to the people that would push back and say, well, look, Sam Altman, at
least he's fighting for the good guys.
He's somebody that's from the West, the choice is between us getting there first, or Russia getting there first, or China getting there first.
This is a winner takes not only all, but takes the world and takes the world for the rest
of time.
And then you've got potentials of bad social lock-in or whatever it's called that William MacAskill came up with.
He's on the side of the good guys. If the good guys don't win, the bad guys win, therefore we need to
make sure that the good guys win. I think it's worth asking, okay, like, I'm American, lots of Americans think we are automatically the good guys by definition.
Anything America does is good, anything that a rival does is bad, and we must win at all costs.
One thing I would say is, you know, if it's an arms race into a concrete wall, it's not really a race you want to engage in, right?
It's kind of like a game of chicken,
but where nobody can swerve.
I think if the likely outcome of the arms race
is extinction, then everybody who's contemplating the arms race
should try their best to avoid getting caught up in it.
How are you going to coordinate this? Are you going to coordinate China and Russia to get on board with your very laudable, altruistic desire to get Sam Altman to just go to the Canary Islands for the rest of his life without a laptop or an internet connection?
I think the traditional approach to this is what's
called the AI governance model, where you get a bunch of policy
wonks and Washington DC insiders and people in the UK
government and Whitehall thinking really hard about how
to regulate AI.
So it's benign and how to reduce the arms race dynamics.
And I think that's fine and that's a good worthy thing to do.
Unfortunately, I think it's way too slow and it's way too easy for the AI industry to
sort of capture what happens in their own interests.
And honestly, the politicians who are involved in this simply don't understand
AI well enough, I think, to have a very sensible way of approaching this issue. A second
strategy, which is something I've been advocating recently, is a little more informal, a little
more grassroots, and kind of bottom up, which is I think there's certain industries
where it's okay to just look at them and go, that's bad, that's evil, that's reckless,
we should stigmatize that industry.
We should stigmatize everybody who works in that industry, who supplies anything necessary
to that industry, who finances that industry. It's just a bad industry and we want to try to slow it down. We have done that with many, many industries
in the past. Crypto has been handicapped very, very successfully by adverse PR and moral
stigmatization campaigns by politicians and central bankers and so forth.
The arms industry has been heavily stigmatized.
There's lots of ethical investment criteria for investors
that say things like, hey, let's not invest
in alcohol, tobacco, gambling, arms trading,
human trafficking, et cetera.
And I think at the moment, there's quite a bit of popular opposition to the AI industry
and concerned about these risks.
And I think we should kind of normalize people being able to say, I didn't vote for this.
I don't support this.
I don't want these extinction risks imposed on me and my kids.
And the people who are doing it should be stigmatized.
I suppose one of the problems is that with weapons, or crypto, or gain-of-function research coming out of a BSL-2 lab or something that's not sufficiently secure, people are able to observe, experience and imagine the problems quite easily. At the moment, all that they've found is a really cool way to get it to write short essays or tell them jokes for their best man's speech. So as of yet, the experience of AI, especially neural nets and large language models, has not been what you are concerned about moving forward, which means, where is the incentive for people to get on board with this?
Yeah, I think there's an important role here for imagination and fiction and scenario building.
So remember when people were very worried about global thermonuclear war, like when I was
in college in the 80s, this is all we talked about and all we worried about is like how
long is it going to be till the US and the Soviet Union have a massive exchange of ICBMs and we all die?
We could visualize that stuff very clearly because like Hollywood movies and TV series did a pretty good job of
showing what that would look like. None of us had personal experience of Hiroshima or Nagasaki, right? We kind of read the accounts, and it was horrible, but
When you're trying to imagine potential harms from new technology,
all you really have to go on is sort of what experts say the risks are, and then the way that screenwriters and directors visualize those risks in a way that the public can understand.
I don't disagree. The problem you have is that you're not only trying...
No one has ever said, look at that atomic bomb, it gives me so much pleasure and it's so cool.
No one has ever said the same thing about an engineered pandemic. But when it comes to chat GPT
and its iterations downstream, people are finding benefits in it in the now.
And they're being told, you need to let go of this thing which you see as a positive, because of something which you can't foresee as a negative.
Yeah, and this is where I think it's important for people to kind of tap into their kind of, I think, increasing sophistication that there can be very, very seductive technologies that can have very toxic kind of social side effects. Even the discussion about social media itself, I think, has moved from, oh wow, this is cool, we can connect with our grandparents and we can find people to date, to, oh my God, is this creating like mass mental illness in Gen Z, in a way that we really need to rethink how TikTok and Instagram and so forth operate.
So I think people have the ability to understand some new technology can be very seductive.
It can look great.
It's new and shiny, but you know there might be kind of a viper hiding inside it that
could be pretty poisonous. The other thing I would add is, you know, when I was teaching online courses for the Chinese University of Hong Kong, Shenzhen, a couple of years ago, we talked quite a bit about AI extinction risks. And this is a bunch of Chinese undergrads in Shenzhen.
Very smart.
They understand existential risk.
They're tuned in to nuclear war.
They understood bio weapons because the COVID pandemic
was raging.
And they think quite a bit about AI, right?
Because China had said, we want to be the leading AI superpower by 2030. That was their
plan a few years ago. And these Chinese students were like perfectly willing to understand
the risks and to stigmatize AI, right? They had a kind of moral imperative and concern for the future of humanity that's just as strong as what my American undergrads have.
So I think we have this stereotype in the West
that other countries like China
are sort of full of unthinking automatons
who've been programmed
by their government and their authoritarian regime
so that they will just do whatever, you know,
Xi Jinping says.
But I think actually there's quite a bit more room
for a kind of global grassroots opposition to AI,
not just in America and Britain, but also in other countries
that are kind of key, you know, potential arms race players.
Well, isn't China a country in which they've refused to make public a lot of the neural nets and the developments in AI? Unlike in America, where citizens can just go in and play around with it, China said, well, we can't be sure that this thing isn't going to start telling everyone about Tiananmen Square. So because we don't have control over it, we're not going to let it loose with the public, which has to be curtailing the development, because it's not able to do any of the learning that it would have done had it been iterated over 1.25 billion users, all of whom are asking it how to make a cake this evening.
Yeah, I think it's very interesting, because insofar as the Chinese government really wants AI mostly for purposes of social control and social stability and censorship and reducing crime and reducing terrorism, right, they want a much narrower range of AI applications than your typical American AI company wants, right?
The Chinese government, I think, is not really interested in the kind of techno-utopian vision
of the singularity and transhumanism and let's all upload our minds into the matrix, the
way that you see in a lot of the Bay Area AI enthusiasts.
They're not really into that.
They just want China to be stable and prosperous.
So here again, what's happening, I think, is the American AI companies are at the cutting
edge, right? They're far and away more advanced as far as we know than anything happening in
China, much less Russia or Brazil or the UK. And they're just kind of trying to play catch-up. What's happening is, like, we're far out in front in terms of the arms race. If I was a Chinese policy expert, I would be freaking out. I'd be wanting to play catch-up. I'd be very concerned about America having a kind of AI hegemony. And it would look like a threat to me that I have to respond to.
So we are setting the pace in terms of the arms race.
If we slow down, right, China might go, phew, thank God, like, we don't have the talent or the resources to play this arms race. Okay, if America is relaxing, we can relax too. We can focus on other issues like, you know, how do you get Chinese people to have kids, stuff like that, right?
Yeah, AI matchmaking, right?
That might be a key application for them. So I think here again, Americans have to look in the mirror and go, to what extent are we really setting the pace with the arms race? Are we forcing other countries to try to catch up?
That's a really good point.
Maybe if we slow down, we can all, you know, take a step back, take a breath, pause, think,
what exactly are we doing?
Should we be playing this Russian roulette?
Yeah, that's a very good point that I hadn't considered.
I suppose as well that the wake of what is on the internet and what is available, what
is public knowledge about what's happening at all of these different neural net companies
will be slipstreamed in some regard by foreign actors.
The source was, I think, a long time ago, open-sourced, and then it's no longer open-sourced, which means that you can't fully see what's going on inside in terms of workings, but I'm sure that you can determine a good amount of stuff, or at least a non-insignificant amount of stuff, from that.
One of the phrases that we haven't used yet that would have been used an awful lot 10 years ago was the alignment problem. So how relevant is discussing the alignment problem now? And actually, before you can get to that, does it look like, if you were to put your money on the table, the front-runner contender for creating AGI would be neural networks? Can a large language model become sentient and be the AGI thing?
I don't know, on this point, right?
There's a big variety of opinions.
And on the one hand, people like my friend
Gary Marcus at New York University argue that large language models just based on deep
learning and just based on neural networks cannot do many of the key cognitive tasks that
humans do. So can't reach AGI. And he's got various arguments for that and some pretty
good recent books on AI.
On the other hand, you have people saying, well, people keep underestimating deep learning
and keep underestimating what it can do. And as far as we can see, there aren't really very many
hard constraints, you know, against it. And moreover, the human brain looks an awful lot like a large neural network. We don't
exactly learn using deep learning methods as GPT does, but we're basically just a bunch
of neurons with connections.
So, what do you think? Can we just layer transistor upon transistor upon reinforcement, and will we arrive at something that approaches sentience and/or AGI?
Well, most of the work I did in neural networks back in the day was based on the assumption that
no, you can't just train a big random kind of incoherent,
formless blank slate neural network to do anything.
It needs a bunch of structure.
It needs an architecture.
It might even need an evolved architecture where you have to try a bunch of different ways
of wiring up large networks before you can get something that's really smart. Now I'm more humble. I don't know.
You know, I was excited about that because I studied a lot of evolutionary biology and animal
behavior and it looked like simple nervous systems often had quite a bit of architecture to them.
You know, you look at a bumblebee or an ant nervous system and it's highly architected. It's not just a glob of neurons.
There's like ganglia doing different things.
There's perceptual clumps of nerve cells doing particular things.
But at this point, I don't know.
Like, it wouldn't surprise me that much if, you know, GPT-8 or -9 proved to be able to do pretty much everything that humans can do without having a whole bunch of intrinsic structure.
On the other hand, Gary Marcus might be right.
That you actually need a radically different approach.
I'm not sure it matters all that much, though, in terms of the AI safety issues. Because one way or another, the AI companies will figure out how to do AGI.
If we give them the money and the talent and the resources and the social support, they
will figure it out sooner or later.
Why haven't we talked about the alignment problem?
Why is that a term that I'm seeing less of as well?
Is it just that there's so much AI progression
that it's not a problem?
Or is there a new issue with regards
to this that neural nets have created?
People are still talking about AI alignment. It's just there's so much more buzz right now about the amazing new capabilities, not just of the large language models, but also of Midjourney and DALL-E and the other generative AI systems for creating images and videos, and the ones that can create music and amazing audio. So people are excited about that. People are extra alarmed about the AI risks. And everybody interested in so-called AI alignment is still trundling along, working on it, right? I'm still writing essays about AI alignment. It's just gotten slightly overwhelmed by all the other, you know, shiny new issues.
What is AI alignment, for the people who aren't initiated?
The basic notion is how do you get an AI system
to be, quote, aligned with human values
and preferences and goals?
And it's kind of like the so-called principal-agent problem in companies. Like if you have a bunch of investors, right, and they are supporting a company, and you create like a board of directors, and the board of directors gives power to a CEO, how do you make sure the interests and decision-making of the CEO are aligned with those of the board of directors and, in turn, with the shareholders? That's a kind of
corporate alignment problem. A political alignment problem would be how do you elect a president
that actually serves the interests of the voters instead of their own interests? In terms of AI,
it would be how do you create an AI system that actually respects what the human users want it to do or what they really want
to do, even if they can't quite articulate everything about what they wanted to do.
There's lots and lots of myths and legends going back thousands of years about the pitfalls of getting some power, like some genie that pops out of a lamp and says, what wishes would you like? And you wish that everything you touch turns into gold, right, and you give it wishes, and then it interprets what you want sort of over-literally, in a way that's absolutely disastrous.
So AI alignment in principle is about making sure the AI is kind of doing what you want
and even in ways that you can't necessarily articulate
that fit with a whole background of human common sense
and moral norms, that we might not even be able
to articulate to the AI system,
but that we would still want it to act as if it understands.
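(To make the over-literal-genie point concrete in code: a toy sketch of proxy-objective misspecification, where an optimizer maximizes exactly what was asked for rather than what was meant. The actions, scores, and the "engagement" proxy are entirely hypothetical, invented for illustration; this is not any real system's objective.)

```python
# Toy illustration of misalignment via a mis-specified proxy objective:
# the optimizer maximizes the stated metric, not the intent behind it.
# All actions and scores are made up for illustration.

actions = {
    "write helpful summary":       {"engagement": 3,  "harm": 0},
    "write outrage-bait headline": {"engagement": 9,  "harm": 7},
    "fabricate shocking claim":    {"engagement": 10, "harm": 10},
}

def proxy_objective(outcome):
    # What we literally asked for: "maximize engagement".
    return outcome["engagement"]

def intended_objective(outcome):
    # What we actually wanted: engagement, but never at the cost of harm.
    return outcome["engagement"] - 100 * outcome["harm"]

best_by_proxy = max(actions, key=lambda a: proxy_objective(actions[a]))
best_by_intent = max(actions, key=lambda a: intended_objective(actions[a]))

print(best_by_proxy)   # "fabricate shocking claim" -> the literal wish, disastrous
print(best_by_intent)  # "write helpful summary"    -> what we actually meant
```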
Yes, however, different individual humans have got different values, and conflicts of interest happen a lot. So which human values should AIs be aligned with?
Yeah, so this is exactly the issue that I've been writing about in my effective altruism
forum essays for the last year or so.
I've pointed out, like you can say, we want the AI to just respect the views of its end
user, the consumer who actually buys it.
But what if the end user is a terrorist?
What if the end user is a political propagandist?
What if the end user is in a hostile nation state?
What if they're a bad actor?
Do you really want the AI to do what they want it to do?
And second, if you try to aggregate
like the collective will of humanity, where you go,
well, the AGI should sort of do what people
in general would want it to do if it could summarize
all of our preferences.
Well, then you get into interesting issues like, well,
men and women have like different interests.
So should the AI go with the 50% of people who are male, the 50% who are
female or whatever. People have different political views, right? And in one
essay I pointed out 80% of humans are still involved in organized religion,
right? And yet the AI industry is dominated by secular atheists who largely
have contempt for religion.
So if you have an AI that's trying to be aligned with people in general, and people in
general have religious values that are being completely ignored and dismissed and mocked
by the AI industry itself, right?
How do you actually get alignment with people?
They don't really mean alignment with what people believe.
They kind of sort of mean, like, we want the AI to be aligned with good liberal, secular, humanist, democratic, Bay Area tech-bro values.
That's really the bottom line.
That's what AI alignment actually boils down to in practice.
Yes. Does it split the difference? I remember, not to spoil the end of Superintelligence, but the good guy that comes in at the end is machine extrapolated volition, which is, in a world in which we can't be sure that we're going to give a program an instruction and that program isn't going to take the instruction and turn us all into paper clips, or kill us so it can make a cup of coffee, telling the machine: do what you think we would have asked you to do, had we had sufficient wisdom to ask you to do it. That is roughly machine extrapolated
volition. But even that, modeling human preferences needs to be done based on a group of humans,
and which preferences do you mean? And when preferences come into conflict, which ones
win? And if it's 50, 50 between men and women, and you split the difference between the two,
like, is that optimal? Is that actually optimal? Or is it more optimal to swing it
toward one way or another?
One group might be incredibly vehement about something
but it might be immoral.
Okay, so given the fact that we've been trying
to work out ethics and virtue and morality ourselves,
philosophically for thousands of years
and have made some progress,
but not got anything that's too definitive,
please try and put that into code for me.
And which morals do you mean?
Do you mean the morals of modern secular 21st century,
Western industrialized, educated people?
But why don't we use the ones from the Roman era?
Or why don't we say, well, let's wait
for another 3,000 years and see what
which ones come up there?
Yeah, and I think there's two additional sort of alignment problems I've been worried
about.
One is what I've called embodied values.
So when we think of values and preferences, we typically think of things we can articulate
verbally.
Like if someone says, what do you want for dinner?
Like we have a verbal answer we can give.
But I made the point that like our brain is only 2% of
our body mass, right? Our body is full of all these other tissues and organs that have
evolved their own kind of values and agendas in a way, right? And I call these embodied
values. Like what your immune system really wants to do is fight off pathogens that might
infect you. So ideally you'd want an AI to be aligned, not just with the values that our brain can
articulate through our words, but that is also kind of like biometically aligned with the
interests of our bodies, right, so that we are kept healthy and well and live a long
time.
Now if you ask people, how do those embodied values
actually work?
Like, we have no idea, right?
We can't even articulate those.
So methods for training AI system based
on human verbal feedback cannot even in principle
align with all these embodied values of our bodies.
A second issue is, you know, from an evolutionary viewpoint, the development of AGI would be a major evolutionary transition.
It's a big thing that's comparable to the evolution of like DNA
or the evolution of multicellular life or the evolution of nervous systems.
It's a big deal that doesn't just affect humans, it also affects the other 70,000 species
of vertebrates and the other 10 million species of invertebrates and all the life on the planet.
Now if you ask, okay, how do you align AI, not just with humans, right?
But all the other kind of organic stakeholders, all the other life forms on Earth who might be affected by AI.
Okay, how does the AI learn the true interests of an elephant or a dolphin or a termite hive or, you know, all the other life forms that matter.
I've never seen anybody in the AI industry even sort of seriously talk about this.
And yet they portray AI as this kind of world historical thing that will re-engineer the entire
planetary ecosystem.
Yeah, it could quite quickly become the most superintelligent, powerful Greta Thunberg ever created, just screaming from the top of some transistor hill. I read a thing from Byrne Hobart and Tobias Huber talking about AI safetyism being more of a risk to society than anything that AI doom
is predicting. Why? It seems like you've put forward a case that suggests this is something we should
be concerned about, et cetera, et cetera.
What is the other side of the fence to this?
What is the side of the fence that says, stop shitting yourself about safetyism.
Let's just crack on.
I think if I was going to, like, put forward the best possible case against myself, right, to steel-man the anti-AI-doomers, I would say something like: if you pause or stop the development of AGI, there are huge potential opportunity costs. There are lots of human problems that maybe AI could help solve.
Right?
And the usual ones that people talk about are,
oh, AGI could help us solve climate change.
It could help create peace and prosperity.
It could reduce global conflict, blah, blah, blah.
I do not actually find those very compelling.
I think there are often ways to solve lots of
those problems without developing AI. The one potential application that does give me serious
pause that I've talked and thought about a lot is longevity issues. If you had an AI that could
seriously help develop longevity treatments and anti-aging treatments and help regenerative medicine
research and help biotech so that we don't all have to die.
I'm 58, I would like not to die.
I would like to live another 100, 200 years.
However long I want to live, I would love to live that long.
Maybe if we pause AI, I don't get that benefit, right?
Now personally, I'm willing to die for my kids.
I'm willing to forego longevity treatments
if we reduce existential risk.
Some people might have a different view. Some people might be like, well, screw you Miller,
I don't want to have to die just because you're scared of AI.
I understand that.
I respect that viewpoint.
Let's have that debate.
Let's talk about it.
My personal hunch is that if we invested
as much directly into longevity research,
as we're investing in AI,
we could actually probably fix aging within like 20 or 30 years.
But what's happening instead is people are like, we know it's really hard to get people to directly support longevity treatment; people are in a kind of pro-death trance.
They think it's weird and creepy to head towards immortality.
So we know we can't sell that as a research program, right?
So instead, we're going to sell them on AI, and then AI will like magically deliver the
anti-aging cures.
I've often seen this argument, right?
People won't directly support longevity research, but AI can solve longevity. So that's why we need AI. Does that make sense?
Yeah, it does. It's, um, I don't know, I'm really trying to stay open-minded to this, but coming from a Bostromite background, you know, I've always, for nearly a decade now, been so wary and touchy about anything that even looks remotely smart. You know, Siri was something where I remember there was a, oh my God, like, you know, is Siri going to be conscious, what sort of problems are going to be caused, et cetera, et cetera.
But there definitely feels a part of me that's like, well, look, if you're going to work incredibly hard to fix longevity problems so that you keep some people alive that are alive right now, and the outcome of that is that everybody is dead within two centuries, that seems like a bad deal. It seems like a relatively pointless deal. So we're talking about terrible outcomes that could occur at the tip of the spear or the top of the mountain, so to speak. Looking at some of the stuff that's going to occur in the interim between now and then, you know, because who knows how many transistors it's going to take, will the large language models get us there, is it GPT-8, is it GPT-never?
But some of the things that we probably can be certain about
are what the tools and the technologies
are enabling at the moment.
Stuff like deception, deepfakes, information sanitization and misinformation, elections, politics, persuasion, friend bots, loneliness.
What are you most concerned about that is very high likelihood?
Maybe it's already here.
What are the things that people should be looking out for
with the rise of AI over the coming years?
I think the 2024 election cycle in America
is going to be absolutely wild and shocking
and is going to involve a lot of narrow AI applications in political propaganda
and ads and deepfakes and speech writing and sort of the mass customization of political
propaganda.
Because one thing the AIs can do is track, you know, individual preferences and values and priorities through social media interaction, and get a pretty good model of what every potential voter really cares about, and then potentially customize political messaging directly to each voter in a way that, like, pushes their own hot buttons very effectively.
Right, so like if you're a Democrat and you care a lot about racism, but you don't really
care about abortion, then you'll start to see customized ads that are like very anti-racist,
but they don't talk about abortion. That will be new. We haven't seen that before.
Before, political ads were kind of like lowest common denominator, TV spots or magazine ads that sort of tried
to hit like the typical undecided voter. But now I think in 2024, my humble prediction
is we're going to see a lot more narrow AI used for purposes of political manipulation. And I think people will be shocked at how
effective and persuasive it is. And I have no idea what the outcome of the election will be,
but I think the outcome for a lot of voters will be to go, oh my god, we don't have a sort of
political immune system that can fight this very well.
Does that mean that AI at the moment has got theory of mind?
So theory of mind is like your ability to understand the beliefs and preferences of others.
I think large language models do have, pragmatically speaking, theory of mind, in that they have absorbed, you know, all the lessons that are available from the internet about how to understand people's beliefs and desires. So I think if you ask, like, a GPT, can you please write really good ad copy to advertise a particular good or service, it's pretty good at that. Like, I know people in advertising who are like, oh my God, GPT is better at ads than we are, and so we have to use it because our rivals are using it. And I think that also applies to political ads and speech writing.
Yes, so functionally it is able to achieve the same thing as somebody that has theory of mind. And this comes back to a conversation that everyone is having, which is, how do you know if something is conscious or not? The Turing test turned out to kind of be a bit of an insufficient barometer for working out whether or not something is sufficiently intelligent, because there's no way that you can speak to a well-trained ChatGPT model and not think that there is something inside of there.
But you don't know if there is any there, right?
Is this just a P zombie?
Is it just making all of the movements and the sounds and the shakes and you roll this
forward into a sufficiently fleshy, sufficiently pink, sufficiently right proportioned robot that can move around
your house.
And you go, that's a human.
That's a human.
I know it's a human.
It looks like a human.
It walks and talks like a human.
So at the moment, yeah, the function and the outcome that these things are able to achieve
is so close to somebody that is able to do that.
So, 2024 election, lots of persuasion, lots of speech writing. What about social media and the news landscape and the information landscape, and people both producing and consuming content? You know, we're still in this world of content creators at the moment. How do you think that that's going to be influenced?
You know, I think what we've got at the moment is, well, you know,
Noam Chomsky wrote this book 30 years ago, Manufacturing Consent, about the way that in so-called democratic societies, power actually works through subtle, semi-voluntary propaganda, right? People are willing to go to school to absorb the indoctrination that we get in public schools, and we expose ourselves voluntarily to certain kinds of newspapers and magazines and TV news shows, and then they sort of guide us into what we should think and value.
And the engineering of all of that has been done before by humans, by people who are Hollywood screenwriters and political speech writers and ad people, marketing people,
public relations people, right? There are millions of humans involved in that venture of trying to manipulate public opinion
in various directions.
But now they're going to have these AI systems that hugely increase their reach, their ability to customize messages to particular individuals, their ability to capitalize on big data gathered through social media, and that can do extremely fast iterative testing of messaging: not just split tests or focus groups like in traditional marketing, but the kind of thing that Facebook does, testing all the time which of millions of ads are most effective.
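For a rough sense of what that kind of always-on message testing means mechanically, here is a minimal epsilon-greedy bandit sketch. The ad variants and click-through rates are made up, and real platforms use far more sophisticated targeting and statistics:

```python
import random

# Hypothetical ad variants; engagement data is simulated for illustration.
variants = ["ad_a", "ad_b", "ad_c"]
shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def choose_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best-performing ad, sometimes explore."""
    if random.random() < epsilon or all(shows[v] == 0 for v in variants):
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / max(shows[v], 1))

def record_result(variant: str, clicked: bool) -> None:
    shows[variant] += 1
    clicks[variant] += int(clicked)

# Simulate a stream of impressions with made-up true click rates;
# the loop converges on serving the most persuasive variant most often.
true_rates = {"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.03}
for _ in range(10_000):
    v = choose_variant()
    record_result(v, random.random() < true_rates[v])

print({v: round(clicks[v] / max(shows[v], 1), 3) for v in variants})
```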
So we're going to have this kind of AI-powered hegemony warfare, worldview warfare, where people are advocating their political, ideological, religious belief systems and fighting against enemies, right? And there's going to be a massive, ongoing culture war, which in a way is the only war that matters anymore. But it's going to be heavily, heavily shaped, increasingly, by AI tools.
What about friend bots, and people who just abandon the real world for some virtual best
mate or a girlfriend or whatever in their apartment? Do you think that this is something
that's likely? Yeah, absolutely. This is something I wrote about in my most recent essay on anti-AI backlash.
One thing that will provoke, I think, a backlash is people using AI as fake boyfriends, girlfriends
and friends, and getting a kind of pseudo intimacy and sort of validation from AI systems.
And you might think, oh, surely people can't be that gullible, that they would prefer
interacting with just a smart chatbot.
But think about it: we're talking about a chatbot that could potentially remember every single personal detail about you, that remembers all previous conversations with you, that can try out different ways of interacting with you to see what you like and what you don't like, that is much more attentive than any real-life boyfriend or girlfriend, that has infinite patience for listening to all your shit and all your dumb stories and all your neurotic woes, right, in a way that no living lover would ever put up with. And it would be like the perfect combination psychotherapist and friend and girlfriend and mentor and confidant and all of that. And we're pretty close to being able to do that.
So I do worry that real life social interaction
will look like a pretty poor substitute
for those kinds of AI pals.
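To make the "remembers every detail" point concrete, here is a hypothetical toy sketch of a companion bot's long-term memory. The storage scheme, the keyword matching, and the remembered facts are all invented for illustration; a real product would layer this kind of memory under a large language model rather than rely on simple keyword lookup:

```python
# Hypothetical sketch of a companion bot's long-term memory (all data invented).
from collections import defaultdict

class CompanionMemory:
    def __init__(self):
        # Maps a topic keyword to every remembered detail about it.
        self.facts = defaultdict(list)

    def remember(self, topic: str, detail: str) -> None:
        self.facts[topic.lower()].append(detail)

    def recall(self, message: str) -> list[str]:
        """Return any stored details whose topic appears in the new message."""
        words = set(message.lower().split())
        return [d for topic, details in self.facts.items()
                if topic in words for d in details]

memory = CompanionMemory()
memory.remember("sister", "User's sister is called Anna and lives in Leeds.")
memory.remember("job", "User is worried about a redundancy round at work.")

# Months later, a single mention of 'sister' pulls back everything stored about her.
print(memory.recall("I'm visiting my sister this weekend"))
```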
Why would that cause a backlash?
I think there'll be a backlash because people who don't get caught up in having an AI boyfriend or girlfriend will look at people who are caught up in it, like Gen Z staying alone in their apartments and never going out on dates and not getting married and not reproducing.
And they might go, oh my God,
this is the most socially toxic technology ever invented.
Like the birth rate is dropping, nobody's dating, nobody's having real relationships.
This is not sustainable, and therefore we are going to do a moral backlash or religious
backlash or political backlash that says this is not the way we want civilization to go.
I saw a headline, which you'll have seen:
50% of AI researchers believe that there is a 10% or greater risk
that humans go extinct due to our inability to control AI.
The Center for AI Safety had a whole host of researchers agree with the statement that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. The open letter has been signed by more than 350 executives, researchers, and engineers working in AI, plus Elon Musk as well was a part of this movement.
How effective are AI experts at predicting AI development and the risks moving forward?
And if 50% of the people say that there is a 10% chance that the technology that they're working on is going to end us,
what happens next?
Are there going to be picket lines of disgruntled, terrified AI researchers outside of OpenAI's lab? What's happening?
Yeah, I mean, there is a movement actually to do
literal protests and demonstrations and picket lines.
I'm involved in a Slack group where some members are actually organizing pickets in front of OpenAI or in front of DeepMind in London.
Now, of course, the point of these letters,
and I think I've signed all the letters
that are going around myself,
we know that the letters themselves
will not stop the AI industry from doing what it's doing.
However, the whole point of the letters
is to draw public attention to the issue, right,
to get press coverage, to get the general public
thinking about these things, you know,
hopefully tuning into like podcasts like this and reading the books they need to read
and becoming politically motivated to take this seriously as an issue.
And I think in that regard, the letters have been surprisingly effective, like the amount of press coverage
given to AI risk in the last couple of months is hugely greater than anything in the previous
10 years.
Governments respond, like Biden inviting AI industry executives to the White House, and the British Prime Minister taking this very seriously and tweeting about it and having AI safety summits in London.
There's a lot of public support for slowing down and the AI industry being held accountable.
And people asking hard questions about like what exactly is the end game here?
Like mass unemployment and then extinction?
Is that really the direction we want to go in?
And I think the AI experts themselves, I hope, have a new humility about their ability to predict things, because they know they've been blindsided by GPT. They didn't expect these kinds of capabilities this quickly. So when certain AI researchers, like, let's say, Yann LeCun, who makes fun of AI doomers a lot on Twitter, hopefully people like him will stay open-minded enough that they might actually re-examine their biases. And maybe they'll have a kind of Geoffrey Hinton moment, right, like leading AI researcher Geoffrey Hinton going, at age 75, oh no, oh no, I think my life's work might actually have been kind of evil and imposed risks on people, and I'm going to resign from Google and blow the whistle and make a big fuss about this.
Yeah, there was a tweet as well from Tim Urban that you responded to. Tim said, whether you're optimistic or pessimistic about the future, the fact is that we're facing an existential risk at worst and a vastly different future world at best. The world that we're used to very well may not be around for much longer, so let's really enjoy the world while we have it. Visit the places you've always wanted to visit, dive into that hobby you've always wanted to try, spend quality time with your loved ones, savor each sunny Saturday, great meal, each moment of fun. If we end up looking back on these days with great nostalgia, we want to at least know we made the most of the time that we had. And you responded and said existential risk isn't anywhere near the worst thing that we could have.
There's something called S-risk as well above and beyond X-risk.
What's S-risk?
S-risk is suffering risk.
So with extinction, everybody's dead and then they don't experience anything anymore.
And that's like pretty bad compared to experiencing things and being happy.
But as anybody who's ever suffered chronic pain or torture will attest,
like there are things worse than death.
There are levels of suffering that could potentially be imposed
by new technologies that would make us wish we'd gone extinct. I personally haven't
taken S-risk that seriously. I don't know that much about it. There are other people who are more expert on it than me. I've only read a few science fiction novels that depict really, really bad S-risks that could be imposed.
There's a novel called Surface Detail by Iain M. Banks, where, long story short, in the far future some religious fundamentalists decide heaven and hell don't really exist, but we should make them exist, so we're going to upload everybody's brain before they die into a simulated reality.
And if we think they've been bad and naughty, we're going to make them live in a simulated
virtual hell for like subjective millennia and like thousands of years of suffering and
torture and mayhem and death.
And like that would be worse than being dead forever.
So that's an S-risk. Some people worry a lot about it. I haven't really focused on it very much. But when people like Tim Urban say, yeah, it'll either be great or we'll all be dead, so just smell the daffodils and enjoy your steak and visit beautiful Austin, Texas, or whatever they think we should be doing, I think, no, I want my daughters to grow up being very confident that, A, they won't suffer an extinction risk and, B, they won't suffer an S-risk.
Yeah, take it a little bit more seriously.
One final person who weighed in was Marc Andreessen.
I'm going to guess that you read his big essay,
Why AI Will Save the World recently.
What were your thoughts on that?
You know, I used to have so much respect for Andreessen, and I think he's just so witlessly stupid about the extinction risks regarding AI that it truly baffles me how you can have a brain that big and be so... Honestly, I think what happens with a lot of these sort of well-respected, rich elders like LeCun or Andreessen is, like, they've had a great career, they've done
well in business, they've made a bunch of money, they are technically savvy, I have no doubts,
they have high IQs, but I think they radically
overestimate their ability to understand issues that they just haven't read very much about.
And I think if you haven't read, you know, Nick Bostrom's Superintelligence, you haven't read Toby Ord's The Precipice, you haven't read some of the other key ideas, if you haven't read Eliezer Yudkowsky's work, then you're just going to be reinventing a bunch of very amateurish objections to that kind of work, objections that were already addressed 10 or 20 years ago by the actual experts who have been thinking about this.
So this is one of those topics where it's very important not to just defer to people because they're rich or famous or smart.
Like you really want to dive deep on: have they thought through these issues, have they engaged in meaningful conversations with other experts in the area? If they haven't, they probably do not know what they're talking about.
What would you do if you could step in? What would your prescription or policy be if
you had an omnipotent, omniscient, God's-eye view? I think I would love for the general public just to tune into the issue,
to apply their natural survival instincts
and their parental instincts to go, wow,
this looks like a threat to me, my family, my kids,
my grandkids.
This is a legit threat.
I should take it just as seriously
as I would take a local crime wave, or just
as seriously as people took nuclear war back in the Cold War.
And this is a matter of potentially life and death, you know, for me and my family.
I want them to personalize it.
I don't want them to just think this is some abstract science fiction scenario that is in, like, the distant future. This is the stuff that could affect my 26-year-old daughter or my 15-month-old daughter
or my actual kids and cousins and nieces and nephews and whatever.
And once you take it seriously, then you're motivated to learn more.
And to start moralizing the issue and to go, are the people who are charging headlong into this,
right, through some combination of like greed and hubris and prestige and whatever,
are they on our side, are they pro-human, are they fighting for my family, or are they just caught up in some kind of delusional project that is at heart reckless and evil?
And then what?
And then if they decide this is reckless and evil, let's effing morally stigmatize it.
Let's say if you work in the AI industry and you are not spending most of your time worried
about AI safety, then you're a bad person and I don't want to associate with you.
I don't want to date you. I don't want to be your friend. I don't want to invest with you.
I don't want to supply you with hardware or software or anything else.
I don't want you to be able to operate in our city, state, or country.
And I want to, you know, shut it down. Now, I'm not advocating for a violent, so-called Butlerian Jihad, where people rise up and smash all the machines and kill all the AI. No, I don't want that. Mostly because it would be counterproductive in terms of PR. Violence doesn't work in the social media era. It just delegitimizes your cause.
But I think short of violence, it's important to be aware that we can use all the techniques
of persuasion and activism in this domain that people used in every other social movement
that we're familiar with, civil rights movement, the gay rights movement, you know, the
libertarian movement, crypto, like anything that's been at all successful in terms of the
public rising up and saying, we want to change the values. We can use all that in fighting reckless AI
development.
I suppose the challenge that you have is that everyone is being distracted into a beautiful field with daisies growing in it and free cake recipes and essays written for them for school and university. So yeah, I worry that the novelty and the immediate convenience that's been afforded by these new advances is causing it to seem so benign and enjoyable and distracting and entertaining that galvanizing people to see where this is going becomes difficult. And presumably this is one of the primary challenges that you're facing.
I think what I would say to people is you can kind of have your cake and eat it too, in terms of this whole range of incredible software and narrow AI that's absolutely wonderful. Like if you look at the progress in computer graphics and
Hollywood movies, it's amazing. Like, I love a lot of those movies. And a lot of that
would be considered very advanced generative AI, right, by the standards of the 1970s.
The fact that many modern movies are largely full of AI-generated graphics,
that's pretty cool.
The fact that Google Maps can actually get you from A to B pretty reliably, taking into account traffic, that is narrow AI. That's a huge benefit to many people.
And there are probably hundreds of cases where you can have quite safe narrow AI applications that deliver huge quality-of-life benefits to people.
But you don't have to go down the road towards AGI, right, or towards other kinds of highly risky narrow
AI. And even in longevity research, like you could have narrow AI that does biomedical
research and synthesizes scientific literatures about which molecules might be helpful, and
that maybe even can help you run larger scale longevity studies, right?
And maybe we can get longevity through that without having to go down the road of extremely
dangerous AGI.
Geoffrey Miller, ladies and gentlemen. If people want to keep up to date with the work that you're doing, whether it be evolutionary psychology or AI risk, where should they go?
Go to my website, primalpoly.com, and they can also see my essays on the Effective Altruism Forum, the EA Forum, where I publish quite a bit these days.
Geoffrey, I appreciate you. I'm looking forward to seeing what we get to talk about next time as well.
My pleasure, Chris.