Dwarkesh Podcast - David Deutsch - AI, America, Fun, & Bayes
Episode Date: January 31, 2022
David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality.
Read me contra David on AI. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript with helpful links here. Follow David on Twitter. Follow me on Twitter for updates on future podcasts.
Timestamps
(0:00:00) - Will AIs be smarter than humans?
(0:06:34) - Are intelligence differences immutable / heritable?
(0:20:13) - IQ correlation of twins separated at birth
(0:27:12) - Do animals have bounded creativity?
(0:33:32) - How powerful can narrow AIs be?
(0:36:59) - Could you implant thoughts in VR?
(0:38:49) - Can you simulate the whole universe?
(0:41:23) - Are some interesting problems insoluble?
(0:44:59) - Does America fail Popper's Criterion?
(0:50:01) - Does finite matter mean there's no beginning of infinity?
(0:53:16) - The Great Stagnation
(0:55:34) - Changes in epistemic status in Popperianism
(0:59:29) - Open-ended science vs gain of function
(1:02:54) - Contra Tyler Cowen on civilizational lifespan
(1:07:20) - Fun criterion
(1:14:16) - Does AGI through evolution require suffering?
(1:18:01) - Would David enter the Experience Machine?
(1:20:09) - (Against) Advice for young people
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Okay, today I'm speaking with David Deutsch. Now, this is a conversation that I've been eagerly wanting to have for years. So this is very exciting for me. So first, let's talk about AI. Can you briefly explain why you anticipate that AIs will be no more fundamentally intelligent than humans?
I suppose you mean AGIs. Yes.
And by fundamentally intelligent, I suppose you mean capable of all the same types of cognition as humans are in principle.
Yes.
So that would include, you know, doing science and doing art and in principle also falling in love and being good and being evil.
and all that. So the reason is twofold. One half is about computation hardware,
and the other is about software. So if we take the hardware, we know that
our brains are Turing-complete bits of hardware,
and therefore can exhibit the functionality of running the program for any computable function.
Now, when I say any, I don't really mean any because you and I sitting here,
you know, we're having conversation and we could say, you know, we could have any conversation.
Well, uh, we can assume that maybe in a hundred years'
time we'll both be dead, and therefore the number of conversations we could have is strictly limited.
And also some conversations depend on speed of computation. So, you know, if we're going to be
solving the traveling salesman problem, then there are many traveling salesman problems
that we wouldn't be able to solve in the age of the universe. So when I say
any, what I mean is that we're not limited in the programs we can run apart from by speed and
memory capacity. So all limitations on us, hardware limitations on us, boil down to speed and
memory capacity. And both those can be augmented to the level of any other entity that is in the
universe because, you know, if somebody builds a computer that can think faster than the brain,
then we can use that very computer or that very technology to make our thinking go just as fast
as that. So that's the hardware. As far as explanations go, can we reach the same kind of
explanations as any other entity, let's say, usually this is said not in
terms of AGIs, but in terms of extraterrestrial intelligences. But also it's said about AGIs,
you know, what if they are to us as we are to ants and so on? Well, again, part of that is just hardware,
which is easily fixable by adding more hardware. So let's forget about that. So really,
the idea is, are there concepts that we are inherently incapable of comprehending?
I think Martin Rees believes this.
He thinks that, you know, we can comprehend quantum mechanics.
Apes can't.
And maybe the extraterrestrials can comprehend something beyond quantum mechanics, which we can't
comprehend and no amount of brain add-ons with extra hardware can give us that because they have
the hardware that is adapted to having these concepts which we haven't. The same kind of thing
is said about maybe certain qualia that maybe we can experience love and an AGI couldn't
experience love because it has to do with our hardware, not just memory and speed, but
specialized hardware. And I think that falls victim to the same argument. The thing is this specialized
hardware can't be anything except a computer. And if there's hardware that is needed for love,
let's say that somebody is born without that hardware, then that hardware, that bit of the brain
that does love or that does mathematical insight or whatever,
it's just a bit of the brain and it's connected to the rest of the brain
in the same way that any other part of the brain is connected to the rest of the brain,
namely by neurons passing electrical signals and by chemicals whose concentrations are altered and so on.
So therefore, an artificial device that computed which signals were to be sent
and which chemicals were to be adjusted could do the same job,
and it would be indistinguishable and therefore a person augmented
with one of those who couldn't feel love, could feel love after that augmentation.
So those are, and I think those two things are the only relevant ones.
So that's why I think that AGIs
and humans have the same range in the sense I've defined.
Okay, interesting.
Okay, so I think the software question is more interesting
than the hardware one immediately,
but I do want to take issue with the idea that the memory and speed
of human brains can be arbitrarily and easily expanded,
but we can get into that later.
We can just start with this question.
Can all humans explain everything that even the smartest humans can explain, right?
So if I took the village idiot and I asked him to create the theory of quantum computing,
should I anticipate that if you wanted to, he could do this and just for a frame of reference,
about 21 to 24% of Americans on the National Adult Literacy Survey,
they fall in level one, which means that they can't even perform basic tasks,
like identifying the expiry date of a driver's license, for example, or totaling a bank deposit slip.
So are these humans capable of
explaining quantum computing or creating the Deutsch-Jozsa algorithm?
And if they're not capable of doing this, doesn't that mean that the theory of universal
explainers falls apart?
Well, there are people who, so these tasks that you're talking about are tasks that no
ape could do.
However, there are humans who are brain-damaged to the extent that they can't even do
the tasks that an ape can do.
And there comes a point when installing the program that would, you know, be able to read a driver's license or whatever would require augmenting their hardware as well as their software.
So if a person, we don't know that much, we don't know enough about the brain yet.
But if some of the people that you're talking about, if it's 24% of the population, then it's definitely not hardware.
So I would say that for those people, it's definitely software.
If it was hardware, then getting them to do this would be a matter of repairing the imperfect hardware.
If it's software, it is not just a matter of them wanting to or them wanting to be taught or whatever.
it is a matter of whether the existing software is, what word can I use instead of wants to,
is conceptually ready to do that.
For example, Brett Hall has often said that he would like to speak Mandarin Chinese.
and so he wants to, but he will never be able to speak Mandarin Chinese
because he's never going to want it enough to be able to go through the process
of acquiring that program.
But there is nothing about his hardware that prevents him learning Mandarin Chinese,
and there's nothing about his software either,
except that, well, what word can we use to say that he doesn't want to go through that process?
I mean, he does want to learn it.
He does want to learn it, but he doesn't want to go through the process of being programmed with that program.
But if his circumstances changed, he might well want to.
So, for example, many of my relatives a couple of generations ago,
were forced to migrate to very alien places where they had to learn languages that they never
thought they would ever speak and never wanted to speak. And yet very quickly they did speak those
languages. Again, was it because what they wanted changed? In the big picture, perhaps you
could say what they wanted changed. So if your people who can't read driving licenses
wanted to be educated to read driving licenses in the sense that my ancestors
wanted to learn languages, then yes, they could learn that.
There is a level of dysfunction below which they couldn't, and I think those are
hardware limitations.
On the borderline between those two, there's not that much difference.
It's like, you know, that's like the question of could apes be
programmed with a fully human intellect.
I think the answer to that is yes, but although programming them would not require
hardware surgery in the sense of repairing a defect, it would require
intricate changes at the neuron level so as to
transfer the program from a human mind into the ape's mind. I would guess that that is possible,
because although the ape has far less memory space than humans do and also doesn't have certain
specialized modules that humans have, neither of those things is a thing that we use to the full
anyway. I mean, when I'm speaking to you now, there's a lot
of knowledge in my brain that I'm not referring to at all, like, you know, the fact that I can
play the piano or drive a car is not being used in this conversation. So I don't think the fact
that we have such a large memory capacity would affect this project, although the project
would be highly immoral because you'd be intentionally creating a person in
deliberately deficient brain hardware.
So suppose it's hardware differences that distinguish, you know,
different humans in terms of their intelligence.
If it were just up to the people who are not even functionally literate, right?
So these are, again, people who...
Wait, wait, wait.
I said that it could only be hardware at the low level,
well, either at the level of brain defects or at the level of using up the whole of our
allocation of memory or speed or whatever.
So apart from that, I don't think it can be hardware.
By the way, is hardware synonymous with genetic influences for
you, or can software be genetic too?
Software can be genetic too, though that doesn't mean it's immutable.
It just means it's there at the beginning.
Okay.
The reason I suspect it's not software is because these people also happen to
be the same people who, let's suppose it were something that they chose to do or
something they could change, it's mysterious to me why these people would also choose to
accept jobs that have lower pay but are less cognitively demanding or why they would choose to,
you know, do worse on academic tests or, you know, IQ tests. So why they would choose to do
exactly the sort of thing somebody who's less cognitively powerful would do. It seems the more
parsimonious explanation there is just that they are cognitively less powerful.
Not at all. Why would someone choose not to go to school, for instance, if they were given the choice and not to have any lessons?
Well, there are many reasons why they might choose that. Some of them good, some of them bad.
And, you know, calling some jobs cognitively demanding is already begging the question,
because you're just referring to a choice that people make,
which I think is a software choice,
as being by definition forced on them by hardware.
It's not cognitively deficient.
It's just that they don't want to do it.
The same way, if there was a culture that required Brett Hall
to be able to speak
fluent Mandarin Chinese in order to do a wide range of tasks, and if he didn't
know Mandarin Chinese he'd be relegated to low-level tasks, then he would be,
quote, choosing the low-level tasks rather than the, quote, cognitively
demanding tasks. But it's only culture that makes that a cognitively demanding
task, that assigns a hardware interpretation to the difficulty
of doing that task.
Right.
I mean, it doesn't seem that arbitrary to say that the kind of jobs you could do sitting down at a laptop
probably require more cognition than the ones you can do on a construction site.
And if it's not cognition that distinguishes,
or if there's not something like intelligence or cognition or whatever you want to call it,
that is a thing that is measured by both these literacy tests and by what you're doing
at your job, then what is the explanation for why there's such a high correlation
between people who are not functionally literate,
or I guess an anti-correlation between people who are not functionally literate
and people who are doing, like, let's say, programmers? Right, like, I guarantee you people working at
Apple, all of them are above level one on this literacy survey. Why do they just happen to make
the same choices? Why is there that correlation?
Well, there are correlations everywhere,
and culture is built in order to
make use of certain abilities that people have.
So if you're setting up a company that is going to employ 10,000 employees,
then it's best to make the way that the company works.
You know, it's best, for example, to make the signs above the doors or the signs on the doors
or the numbers on the dials, all be ones that people in that culture who are highly educated
can read.
You could, in principle, make each label on each door a different language.
I don't know, you know, there are thousands of human languages.
Let's say there are 5,000 languages and 5,000 doors in the company.
You could, given the same meaning, make them all different languages.
The reason that they're all the same language and what's more, not just any old language,
it's a language that many educated people know fluently.
that's why. And then you can misinterpret that as saying, oh, there is something, there is some
hardware reason why everybody speaks the same language. Well, no, there isn't. It's a cultural reason.
Okay. So if the culture was different somehow, maybe if there was some other way of communicating
ideas, do you think that the people who are currently designated
as not functionally literate
could be in a position to learn about quantum computing, for example?
And if they made the right choices,
or not the right choices,
but the choices that could lead to them understanding quantum computing.
Well, so I don't want to evade the question.
The answer is yes,
but the way you put it is, again, rather begs the question.
It's not only language that is like this.
It's all knowledge.
So just learning, so if someone doesn't speak English,
quantum computing is a field in which English is the standard language.
Used to be German.
Now it's English.
Now someone who doesn't know English is at a disadvantage learning about quantum computers,
but not only because of their
deficiency in language. If they come from a culture in which the culture of physics and of mathematics
and of logic and so on is equivalent and only the language is different, then if they
just learn the language, they will find it as easy as anyone else. But if a whole load of
things are different, if a person doesn't think in terms of, for example, logic, but thinks in
terms of pride and manliness and fear and, you know, all sorts of concepts that
fill the lives of, let's say, prehistoric people or pre-Enlightenment people,
then to be able to understand quantum computers they would have to learn a lot more
than just the language of the civilization. They'd have to learn all of other, well, not all,
but a range of other features of the civilization. And on that basis, the people who can't read
driving licenses are similarly in a different culture, which they would also have to learn
if they are to increase their IQ, i.e. their ability to function at a high level,
in intellectual culture in our civilization.
Okay.
If they did, they would be able to.
Okay.
So if it's those kinds of differences,
then how do you explain the fact that identical twins separated at birth
and adopted by different families?
They tend to have, you know...
most of the variance that does exist between humans in terms of IQ
doesn't exist between identical twins.
In fact, the correlation is 0.8,
which is the correlation that you would have
when you took the test on different days,
like depending on how good a day you were having.
And these are, you know, people who are adopted by families who have different cultures,
who are often in different countries.
Yet, in fact, a hardware theory explains very well why they would have similar scores on IQ tests,
scores that are themselves correlated with literacy and job performance and so on,
whereas I don't know how a software theory would explain that when they're adopted by different families.
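(A minimal sketch of what that 0.8 figure means, with invented numbers rather than data from any actual twin study: each simulated pair shares a common component plus independent noise, and the Pearson correlation across many such pairs comes out near 0.8.)

```python
# Hypothetical illustration only: simulated scores, not real twin data.
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each pair shares a common component (the "twin" part) plus independent noise,
# with the noise scaled so the correlation comes out at roughly 0.8.
twin_a, twin_b = [], []
for _ in range(10_000):
    shared = random.gauss(100, 15)
    twin_a.append(shared + random.gauss(0, 7.5))
    twin_b.append(shared + random.gauss(0, 7.5))

print(round(pearson(twin_a, twin_b), 2))  # approximately 0.8
```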
Well, the hardware theory explains it in the sense that it might be hardware, might be true.
So it doesn't have an explanation beyond that and nor does the software theory.
Sorry, go on.
I mean, so there are actually like differences at the level of brain that are correlated with IQ, right?
So the actual skull size has like a point three correlation with IQ.
There's a few more like this.
They don't explain the entire variance in the human intelligence or the entire genetic variance in human intelligence.
But we have identified a few actual hardware differences
that correlate with it. Well, suppose, on the contrary, that the results of these
experiments had been different. Suppose that the result was that people who are brought up in the same
family and differ only in the amount of hair they have or
in their appearance in any other way,
that none of those differences
make any difference to their IQ.
Only
who their parents were
makes a difference. Now wouldn't that be surprising?
Wouldn't it be surprising that there's nothing else
correlated with IQ other
than who your parents are?
Yes.
Now,
how much correlation should we expect?
There are correlations everywhere.
There are these things on the internet, jokes,
memes or whatever you call them,
but they make a serious point, where they correlate things like
how many adventure movies have been made in a given year
with GNP per capita,
and that's a bad example because there's an obvious relation.
But you know what I mean?
It's the number of films made by a particular actor against the number of outbreaks of bird flu.
And part of being surprised by randomness is the fact that correlations are everywhere.
It's not just that correlation isn't causation.
it's that correlations are everywhere.
It's not a rare event to get a correlation between two things.
And the more things you ask about, the more you are going to get correlations.
So what is surprising is when the things that are correlated
are things that you expect to be correlated and measure.
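(A minimal sketch of the "correlations are everywhere" point, using purely synthetic data: generate a few dozen unrelated random-walk series, test every pair, and several pairs will correlate strongly by chance alone.)

```python
# Hypothetical illustration only: none of these series measure anything real.
import random
from itertools import combinations

random.seed(1)

def random_walk(n=50):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

series = [random_walk() for _ in range(40)]        # 40 unrelated "variables"
pairs = list(combinations(range(len(series)), 2))  # 780 possible pairings
strong = [(i, j) for i, j in pairs if abs(pearson(series[i], series[j])) > 0.8]

# With 780 pairs to choose from, finding some |r| > 0.8 by chance is routine.
print(len(strong), "of", len(pairs), "pairs correlate strongly by accident")
```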
For example, when they do these twin studies and measure the IQ,
they control for certain things.
And like you said, identical twins reared together.
They've got to be reared together.
Or apart.
Or apart.
Yeah, yeah.
So, but, but there's infinitely more things that they don't control for.
So it could be that the real determinant of IQ is, for example, how well a child is treated between the ages of three and a half and four and a half, where well is defined by something that we don't know yet, but, you know, something like that.
Then you would expect that thing, which we don't know about and nobody has bothered.
to control for in these experiments,
we would expect that thing to be correlated with IQ.
But unfortunately, that thing is also correlated
with whether someone's an identical twin or not.
So it's not the identical twinness
that is causing the similarity.
It's this other thing.
Right.
This is, say, an aspect of appearance or something.
And if you were to surgically change a person
with a view, if you knew what this thing was and surgically changed the person,
you would be able to have the same effect as making an identical twin would have.
Right. But I mean, as you say in science or to explain any phenomenon,
there's an infinite amount of possible explanations, right? You have to pick the best one.
So it could be that there's some unknown trait, which is so obvious to adopted parents,
different adoptive parents, that they can use it as a basis for discrimination or for different
treatment. But that is, I mean, I would assume they don't
know what it is.
But then aren't they using it as a basis to treat kids differently at the age of three,
for example?
Not by consciously identifying it.
It's like it would be something like getting the idea that this child is really smart.
Sure.
But I'm just trying to show you that it could be something that the parents are not aware of.
If you ask parents to list the traits in their children that cause them to behave differently
towards their children, they might list like 10 traits.
but then there are another thousand traits that they're not aware of which also affect their
behavior.
So we first need an explanation for what this trait is, that researchers have not been able to
identify, but that is so obvious that even unconsciously parents are able to reliably use it as a way
to treat kids differently.
It wouldn't have to be obvious at all because parents have a huge amount of information about
their children, which they are processing in their minds. And most of it, they don't know what it is.
Okay. All right. Okay. So I guess let's leave this topic aside for now. And then let me
bring us to animals. So if creativity is something that doesn't exist in increments,
or, you know, the capacity to create explanations, you can just use a simple example:
go on YouTube and look up cat opening a door, right?
So you'll see, for example, a cat develops a theory that applying torque to this handle, to this metal thing will open a door.
And then what it'll do is it'll climb onto a countertop and it'll jump on top of that door handle.
It hasn't seen another cat do it.
It hasn't seen another human like get on a countertop and try to open the door that way.
But it conjectures that this is a way given its morphology that it can access the door.
And then, you know, so that's this theory.
And then the experiment is, will the door open?
This seems like a classic cycle of conjecture and refutation.
Is this compatible with the cat not being creative, or at least having some bounded form of creativity?
I think it's perfectly compatible.
So animals are amazing things.
And instinctive animal knowledge is designed to make animals easily capable of
thriving in environments that they've never seen before.
In fact, if you go down to the level of detail,
animals have never seen the environment before.
I mean, maybe a goldfish in a goldfish bowl might have.
But when a wolf runs through the forest,
it sees a pattern of trees that it has never seen before,
and it has to create strategies for avoiding each tree,
and not only that, for actually catching the rabbit that it's running after as well,
in a way that has never been done before.
So the way to understand this, I think,
now this is because of a vast amount of knowledge that is in the wolf's genes.
What kind of knowledge is this?
Well, it's not the kind of knowledge that says first turn left, then turn right, then jump, and so on.
It's not that kind of instruction.
It's an instruction that takes input from the outside and then generates a behavior that is relevant to that input.
It doesn't involve creativity, but it involves a degree of sophistication in the program that human robotics has not yet reached anywhere near.
that. And by the way, then when it sees a wolf of the opposite sex, it may decide to leave the rabbit and go and have sex instead. And a program for a robot to locate another robot of the right species and then have sex with it is again, I think, beyond present day robotics. But it will be done and it clearly does not require creativity.
because that same program will lead the next wolf to do the same thing in the same circumstances.
The fact that the circumstances are ones that it's never seen before and it can still function
is a testimony to the incredible sophistication of that program,
but it has nothing to do with creativity.
So humans do do tasks that require much, much less programming sophistication than that, such as sitting around a campfire, telling each other a scary story about a wolf that almost ate them.
Now, animals can do the wolf running away thing.
They can enact a story that's more complicated even than the one the human is telling,
but they can't tell a story.
They don't tell a story.
Telling a story is a sort of typical creative activity.
It's the same kind of activity as forming an explanation.
So I don't think it's at all surprising that cats can jump
on handles, because I can easily imagine that the same amazingly sophisticated
program that lets it jump on a branch so that the branch will get out of its way in some sense
will also function in this new environment that it's never seen before. But there are all sorts
of other things that it can't do. That's definitely true, which was my point, which is that it has a
bounded form of creativity, and if bounded forms of creativity can exist, then humans could be bounded in one
such way. So I'm having a hard time imagining the ancestral circumstance in which a cat could
have gained a genetic knowledge that jumping on a metal rod would get a wooden plank to
open and give it access to the other side. Well, I thought I just gave an example. I mean,
if we don't know, at least I don't know, what kind of environment, the ancestor of the domestic
cat lived in. But if it was, for example, if it contained undergrowth, then dealing with undergrowth
requires some very sophisticated programs. Otherwise, you will just get stuck somewhere and starve
to death. Now, I think a dog, if it gets stuck in a bush, it has no program to get out other than just
shaking itself about until it gets out. It doesn't have a concept of doing something which
temporarily makes matters worse and then allows you to get out. I think dogs can't do that.
But it's just, it's not because that's a particularly complicated thing. It's just that its
programming just doesn't have that. But an animal's programming easily could have that
if it lived in an environment in which that happened a lot.
Is your theory of AI compatible with AIs that have narrow objective functions, but functions
which, if fulfilled, would give the creator of the AI a lot of power?
So if, for example, I wrote a deep learning program, I trained it on financial history,
and I asked it, make me a trillion dollars on the stock market.
Do you think that this would be impossible?
And if you think this would be possible, then it seems like, I know it's not an AGI,
but it seems like a very powerful AI, right?
So it seems like AI is getting somewhere.
Yeah.
Well, if you want to be powerful, you might do better inventing a weapon or something.
Or a better mousetrap is even better because it's non-violent.
So you can invent a paperclip to use an example that is often used in this context.
If paper clips hadn't been invented, you can invent a paper clip and make a fortune.
And that's an idea, which is, but it's not an AI because it's not the paperclip that's going out there.
It's really your idea in the first place that has created the whole value of the paperclip.
And similarly, if you invent a dumb arbitrage machine, which seeks out complicated trades to make, which are more complicated than anyone else is
trying to do, and that makes you a fortune, well, the thing that made you a fortune was not the arbitrage
machine. It was your idea for how to search for arbitrage opportunities that no one else sees.
Right, that's what was valuable, and that's the usual way of making money in the economy. You have an
idea and then you implement it. That it was an AI is beside the point. It could have been a paperclip.
But the thing is, the models that are used nowadays are not expert systems like the chess engines
of the 90s. They're, you know, something like AlphaZero or AlphaGo. This is
almost a blank neural net, and they were able to, you know, let it win at Go. So if such a
neural network that was kind of blank, if you just arbitrarily throw financial history
at it, wouldn't it be fair to say that the AI actually figured out what the right trades were,
even though it's not a general intelligence?
Well, I think it's possible in chess, but not in the economy because the value in the economy is being created by creativity.
And most, you know, arbitrage is one thing.
It can sort of skim value off the top by taking opportunities that were too expensive for other people to take.
So you can, you know, you can make a lot of money that way if you have a good idea about
how to do it. But most of the value in the economy is created by the creation of knowledge.
Somebody has the idea that a smartphone would be good to have, even though most people think
that that's not going to work. And that idea cannot be anticipated by anything less than an AGI.
An AGI could have that idea, but no AI could.
Okay. So there's definitely other topics I want to get to. So let's talk about virtual reality. So in The Fabric of Reality, you discuss the possibility that virtual reality generators could plug in directly into our nervous system and give us sense data that way. Now, as you might know, many meditators, you know, people like Sam Harris, speak of both thoughts and senses as intrusions into consciousness that have a sort of similar quality; they can be welcome intrusions, but they are both things that come into consciousness. So, um,
Do you think that a virtual reality generator could also place thoughts as well as sense data into the mind?
Yes, but that's only because I think that this model is wrong.
It's basically the Cartesian theater, as Daniel Dennett puts it, with the stage cleared of all the characters.
So that's, that's conscious, pure consciousness without content, as Sam
Harris envisages it. But I think that all that's happening there is that you are conscious of this
theatre and you're envisaging it as having certain properties, which by the way it doesn't have,
but that doesn't matter. We can imagine lots of things that don't happen. In fact, you know, that,
in a way, characterizes what we do all the time. So one can interpret one's
thoughts about this empty stage as being thoughts about nothing.
One can interpret the actual hardware of the stage that one is imagining as being pure contentless
consciousness. But it's not, it has the content of a stage or a space or, you know,
however you want to envisage it.
Okay. And then let's talk about the Turing principle.
So this is the term you coined.
It's otherwise been called the Church-Turing-Deutsch principle.
Would this principle imply that you could,
so by the way, it states that a universal computer can simulate any physical process.
Would this principle imply that you could simulate the whole of the universe, for example,
in a compact efficient computer that was smaller than the universe itself?
Or is it constrained to physical processes of a certain size?
Again, no, it couldn't.
It couldn't simulate the whole universe.
That would be an example of a task where it was computationally able to do it,
but it wouldn't have enough memory or time.
So the more memory and time you gave it, the more closely it could simulate the whole universe.
But it couldn't ever simulate the whole universe or anything near the whole universe, probably,
because, well, if you
wanted it to simulate itself as well, then there are logical reasons why there are limits to that.
But even if you want to simulate the whole universe apart from itself, just the sheer size of the
universe makes that impossible. Even if we discovered ways of encoding information extremely densely,
like some people have said maybe quantum gravity would allow, you know, totally amazing density
of information, it still couldn't simulate the universe because that would mean because of the
universality of the laws of physics, that would mean the rest of the universe also was that complex
because quantum gravity applies to the whole rest of the universe as well. But I think it's
significant, being limited by the available time and memory, to separate that from being limited by
computational capacity, because it's only when you separate those that you realize what computational
universality is. And I think that universality, like Turing or quantum universality, is the
most important thing in the theory of computation, because computation doesn't even make sense
unless you have a concept of a universal computer.
What could falsify your theory that all interesting problems are soluble?
So I ask this because, as I'm sure you know, there are people who have tried offering explanations for why certain problems or questions like, why is there something rather than nothing?
Or how could mere physical interactions explain consciousness?
They've offered explanations for why these problems are in principle insoluble.
Now, I'm not convinced they're right, but do you have a strong reason for in principle believing that they're wrong?
No. So this is a philosophical theory and could not be proved wrong by experiment.
However, I think I have a good argument for why they aren't, namely that each individual case of this is a bad explanation.
So let's say that some people say, for example, that simulating a human brain is impossible.
Now, I can't prove that it's possible.
Nobody can prove that it's possible until they actually do it or unless they have a design for it, which they prove will work.
So pending that, there is no way of proving that it's not true
that this is a fundamental limitation. But the trouble is with that idea that it is a fundamental
limitation, the trouble with that is that it could be applied to anything. For example,
it could be applied to the theory that you have recently just a minute ago been replaced by
a humanoid robot, which is going to say, for the next few minutes, just a prearranged
set of things, and you're no longer a person. I can't believe you figured it out. Yeah. Well, that's the
first thing you'd say. So there is no way to refute that by experiment, short of actually doing it,
short of actually talking to you and so on. So it's the same with all these other things.
In order for it to make sense to have a theory that something is impossible, you have to have an
explanation for why it is impossible. So we know that, for example, almost all mathematical
propositions are undecidable. So that's not because somebody has said, oh, maybe, maybe we can't
decide everything because thinking we could decide everything is hubris. That's not an argument.
You need an actual functional argument to prove that that is so. And then,
it being a functional argument in which the steps of the argument make sense and relate to other things and so on, you can then say, well, what does this actually mean? Does this mean that maybe we can never understand the laws of physics? Well, it doesn't, because if the laws of physics included an undecidable function, then we would simply write, you know, f of x, and f of x is an undecidable function.
We couldn't evaluate f of x. It would limit our ability to make predictions, but then
lots of our ability to make predictions is totally limited anyway. But it would not affect our
ability to understand the properties of the function f, and therefore the properties of the physical world.
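(A standard example, not one David gives here, of the kind of function being described: the halting function is perfectly well defined, so a law could mention it, yet no program can evaluate it for every input.)

```latex
% The halting function: undecidable, yet well defined.
f(n) =
  \begin{cases}
    1 & \text{if the $n$-th Turing machine halts on a blank tape,} \\
    0 & \text{otherwise.}
  \end{cases}
% We can prove properties of f (it is total, it only takes the values 0 and 1,
% it is not computable), even though we cannot evaluate f(n) for arbitrary n.
% A law of physics containing f would therefore still be understandable,
% even where it could not be used for prediction.
```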
Okay, is a system of government like America's, which has distributed powers and checks and balances,
is that incompatible with Popper's criterion?
So the reason I ask is the last administration had a theory that if you build a wall,
there will be positive consequences.
And, you know, that theory could have been tested,
and then the person could have been evaluated in whether that theory succeeded.
But because our system of government has distributed powers,
you know, Congress opposed the testing of that theory, and so it was never tested.
So if the American government wanted to fulfill Popper's criterion,
would we need to give the president more power, for example?
It's not as simple as that.
So I agree that this is a big defect in the American system of government.
No country has a system of government that perfectly fulfills Popper's criterion.
We can always improve.
I think the British one is actually the best in the world, and it's far from optimal.
Making a single change like that is not going to be the answer.
The constitution of a polity is a very complicated thing, much of which is inexplicit.
So the founding fathers, the American founding fathers, realized they had a tremendous problem.
What they wanted to do, what they thought of themselves as doing was to implement the British constitution.
In fact, they thought they were the defenders of the British Constitution and that the British king had violated it and was bringing it down. They wanted to retain it. The trouble is that they all, in order to do this, to gain the independence to do this, they had to get rid of the king. And then they wondered whether they should get an alternative king, whichever way they did it, there were problems. The way they decided to do it,
I think made for a system that was inherently much worse than the one they were replacing,
but they had no choice.
If they wanted to get rid of a king, they had to have a different system for having a head of state.
Therefore, they had to have, they wanted to be democratic.
That meant that the president had a legitimacy in legislation that the king never had.
sorry, the king did use to have it in medieval times, but the king, by the time
of the Enlightenment and so on, no longer had full legitimacy to legislate. So they had to
implement a system where him seizing power was prevented by something other than tradition.
And so they instituted these checks and balances.
So the whole thing that they instituted was immensely sophisticated.
It's an amazing intellectual achievement.
And that it works as well as it does is something of a miracle.
But the inherent flaws are there.
And one of them is that the fact that there are checks and balances means that responsibility is dissipated.
And nobody is ever to blame for anything.
in the American system, which is terrible.
In the British system, blame is absolutely focused.
You know, everything is sacrificed to the end of focusing blame and responsibility
down to the government.
You know, past the law courts, past the parliament, right to the government.
That's where it's all focused into.
And there are no systems that do that better, but as you well know, the British system also has flaws.
And we recently saw, with the sequence of events with the Brexit referendum, and then Parliament balking at implementing some laws that it didn't agree with,
and then that being referred to the courts.
And so there was the courts and the parliament and the government and the prime minister all blaming each other.
And there was a sort of mini constitutional crisis, which could only be resolved by having an election and then having a majority government, which is by the mathematics of how the government works.
That's how it usually is in Britain, although, you know, we have been unlucky several
times recently in not having a majority government.
Okay, so this could be wrong, but it seems to be that in an expanding universe, there will be
like a finite amount of total matter that will ever exist in our light cone, right?
There's a limit.
And that means that there's a limit on the amount of computation that this matter can, you know,
execute the amount of energy it can provide, perhaps even the amount of economic value we can
sustain, right?
So it would be weird if the GDP per atom could be arbitrarily large.
So does this impose some sort of limit on your concept of the beginning of infinity?
So what you've just recounted is a cosmological theory.
The universe could be like that.
But we know very little about cosmology.
We know very little about the universe in the
large, like theories of cosmology are changing on a time scale of about a decade.
So it doesn't make all that much sense to speculate about what the ultimate
asymptotic form of the cosmological theories will be.
At the same time, we don't have a good idea about the asymptotic form of very small things.
Like, we know that our conception of physical processes
must break down somehow at the level of quantum gravity, like 10 to the minus 42 seconds and
that kind of thing. But we have no idea what happens below that. Some people say it's got to stop
below that, but there's no argument for that at all. It's just that we don't know what happens
beyond that. Now, what happens beyond that may be a finite limit. Similarly, the way what happens
on a large scale may impose a finite limit, in which case computation is bounded by a finite limit
imposed by the cosmological initial conditions of this universe, which is still different from
its being imposed by inherent hardware limitations. For example, if there's a finite amount of
GNP available in the distant future,
then it's still up to us, whether we spend that on mathematics or music or political systems
or any of the thousands of even more worthwhile things that have yet to be invented.
So it's up to us which ideas we fill the 10 to the 10 to the 10 bits with.
Now, my guess is that there are no such limits,
but my worldview is not affected by whether there are such limits
because, as I said, it's still up to us what to fill them with.
And then if we get chopped off at some point in the future, then everything will have been worthwhile up to then.
Gotcha.
Okay.
So the way I understand your concept of the beginning of infinity, it seems to me that the more knowledge we gain, the more knowledge we're in a position to gain.
So there should be like an exponential growth of knowledge.
But if we look at the last 50 years, it seems that there's been a slowdown in or decrease in research productivity, economic growth, productivity growth.
And this seems compatible with the story that, you know, that there's a limited amount of fruit on the tree that we pick the low-hanging fruit.
And now there's less and less fruit and harder and harder fruit to pick.
And, you know, eventually the orchard will be empty.
So do you have an alternate explanation for what's going on in the last 50 years?
Yes.
I think it's very simple.
There are sociological factors in academic life which have stultified
the culture.
And not totally and not everywhere,
but that has been a tendency
in what has happened and it has resulted
in a loss of productivity
in many sectors, in many ways,
but not in every sector, not in every way.
And,
for example, I think, as
I've often said, there was a stultification in theoretical physics starting in, let's say, the 1920s, and it still hasn't fully dissipated.
If it wasn't for that, quantum computers would have been invented in the 1930s and built in the 1960s.
So that is just an accidental fact, but it just goes to show that there are no guarantees.
The fact that our horizons are unlimited does not guarantee that we will get anywhere,
that we won't start declining tomorrow.
I don't think we are currently declining.
I think these declines that we see are parochial effects caused by specific mistakes.
that have been made and which can be undone.
Okay, so I want to ask you a question about Bayesianism versus Popperianism.
So one reason why people prefer Bayes is because there seems to be a way of describing changes in epistemic status
when the relative status of a theory hasn't changed.
So, to give you an example.
Currently, the many worlds explanation is the best way to explain quantum mechanics, right?
But suppose we in the future, yeah, okay.
But suppose in the future we were able to build an AGI on a quantum computer
and be able to design some clever interference experiment, as you suggest,
to have it be able to report back being in a superposition across many worlds.
Now, it seems that even though many worlds remains the best or the only explanation,
somehow its epistemic status has changed as a result of the experiment.
And in Bayesian terms, you could say the credence
of this theory has increased. How would you describe these sorts of changes in a Popperian view?
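(For reference, a minimal sketch of what "the credence has increased" means in Bayesian terms, with invented numbers: a prior probability for the theory is updated by how much more likely the observed result is if the theory is true than if it is false.)

```python
# Hypothetical numbers only, to illustrate a Bayesian update.
def bayes_update(prior, p_result_if_true, p_result_if_false):
    """Posterior probability of the theory after seeing the result."""
    p_result = prior * p_result_if_true + (1 - prior) * p_result_if_false
    return prior * p_result_if_true / p_result

# Say the prior is 0.5, and the interference result is far more likely
# if many-worlds is true (0.9) than if it is false (0.1):
print(round(bayes_update(0.5, 0.9, 0.1), 2))  # 0.9, i.e. credence has increased
```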
So what has happened there is that at the moment we have only one explanation that can't be
immediately knocked down. If we had, if we did that thought experiment, we might well decide
that this will provide the ammunition to knock down even
ideas for alternative explanations that have not been thought of yet. I mean, obviously, it wouldn't be
enough to knock down every possible explanation, because for a start, we know that quantum theory
is false. We don't know for sure that the next theory will have many worlds in it. I mean, I think
it will, but we can't prove anything like that. But I would replace the idea of increased
credence with a theory that the experiment will provide a quiver full of arrows or
a repertoire of arguments that goes beyond the known arguments, the known bad arguments,
and will reach into other types of arguments, because the reason I would say that is that some of the existing misconceptions about quantum theory reside in misconceptions about the methodology of science.
Now, I've written a paper about what I think is the right methodology of science where that doesn't apply.
but many physicists and many philosophers would disagree with that,
and they would advocate a methodology of science that's more based on empiricism.
Of course, I think that empiricism is mistaken, can be knocked down in its own terms,
but not everybody thinks that.
Now, once we have an experiment,
such as in my thought experiment, if that was actually done,
then people could not use their arguments based on a fallacious idea of empiricism
because their theory would have been refuted even by the standards of empiricism,
which shouldn't have been needed in the first place.
But, you know, so that's why, I think that's the way I would express it: the
repertoire of arguments would become more powerful if that experiment were done successfully.
The next question I have is, how far do you take the principle that open-ended scientific
progress is the best way to deal with existential dangers? To give one example, many people
have suggested, you have something like gain-of-function research, right? And it's conceivable
that it could lead to more knowledge and how to stop dangerous pathogens. But I guess,
at least in Bayesian terms, you could say it seems even more likely
that it can or has led to the spread of a man-made pathogen that would have not otherwise been
naturally developed. So would your belief in open-ended scientific progress allow us to say,
okay, let's stop gain of function of research? No, it wouldn't allow us to say let's stop it.
It might make it reasonable to say, let us do research into how to
make laboratories more secure before we do gain-of-function research.
It's really part of the same thing.
It's like saying, let's do research into how to make the plastic hoses through which
the reagents pass more impermeable before we actually do the experiments with the reagents.
So it's all part of the same experiment.
I wouldn't want to stop something just because new knowledge might be discovered.
That's the no-no in my view. But which knowledge we need to discover first, that's the
problem of scheduling, which is a non-trivial part of any research and of any learning.
But would it be conceivable for you to say that until we figure out how to make sure these
laboratories are safe to a certain standard, we will stop the research as it exists now, and then
meanwhile we will focus on doing the other kind of research, so gain-of-function can restart,
but until then it's not allowed.
Yes, in principle, that will be reasonable.
I don't know enough about the actual situation to have a view.
I don't know how these labs work.
I don't know what the precautions consist of.
And when I hear people talking about, for example, lab leak,
I think, well, the most likely lab leak is that one of the people who works there walks out of the front door.
So the leak is not a leak from the lab to the outside.
The leak is from the test tube to the person and then from the person walking out the door.
And I don't know enough about what these precautions are or what the state of the art is to know to what
extent the risk is actually minimized. It could be that the culture of these labs is not good
enough, in which case it would be part of the next experiment to improve the culture in the labs.
But I am very suspicious of saying that all labs have to stop and meet a criterion because
I'm sure that the, well, I suspect that the stopping wouldn't be necessary.
and the criterion wouldn't be appropriate.
Again, which criterion to use depends on the actual research being done.
When I had Tyler Cowen on my podcast, I asked him why he thinks,
so he thinks that human civilization is only going to be around for 700 more years.
And then so I asked him, I gave him, you know, your rebuttal,
or what I understand to be a rebuttal, that, you know,
creative, optimistic societies will innovate, you know,
safety technologies faster than totalitarian static societies can innovate destructive technologies.
And he responded, you know, maybe, but the cost of destruction is just so much lower than
the cost of building. And, you know, that trend has been going on for a while now.
What happens when a nuke costs $60,000? Or what happens if there's a mistake like the kinds that,
you know, we saw many times over in the Cold War? How would you respond to that?
First of all, I think we've been getting safer and safer throughout the entire history of civilization.
There were these plagues that wiped out a third of the population of the world or half,
and it could have been 99% or 100%.
We went through some kind of bottleneck 70,000 years ago, I understand,
which they can tell from genetics.
All our cousin species have been wiped out.
So we were much less safe than now.
Also, if an asteroid, a 10-kilometer asteroid,
had been on target with the Earth
at any time in the past 2 million years,
or whatever it is, of the history of the genus Homo,
that would have been the end of it.
Whereas now it'll just mean higher taxation for a while.
You know, that's how much amazingly safer we are now.
I would never say that it's impossible that we'll destroy ourselves.
That would be contrary to the universality of the human mind.
We can make wrong choices.
We can make so many wrong choices that we'll destroy ourselves.
And on the other hand, the atomic bomb accident sort of thing would have had zero chance of destroying civilization.
All they would have done is cause a vast amount of suffering.
But I don't think we have the technology to end civilization even if we wanted to.
I think all we would do if we just deliberately unleashed hell all over the world is we would cause a vast amount of suffering.
But there would be survivors and they would resolve never to do that again.
So I don't think we're even able to, let alone that we would do it accidentally.
But as for the bad guys, well, I think we are doing
the wrong thing largely in regard to both external and internal threats. But I don't think we're doing
the wrong thing to an existential risk level. And over the next 700 years or whatever it is,
well, I don't want to prophesy, because I don't know most of the advances that are going to be
made in that time. But I see no reason why if we are sold.
solving problems, we won't solve problems.
I don't think this is, to take another metaphor,
Nick Bostrom's jar with white balls and one black ball,
and you take out a white ball and a white ball,
and then you hit the black ball and that's the end of you.
I don't think it's like that, because every white ball you take out
reduces the number of black
balls in the jar. So again, I'm not saying that's a law of nature. It could be that the very next
ball we take out will be the black one. That'll be the end of us. It could be. But I think all arguments that it will be are fallacious.
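(A minimal sketch contrasting the two pictures, with invented ball counts: in the fixed-urn version the risk per draw never changes, while in the version described here each safe draw removes a black ball, so the cumulative risk stops compounding toward certainty.)

```python
# Hypothetical illustration only: the ball counts and draw counts are made up.
import random

def survives(n_draws, white=1000, black=10, knowledge_helps=False, seed=0):
    """True if no black ball (catastrophe) is drawn in n_draws draws."""
    rng = random.Random(seed)
    for _ in range(n_draws):
        if rng.random() < black / (white + black):
            return False              # drew a black ball
        if knowledge_helps and black > 0:
            black -= 1                # each safe discovery removes a source of risk
    return True

trials = 2000
fixed = sum(survives(200, seed=s) for s in range(trials)) / trials
declining = sum(survives(200, knowledge_helps=True, seed=s) for s in range(trials)) / trials
print(fixed, declining)  # survival is far more likely when the risk declines with each draw
```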
I do want to talk about the fun criterion. Is your
definition of fun different from how other people define other positive emotions like
eudaimonia or well-being or satisfaction? Is fun a different emotion?
I don't think it's an emotion.
And all these things are not very well defined.
They can't possibly be very well defined until we have a satisfactory theory of qualia, at least,
and probably also a satisfactory theory of creativity, how creativity works and so on.
I think that the choice of the word fun for the thing that I...
explain more precisely, but still not very precisely, as the creation of knowledge
where the different kinds of knowledge, inexplicit, unconscious, conscious, explicit,
are all in harmony with each other... I think that actually the only way in which the
everyday usage of the word fun differs from that is that fun is considered frivolous
or seeking fun is considered as seeking frivolity.
But I think that isn't so much a different use of the word.
It's just a different pejorative theory about whether this is a good or a bad thing.
But nevertheless, I can't define it precisely.
The important thing is that there is a thing which has this property of fun, that you can't
compulsorily enact it. So in some views, you know, no pain, no gain.
Well, then you can find out mechanically whether the thing is causing pain and whether it's
doing it according to the theory that says that you will have gain if you have that
pain and so on. So that can all be done mechanically. And therefore, it is subject to the criticism.
And another way of looking at the fun theory is that it's a mode of criticism. It's subject to the
criticism that this isn't fun, i.e., this is making and privileging one kind of knowledge arbitrarily
over another rather than being rational and letting content decide.
Is this placing a limitation on universal explainers, then, if they can't create some sort of theory
about why a thing could or should be fun, why anything could be fun? And it seems to me that
sometimes we actually can make things fun that aren't. Like, for example, take exercise: no pain,
no gain. It's like, when you first go, it's not fun, but, you know, once you start going, you understand
the mechanics, you develop a theory for why it can and should be fun. Yes, yes. Well, that's
quite a good example because there you see that fun
cannot be defined as the absence of pain.
So you can be having fun while experiencing physical pain.
And that physical pain is sparking not suffering, but joy.
However, there is such a thing as physical pain, not sparking joy, as Marie Kondo would say.
And that's important because if you are dogmatically or uncritically implementing in your life,
a theory of the good that involves pain and which excludes the criticism that maybe this can't be fun,
or maybe this isn't yet fun, or maybe I should make it fun, and if I can't, that's a reason
to stop, you know, all those things. If all those things are excluded, because by definition,
the thing is good and your pain, your suffering doesn't matter, then that opens the door to,
not only to suffering, but to stasis. You won't be able to get to a better theory.
And then why is fun central to this instead of another emotion? So, you know, like,
Aristotle thought that, like, I guess, a sort of widely defined sense of happiness
is what should be the goal of our endeavors. Why fun instead of something like that?
Well, that's defining it vaguely enough that what you said might very well be fun. The point is the underlying thing. As far as going one level below: to really understand that, we'd need to go about seven levels below, which we can't do yet. But the important thing is that there are several kinds of knowledge in our brains. There's the one that is written down in the exercise book that says you should do this number of reps, and you should power through this, and it doesn't matter if you feel that, and so on. That's an explicit theory, and it contains some knowledge, but it also contains error. All our knowledge is like that.
We also have other knowledge which is contained in our biology.
It's contained in our genes.
We have knowledge that is inexplicit; our knowledge of grammar is always my favorite example. We know that certain sentences are acceptable and that others are unacceptable, but we can't state explicitly in every case why they are or aren't.
So there's explicit and inexplicit knowledge, and there's conscious and unconscious knowledge. All of those are bits of program in the brain. They're ideas; they are bits of knowledge. If you define knowledge as information with causal power, they are all information with causal power. They all contain truth, and they all contain error. And it's always a mistake to shield one of them from criticism or replacement. Not doing that is what I call the fun criterion. Now, you might say that's a bad name, but it's the best I can find.
So why would creating an AGI through evolution necessarily entail suffering? Because the way I see it, it seems your theory is that you need to be a general intelligence in order to feel suffering, but by the point an evolved, simulated being is a general intelligence, we can just stop the simulation. So where is the suffering coming from?
Okay.
So the kind of simulation by evolution that I'm thinking of, there may be several kinds, but the kind that I'm thinking of, and which I said would be the greatest crime in history, is the kind that just simulates the actual evolution of humans from pre-humans that weren't people. So you have a population of non-people, which in this simulation would be some kind of NPCs, and then they would just evolve. We don't know what the criterion would be. We just have an artificial universe which simulated the surface of the earth, and they'd be walking around, and some of them might or might not become people.
And now the thing is, when you're part of the way there, the only way that I can imagine the evolution of personhood, of explanatory creativity, happening is that the hardware needed for it was first needed for something else. I have proposed that it was needed to transmit memes.
So there'd be people who were transmitting memes creatively, but they were running out of resources. They weren't running out of resources, though, before the creativity managed to increase their stock of memes. So in every generation, there was a stock of memes that was being passed down to the next generation. And once the memes got beyond a certain complexity, they had to be passed down by the use of creativity by the recipient.
So there may well have been a time, and as I say, I can't think of any other way it could have been, where there was genuine creativity being used, but it ran out of resources very quickly, though not so quickly that it didn't increase the meme bandwidth. Then in the next generation, there was more meme bandwidth. And then, after a certain number of generations, there would have been some opportunity to use this hardware, or whatever it is (it's firmware, I expect), for something other than just blindly transmitting memes, or rather creatively transmitting memes, but they were blind memes. So in that time, it would have been very unpleasant to be alive. It was very unpleasant to be alive even when we did have enough resources to think as well as do the memes. But I don't think there would have been a moment at which you would say, yes, now the suffering begins to matter because it's not just blind memes. I think the people were already suffering at the time when they were blindly transmitting memes, because they were using genuine creativity. They were just not using it to any good effect.
Gotcha.
Would being in the experience machine be compatible with the fun criterion? So you're not aware that you're in the experience machine; it's all virtual reality. But you're still doing the things that would make you have fun, in fact, more so than in the real world. So would you be tempted to get into the experience machine? Would it be compatible with the fun criterion?
I guess those are different questions, but I'm not sure what the experience machine is. I mean, is it just a virtual reality world in which things work better than in the real world, or something?
Yeah.
So it's a thought experiment by Robert Nozick. And the idea is that you would enter this world, but you would forget that you're in virtual reality. So, I mean, the world would be perfect in every possible way it could be perfect, or not perfect, but better in every possible way it could be better. But you would think the relationships you have there are real, the knowledge you're discovering there is novel, and so on. Would you be tempted to enter such a world?
Well, no, I certainly wouldn't want to enter a world, any world, which involves erasing
the memory that I have come from this world. Related to that is the fact that the laws of physics
in this virtual world couldn't be the true ones, because the true ones aren't yet known. So I'd be in a world in which I was trying to learn laws of physics which aren't the actual laws, and they would have been designed by somebody for some purpose, to manipulate me, as it were. Maybe it would be designed to be a puzzle that would take 50 years to solve. But it would have to be, by definition, a finite puzzle. And it wouldn't be the actual world. And meanwhile, in the actual world,
things are going wrong. And I don't know about this. And eventually they go so wrong that my computer
runs out of power. And then where will I be?
The final question I always like to ask people I interview is: what advice would you give to young people? For somebody in their 20s, is there some advice you would give them?
Well, I try very hard not to give advice, because it's not a good relationship to be in with somebody, to give them advice. I can have opinions about things. So, for example, I may have an opinion that it's dangerous to condition your short-term goals by reference to some long-term goal.
And I have what I think is a good epistemological reason for that, namely that if your short-term goals are subordinate to your long-term goal, then if your long-term goal is wrong or deficient in some way, you won't find out until you're dead. So it's a bad idea, because it is subordinating the things that you could error-correct now, or in six months' time, or in a year's time, to something that you could only error-correct on a 50-year timescale. And then it'll be too late. So I'm suspicious of advice of the form "set your goal," and even more suspicious of "make your goal be so-and-so."
Interesting. But why do you think the relationship between advisee and advice-giver is dangerous?
Oh, well, because it's one of authority. Again, you know, I tried to make the example of "advice" that I just gave non-authoritative. I just gave an argument for why certain other arguments are bad. But if it's advice of the form "a healthy mind in a healthy body," or "don't drink coffee before 12 o'clock," or, you know, something like that, well, it's a non-argument.
If I have an argument, I can give the argument and not tell the person what to do. Who knows what somebody might do with an argument? They might change it to a better argument, which actually implies different behavior. I can contribute arguments to the world, make arguments as best I can. I don't claim that they are privileged over other arguments. I just put them out because I think they work, and I expect other people not to think that they work. I mean, we've just done this in this very podcast. You know, I put out an argument about AI and that kind of thing, and you criticized it. You know, if I was in the position of making that argument and saying that therefore you should do so-and-so, that's a relationship of authority, which I think is immoral to have.
Well, David, thanks so much for coming on the podcast, and thanks so much for giving so much of your time. Fascinating.
Thank you for inviting me.
