Making Sense with Sam Harris - #57 — An Evening with Richard Dawkins and Sam Harris (1)
Episode Date: December 19, 2016. Sam Harris speaks with Richard Dawkins at a live event in Los Angeles (first of two). They cover religion, Jurassic Park, artificial intelligence, elitism, continuing human evolution, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
On today's podcast, I'll be playing the audio from the first of two live events I did with Richard Dawkins in Los Angeles last month. These were fundraisers for his foundation,
the Richard Dawkins Foundation for Reason and Science, which is also in the process of merging
with the Center for Inquiry, making them the largest
foundation for defending science and secularism from politically weaponized religion. Their work
is suddenly even more relevant in the U.S. because although Trump himself isn't a religious demagogue,
he's promised to appoint a few to the Supreme Court. And he's also put a creationist in charge
of the Department of Energy, which both stewards our nuclear weapons and funds more basic science research than any
other branch of government. So now we have Rick Perry in charge of all that. His immediate
predecessors were each physicists. One was a Nobel laureate. And Perry is a man who I would be willing to bet my life couldn't utter three coherent sentences
on the topic of energy as a scientific concept. So I would urge you to become a member of CFI or
the Richard Dawkins Foundation. One membership now covers both organizations. And once you are
a member, you'll occasionally receive action alerts requesting that you contact your elected
representatives on matters of public
policy. As many have noted, non-believers are somewhere between 10 and 25 percent of the
U.S. population. It's hard to know for sure, but we almost certainly outnumber many other subgroups
in the U.S., and we are disproportionately well-educated, needless to say, and yet we have
almost no political power. Right now, this will only change once we make ourselves heard. So
Richard was doing a speaking tour to raise funds for his foundation and for CFI, and he asked me
to join him at one of these events. And our event in L.A. sold out almost immediately. And so we
booked the hall for a second night, and that sold out too.
And I'll bring you the audio from that second event in a later podcast. But as you'll hear,
we had a lot of fun, and it was a great crowd, and it was really satisfying to have a conversation
like this live, as opposed to privately over Skype. So as I'll say at the end, this has given
me an idea for how to produce some more podcasts
like that. And now I give you an evening with Richard Dawkins, the first night.
Thank you all for coming. This is really, it's an honor to be here, and it really is an honor to be
here with you, Richard. I get to return the favor.
He had me at Oxford I think five years ago. So welcome to Los Angeles.
So I'm going to, this is going to be very much a conversation, but what I did, I was worried about this. I wasn't worried about tonight. I was worried about tomorrow night. My fear was that
Richard and I would have a scintillating conversation tonight, and then tomorrow night try doggedly to recapitulate it word for word,
and yet feign spontaneity. And if you know my position on lying, you know, that doesn't work.
So what I did is I went out to all of you asking for questions, and I got thousands.
And so I picked among what looked promising.
So I can guarantee that the two nights will be reasonably different because different questions will come up.
But we won't hew too narrowly to the questions.
We'll just have a conversation.
But as we come out here, I find that I want to ask you, Richard, about your socks.
And I'm not sure what the question is.
I've just come from Las Vegas, from the CSICon conference.
And one of the things we had was a workshop on cold reading,
which is the technique whereby so-called mentalists are supposed to read people's thoughts.
And what they're really doing is just simply looking at the clothes and the general appearance and
assessing it. We had to pair off for this workshop, and I was with a nice young woman. We sort of sized each other up, and I said to her, "I think I'm getting that you come from somewhere in the west of the States, maybe not California, maybe a bit further north." And of course I was simply reading her label, which said she came from Oregon. And then she summed me up when she said, "I think you may have some problem with your eyes, maybe colorblind." And I'm serious about this:
I'm trying to spread a meme for wearing odd socks. There's a kind of tyranny of forcing us to buy socks in pairs. Shoes have chirality: the left shoe and the right shoe are not interchangeable. But socks don't.
And when you lose one of a pair of socks, you're forced to throw the other one away.
It's absurd.
So what I want...
Although, honestly, Richard, you just told me a story that suggests that shoes are interchangeable.
Oh, my God, that's right. That's rather an embarrassing story.
Someone is going to find the relevant video on the Internet.
I will tell the story. Now you've let the cat out of the bag.
I was doing a television film called Sex, Death, and the Meaning of Life.
And in the death episode, we were talking about suicide.
And there's a famous suicide spot.
It's a bit like San Francisco, the Golden Gate Bridge,
where people have famously jumped to their death.
And this place, Beachy Head, is a very, very high cliff in the south of England. All around it there are rather sad little crosses where people have jumped off.
And we were filming the sequence on suicide,
and I had to walk very solemnly and slowly
and in a melancholy frame of mind past these crosses,
and the camera was focused on my feet,
walking past these little low crosses.
And I felt incredibly uncomfortable.
I had this sort of uncanny feeling of being uncomfortable.
I couldn't understand why.
And then eventually, it was my feet that were uncomfortable,
walking past these crosses.
And eventually the director called cut and we went off
and I took my shoes off because they were so painful.
And only then did I realise I'd put them on the wrong way round.
So this is preserved for posterity in close-up.
I want to see that video, someone find that video.
The weird thing is, none of the television audience ever wrote in to complain about this.
So maybe this at least will arouse their attention.
So the first question, Richard,
which I thought could provoke some interesting reflection,
is why do you both court so much controversy?
Well, we don't do it.
We don't court it. It pursues us.
Well, I think, I mean, what I've noticed is that there are undoubtedly people who are friends of ours, colleagues of ours, who agree with us down the line,
who seem to feel no temptation to pick all of the individual battles we pick.
And one doesn't have to be a coward not to want to fight all of these
culture war battles, although it helps. But we have friends who are decidedly not cowards who,
I mean, someone like Steve Pinker, he stakes out controversial positions, but he is not
in the trenches in quite the same way as we are. And I'm wondering what you think about that.
Did you see a choice for yourself?
Do you find yourself revisiting this choice periodically?
I think it's a perfectly respectable position to take
that a scientist has better things to do.
And I don't take that position, and I think you don't either.
I do think it's important to fight the good fight
when we do have, when science, when reason
has vocal and powerful and well-financed enemies.
And so I'm not sure what particular battles
the questioner has in mind when he says we court controversy.
But I suppose I believe so strongly in truth.
And if I see truth being actively
threatened by competing ideologies which actually not
only would fight for the opposite of truth,
but would indoctrinate children in the opposite of truth, I feel
impelled to fight only verbally. I mean, I don't feel impelled to actually get a rifle or something.
Well, there's time yet.
So, I guess the dogma that has convinced so many fellow scientists and intellectuals,
academics, that there is no reason to fight, certainly one of those dogmas is Stephen Jay
Gould's idea of NOMA, non-overlapping magisteria.
That strikes me as a purely wrong-headed and destructive idea.
Do you want to explain that to me?
I think so.
I think we probably agree about that.
Non-overlapping magisteria.
He wrote a book called, what was it called again?
Rocks of Ages?
Rocks of Ages, that's right.
So science has the age of the rocks
and religion has the rock of ages.
And the idea was that science and religion
both have their legitimate territories,
which they shouldn't impinge upon each other.
Science has the truth about the real world,
and that's science's department.
Religion has what he described as moral questions
and, I think, deep questions of existence.
Meaning and morality.
Well, I would strongly dispute
the idea that we should get our morals from religion.
For goodness sake, whatever else we get our morals from,
it must not be religion.
If you imagine what the world would be like if we actually did get our morals from the Bible or the Koran, it would be totally appalling. It was appalling in the time when we did get them from the Bible, and it is now appalling in those countries where they get them from the Koran. So don't let's get our morals from religion. As for the deep fundamental questions, I take those to be things like:
Where did the laws of physics come from?
Hmm. What is the origin of all things? What is the origin of the cosmos?
What happened before the Big Bang?
Those are scientific questions.
It may be that science can never answer them,
but if science cannot answer them,
sure as hell religion can't answer them.
I don't actually think anything can answer them if science can't.
It's an open question whether things like the origin of the physical constants,
those numbers which physicists can measure but can't explain,
the origin of the laws of physics, whether those will ever be explained by science.
If they are, well and good.
If they're not, then nothing will
explain them. The idea, I mean, Steve Gould was careful to say that these separate magisteria
must not encroach on each other's territory. And so the moment religion encroaches on science's
territory, for example, in the case of miracles, then it's fair game for scientific
criticism. But my feeling about that is that if you take away the miracles from religion,
you've taken away most of why people believe in them. People believe in the supernatural
because they believe biblical or Koranic stories which suggest that there have been supernatural miracles.
And if you deprive them of that, then they've lost everything.
To take Christianity as only one example, that has been spelled out in every generation.
I mean, starting with Paul, he said, you know, if Christ be not risen, your faith is vain.
Yeah, exactly.
Or something close to that.
Yes.
So it's, you can't get around the fact that religious people care about what's true,
and they purport to be making claims, truth claims, about the nature of reality.
They think certain historical figures actually existed.
Some of them may be coming back.
Yes, virgin birth.
Books issued occasionally from a divine intelligence.
And so there's no way to...
I never met Gould,
but I just can't believe the currency this idea has in science.
No, I agree.
It's become very fashionable among the scientific establishment.
It was more or less endorsed by the US National Academy of Sciences.
As for the separation, as for the idea
that religion doesn't stray into science's territory,
imagine the following scenario.
Imagine that some sort of scientific evidence,
perhaps DNA evidence, were discovered,
perhaps somewhere in a cave in Palestine,
and it was demonstrated that, say,
Jesus never had a father.
I mean, it's inconceivable how that could happen.
Just suppose it was, suppose there was scientific evidence.
Can you imagine theologians saying,
oh, that's science, that's not our department,
we're not going to, they're not going to,
they would love it, it would be meat and drink to them.
Yeah, yeah.
Many people who are not atheists believe that your efforts against religion are wasted and that the net result of your work is to simply offend religious people.
There's a widespread myth that people can't be reasoned out of their faith.
Please talk about this.
It's just uncanny that there are the most memorable quips and quotes and phrases.
Anything that is aphoristic seems to have undue influence on our thinking. And there's this
aphorism that is usually attributed to Swift. And I think he says something like it. It's not
quite the version that has been passed down to us, but this idea that you can't reason someone
out of a view that he wasn't reasoned into. And this just strikes the mind of Homo sapiens as so obviously true, and if you
look at my inbox, it is so obviously false. So tell me about your experience reasoning with
your readers. I think it would be terribly pessimistic to think that you cannot reason.
I mean, I think I'd just give up,
probably die if I thought I couldn't
reason people out of their silliness.
I would accept, would you agree with this,
that there are some people
who demonstrably do know all the evidence
and even understand the evidence, but yet still persist in...
Yeah, well, there'll be a couple of questions that will bring us onto that territory, because I think there's more to reason about than science has tended to allow, or that secular culture has tended to allow. So people have these intense transformative experiences,
or they have these hopes and fears
that aren't captured by you saying,
don't you understand the evidence for evolution?
But this is more of a conversation
that people don't tend to have.
But yeah, I would agree that people certainly resist
conclusions that they don't like the taste of. I can think of two examples. One I mentioned in
the reception beforehand. A professor of astronomy somewhere in America who writes
papers, mathematical papers, in astronomical journals in which his mathematics, his mathematical ideas, accept that the universe is 13.8 billion
years old, and yet he privately believes it's 6,000 years old. So here is a man who knows
his physics, he knows his astronomy, he knows the evidence that the universe is 13 billion
years old, and yet so split-brained is he
that he actually privately departs from everything in his professional life.
Well, surely we have to accept that he cannot be reasoned out.
I mean, he already knows the evidence
and will not be reasoned out of his foolishness.
Yeah, I didn't say that you could always succeed, but I think, and clearly I may have this bias, as you do, that if the conversation could just proceed long enough, the ground for science would continually be conquered, and it never gets reversed.
And it is being and will be.
Yeah.
Yeah.
And you never see the, I mean, this
is a unidirectional conquest of territory.
So you never see a point about which science
was once the authority, but now the best answer is religious.
Yeah, that's right.
Right?
But you always see the reverse of that.
And that's.
And actually, most scientists who call themselves religious,
if you actually probe them,
they don't believe really stupid things
like six-day creation and things.
Most of them don't.
Although I find that Christian scientists,
not Christian scientists as in the cult,
but scientists who happen to be Christian,
believe much more than your average rabbi.
This is a way.
That's true.
Yeah, Christian and Muslim scientists no doubt return the favor.
I get the feeling your average rabbi,
like your average chaplain of an Oxford college,
doesn't actually believe in God at all.
Yeah.
I've met that rabbi.
So, there's a couple of fun questions here that I just want, I just wanted to hear Richard
react to.
Are there any biological extinctions that you would consider virtuous?
For instance, should we eradicate the mosquito?
You have 10 seconds to decide.
It would have to be more than one mosquito.
There's the malaria mosquito, the yellow fever mosquito.
Yeah, all mosquitoes.
Mosquitoes are unbelievably beautiful creatures.
That's the most irrational thing ever.
The great expert on fleas, she presented the Department of Zoology in Oxford with a gigantic blown-up photograph of a mosquito, and it was a fantastic work of art.
By a malevolent god.
Yes.
If ever there were proof of God's malevolence
it's got to be the mosquito.
I have no hesitation in killing
individual mosquitoes.
Wouldn't you want to be a little more efficient than that with CRISPR or something?
I haven't thought about it before. I think I would not wish to completely extinguish.
Can I throw a little more on the balance?
We've had, reliably, year after year,
2 million people killed by mosquito-borne illness.
Now it's cut down to, I think, 800,000,
so we're making progress with bed nets.
For some reason, I find myself less reluctant
to extinguish the malarial parasite that the mosquito bears,
but that's probably not very logical.
I mean, we have extinguished the smallpox virus, except for a few lab cultures.
Yes, and then, like geniuses, we tell people how to synthesize it online.
So the flip side of that, of course,
is the Jurassic Park question.
Should we reboot the T-Rex?
Yes.
Yes.
That's fantastic.
I thought the Jurassic Park method of doing it was incredibly ingenious, and I love that. What was not ingenious was the ludicrous injection of chaos theory, or one of those nine days' wonder fashionable things.
I don't remember.
But the idea of getting mosquitoes in amber and extracting DNA and reconstructing
dinosaurs, that's an amazingly good science fiction idea,
if only it were possible.
Unfortunately, the DNA is too old for that to happen.
If it were, I would definitely wish to see that done.
What could go wrong?
Richard seems to want to live in a maximally dangerous world.
Filled with mosquitoes and T-Rexes.
So now, you and I were speaking about your books. You've written some very important books, 10 years apart. And so you have an anniversary this year of The Selfish Gene, which is the 40th. And The Blind Watchmaker has its 30th anniversary. And Climbing Mount Improbable is the 20th. And then The God Delusion is the 10th.
So actually, I wanted to give you a chance to talk about the titles of the first two. The Selfish Gene has provoked an inordinate amount of confusion, and The Blind Watchmaker
is a phrase that is useful to understand.
So do you want to talk about that?
The Selfish Gene is misunderstood, I think mostly by those who've read it by title only,
as opposed to the rather substantial footnote to the title, which is the book itself.
It could equally well have been called the altruistic individual
because one of the main messages of the book
is that selfish genes give rise to altruistic individuals.
So it is mostly a book about altruism,
mostly a book about the opposite of selfishness.
So it certainly should not be misunderstood as advocating selfishness
or saying that we are, as a matter of fact, always selfish.
All it really means is that natural selection works at the level of the gene as opposed
to any other level in the hierarchy of life.
So genes that work for their own survival are the ones that survive, tautologically enough, and they are the ones
that build bodies. So we, all of us, contain genes that are very, very good at surviving
because they've come down through countless generations and they are copied accurately
with very high fidelity from generation to generation, such that there are genes in you
that have been around for hundreds of millions of years.
And that's not true of anything else in the hierarchy of life.
Individuals die.
They survive only as a means to the end of propagating the genes that built them.
So individual bodies, organisms should be seen as vehicles, machines built by the genes that ride inside them for passing on those very same
genes. And it is the potential eternal long-livedness of genes that makes them the unit
of selection. So that's really the meaning of the selfish gene. As I said, the book could
have been called the altruistic individual. It could have been called the cooperative
gene for another reason. It could have been called the immortal gene, which is a more
sort of Carl Sagan-esque title. It's a more poetic title. And in some ways, I rather regret
not calling it the immortal gene.
But anyway, I'm stuck with it now.
There's a common misunderstanding of evolution
that leads people to believe that absolutely everything about us
must have been selected for, otherwise it wouldn't exist.
Yes.
So people ask about what's the evolutionary rationale for
post-traumatic stress disorder or depression? I'm not saying that there is no conceivable one,
but it need not be the case that everything we notice about ourselves was selected for,
or that there's a gene for that. This is very interesting. I mean,
this, I mean, I'm actually a bit of an outlier here.
I mean, I'm about as close as biologists come to accepting what you've described as a misconception,
because I do think that selection is incredibly powerful, and mathematical models show this. J.B.S. Haldane, the great...
one of the three founding fathers of population genetics,
did a theoretical calculation
in which he postulated an extremely trivial character.
He didn't mention it, but it might have been eyebrows.
Suppose you believe that
eyebrows have been selected because they stop sweat running down your forehead into your
eyes. And it sort of sounds totally trivial. How could that possibly save a life? Until
you realize, the first thing you might realize is that it could save your life if you were about to be attacked by a lion.
And just a slight split second difference in how quickly you see the lion,
because you've got sweat in your eyes.
Since the invention of sunblock, I think that's undoubtedly true.
Yeah, okay.
But Haldane actually did a mathematical calculation.
He said, let us postulate a character so trivial that the difference between an individual
who has it and an individual who doesn't have it is only one in a thousand.
That's to say, for every thousand individuals who have this, say, the eyebrows and survive,
999 who don't have it survive.
So from any actuarial point of view, a life insurance calculator would say, well, that's
totally trivial.
But it's not trivial when you think that the gene concerned is represented in thousands of individuals in the population
and through thousands of generations,
that multiplies up the odds.
And Haldane's calculation was that
if you postulate that one in a thousand advantage,
he then worked out how long would it take
for the gene to spread from being,
I forget exactly the figures,
but say
1% of the population up to 50% of the population. And it was a number of generations so short
that it would be negligible on the geological time scale. So it would appear to be an instantaneous
piece of evolutionary change, even though the selection pressure was trivial.
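For anyone who wants to see the arithmetic, here is a minimal sketch, not Haldane's published calculation, that iterates the standard single-locus selection recursion with the illustrative numbers mentioned above (a one-in-a-thousand advantage, spreading from 1% to 50% of the population):

```python
# Minimal sketch (illustrative figures, not Haldane's own): each generation the
# favoured gene's frequency p follows the genic-selection recursion
# p' = p * (1 + s) / (1 + s * p), where s is the selective advantage.

def generations_to_spread(s=0.001, p=0.01, target=0.5):
    """Count generations for the favoured gene to rise from p to target."""
    gens = 0
    while p < target:
        p = p * (1 + s) / (1 + s * p)
        gens += 1
    return gens

print(generations_to_spread())  # roughly 4,600 generations for s = 0.001
```

Even with a generation time of twenty years, that comes to well under 100,000 years, which is indeed negligible on a geological time scale.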
Well, actually, selection pressures in the wild, when they've been measured, have been
far, far stronger than that.
But there's another way of approaching the question you raise when you say something
like selective advantage in various psychological diseases or something like that.
It may be that you're asking the wrong question.
It may be that by focusing on the particular characteristic
which you asked the question about,
you're ignoring the fact that there's something associated
with that, which you've, let me think of an example.
There's a, you know that at night,
if you've got a lamp outside,
or a candle is better, if you've got a candle,
insects, moths, say, come and sort of, as it were, commit suicide.
I mean, they just burn themselves up in the candle.
And you could ask the question,
what on earth is the survival value of suicidal self-immolation behaviour in moths?
Well, it's the wrong question, because a probable explanation for it is that many insects use
a light compass to steer a straight line.
Lights at night, until humans came along and invented candles, lights at night were always
at optical infinity.
They were things like the moon, the stars, or the sun during the day.
And if you maintain a fixed angle relative to these rays
that are coming from optical infinity,
then you just cruise in a straight line, which is just what you want to do.
A candle is not at optical infinity.
And if you work out mathematically what happens
if you maintain a fixed acute angle
to the rays that are emanating in all directions out of a candle,
you perform a neat logarithmic spiral into the candle flame.
So this is an accidental byproduct of a mechanism which really does have survival value.
You have to rephrase the question, what is the survival value of
maintaining a fixed angle to light rays? And then you've got the answer. So to ask the
question, what's the advantage of suicidal self-immolation, you've shifted to the wrong
question.
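To make the geometry concrete, here is a toy simulation of my own (not anything from the conversation): a "moth" steps forward while always holding a fixed angle between its heading and the direction to the light. With a source effectively at optical infinity the bearing never changes, so the rule yields a straight line; with a nearby candle the bearing keeps shifting, and the very same rule winds the moth into a spiral ending at the flame.

```python
import math

def fly(moth_x, moth_y, light_x, light_y, fixed_angle_deg=80.0,
        step=1.0, max_steps=5000):
    """Step a moth forward, always holding a fixed angle to the bearing of the light."""
    path = [(moth_x, moth_y)]
    offset = math.radians(fixed_angle_deg)
    for _ in range(max_steps):
        bearing = math.atan2(light_y - moth_y, light_x - moth_x)
        heading = bearing + offset          # keep a fixed angle to the light's rays
        moth_x += step * math.cos(heading)
        moth_y += step * math.sin(heading)
        path.append((moth_x, moth_y))
        if math.hypot(light_x - moth_x, light_y - moth_y) < step:
            break                           # reached the flame
    return path

spiral = fly(100.0, 0.0, 0.0, 0.0)      # nearby candle: the path spirals into the flame
straight = fly(100.0, 0.0, 1e9, 1e9)    # "moon" at optical infinity: the path stays straight
print(len(spiral), len(straight))
```

The same fixed-angle rule produces both outcomes; only the geometry of the light source differs, which is the point about asking the right question.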
Right. And there are related issues. So there are things which provide some survival advantage if you have one copy of the gene,
but if you have both copies, then it's deleterious.
Yes, like sickle cell anemia.
Right, right.
So what do you do with the concept of a spandrel, though?
Gould's concept of a spandrel, is that useful to think about?
Yeah, okay, yes.
Spandrels are... Lewontin and Gould wrote a notorious and overrated paper
in 1979,
in which Gould went to King's College, Cambridge,
where there's the most beautiful building,
and the Gothic arches inevitably form gaps, which are called spandrels, and they actually have a name, and they're often filled with ornamentation. And the spandrels themselves are accidental byproducts of something which really matters, which is the Gothic arch. And so the point they were making is that things that we...
It's really almost the same point that I was making just now
about asking the wrong question. Spandrels are...
You can't ask what's the purpose of a spandrel.
That's right, yes.
It's derivative of the thing you were building.
Exactly, yes.
What are your thoughts about artificial intelligence?
Please discuss its relationship to biological evolution
and how it could develop in the future.
I think it's a question for you, Sam.
Yes.
Well, I fear everyone's heard my thoughts on artificial intelligence.
I find this increasingly interesting.
It's something that I became interested in very late.
And in fact, unless you were in the AI community until very recently,
the dogma that had been exported from computer science to neuroscience and psychology and
adjacent fields was that AI basically hadn't panned out. I mean, there was no
real noticeable success there that should get anyone worried or particularly excited.
Then all of a sudden people started making worried noises, and then there were obvious gains in
narrow AI that were getting sexier and sexier. And that was really the first time I thought about the implications of ongoing progress in building intelligent machines, progress at any rate. It really doesn't have
to be that Moore's Law continues indefinitely. We just need to keep going. And at a certain point,
we will find ourselves in the presence of machines that are as intelligent as we are.
They may not be human-like, although presumably we'll build them to be
as much like ourselves in all the good ways as possible. But this interests me for many
different reasons because it, one, I'm actually worried in terms of existential risk, it's
on my short list for things to actually worry about. But the flip side of that is that it's
one of the most hopeful things. If anything seems intrinsically good, it's intelligence and we want more of it.
So insofar as it's reasonable to expect that we are going to make more and more progress
automating things and building more intelligent systems,
that seems very hopeful and I think we can't but do it.
And the other point of interest for me, and this is kind of my hobby horse,
is actually what we were talking about on stage last time
some years ago when I wrote The Moral Landscape.
I'm interested in collapsing this perceived distance
between facts and values,
the idea that morality somehow is uncoupled
from the world of science and truth claims.
And I think that once we have to start building,
and we have to start even now with things like self-driving cars,
once we start building our ethics into machines
that within their domain are more powerful than we are,
the sense that there are no better and worse answers to ethical questions,
that we should all be moral relativists, that all cultures are equal with respect to what constitutes a good life.
That just, I mean, there's going to be somebody sitting at the computer waiting to code something,
and if you don't put…
You've actually got to build in some moral values.
You have to build in the values, and if you don't build them in explicitly, you are building in values anyway. So if you build a self-driving car that isn't distinguishing between people and mailboxes,
well, then you've built a very dangerous self-driving car.
The more relevant tuning, which people have to confront, is do you want a car that,
and the car's going to have to make a choice between protecting the occupant and protecting pedestrians, say.
So how much risk do you want, as the driver of the car, to assume in order to spare the lives of pedestrians?
You're constantly facing a trolley problem, and you're the one to be sacrificed.
And your point is that whereas trolley problems are these hypothetical things where you have to imagine
you've got a runaway trolley and you're standing at points, and it's about to mow down five people,
and if you pull the lever to swing the points, it'll kill one person.
So you, with holding the lever in your hand, have the dilemma, should I save five people and kill one? But you know
that by your action in pulling the lever, you're going to kill a person who wouldn't
otherwise have died. And I think, Sam, you're making the point that AI, I mean, automatic
machines, robotic machines, are going to need to have a moral system built into them. And
so that the trolley problem is going to be faced by
the programmer who's actually writing the software.
Oh, it's already the case, yeah.
Yes.
And it just will proceed from there.
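To make that concrete, here is a deliberately crude sketch of my own (hypothetical numbers, not anyone's actual system): the single weight the engineer chooses is already an ethical judgment about whose risk counts for more.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_risk: float    # probability of serious harm to the occupant
    pedestrian_risk: float  # probability of serious harm to a pedestrian

def choose(manoeuvres, occupant_weight=1.0):
    """Pick the manoeuvre with the lowest weighted expected harm.
    occupant_weight > 1 favours the occupant; < 1 favours pedestrians."""
    return min(manoeuvres,
               key=lambda m: occupant_weight * m.occupant_risk + m.pedestrian_risk)

options = [
    Manoeuvre("swerve into barrier", occupant_risk=0.30, pedestrian_risk=0.00),
    Manoeuvre("brake in lane", occupant_risk=0.02, pedestrian_risk=0.40),
]
print(choose(options, occupant_weight=1.0).name)   # equal weighting: the car swerves
print(choose(options, occupant_weight=10.0).name)  # favouring the occupant: it brakes
```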
So just imagine a system more intelligent than ourselves that we have seeded with our
morality.
And again, this is going to be a morality that the smartest people we can find doing this work
will have to agree by some consensus
is the wisest morality we've got,
and so obviously the Taliban and al-Qaeda
are not going to get a vote in that particular project.
At that first pass, everything you've heard
about moral relativism just goes out the window because we will be desperate to find
the best answer we can find on every one of these questions and desperate to build a machine that when it,
in the real limit case where it begins to make changes to itself,
it doesn't make changes that we find, in the worst case, incompatible with our survival.
Making changes to itself is what more conventionally worries people.
The von Neumann machine, which is capable of reproducing
and thereby possibly evolving by natural selection
and completely supplanting humans, completely taking over.
This is, of course, a science fiction scenario,
but it's not totally unrealistic.
Not at all, given the fact that one path toward developing AI
is to build genetic algorithms that function along similar lines.
There's a Darwinian principle of just it getting better and better
in response to data and error correction,
and it may not even be clear how it has gotten better.
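As a concrete picture of the kind of Darwinian improvement being described, here is a toy genetic algorithm of my own (nothing from the conversation): candidate solutions are scored, the fittest are kept, and random mutation supplies variation, so the population gets better without anyone specifying how the improvement is achieved.

```python
import random

TARGET = [1] * 32                          # toy goal: an all-ones bit-string

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=100, generations=300):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]              # perfect solution found
        survivors = population[: pop_size // 5]    # selection: keep the top 20%
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return generations, max(population, key=fitness)

gen, best = evolve()
print(f"best fitness {fitness(best)}/{len(TARGET)} after {gen} generations")
```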
So we could look forward to a time in the distant future
when we have a hall like this filled with silicon and metal machines
looking back and speculating on some far distant dawn age
when the world was peopled by soft, squishy, organic, water-based life forms.
But then the data transfer would be instantaneous, so there'd be no reason to come out here. You'd just take the firmware upgrade.
But the world will be a better and a happier place.
Well, my real fear is that it won't be illuminated by consciousness at all. Because I'm agnostic at the moment as to whether or not
mere
information processing and a scaling of intelligence, by definition, gets you consciousness. It may,
in fact, be the case that it gets you consciousness. I'm not conscious, by the way.
It is a genuine, a very difficult philosophical problem, I think. Why? I mean, it would seem to
be perfectly possible to build a machine or an animal or a human which can do all the things
that we do, all the intelligent things that we do, all the life-saving things that we do, and yet not be conscious. And it's
genuinely mysterious why we need to be conscious, I think. Yeah, and I think it remains so. I think
it's because consciousness is, the conscious part of you is generally the last to find out about what your mind just
did. You know, you're not, you're playing catch-up. And what you call consciousness
is, in every respect, an instance of some form of short-term memory. Now, it's, you
know, there's different kinds of memory, and this is integrated in different ways, but you are, I mean, there's just a transmission time for everything. So it's, you can't be aware of
a perception or a sensation the instant it hits your brain, because its hitting your brain isn't
one discrete moment. And so there's a whole time of integration. So the present moment is this layered,
subjectively speaking,
it's this layering of memories,
even when you are distinguishing the present
from what you classically call a memory.
And so it's not, it is a genuine mystery
why consciousness would be necessary,
or what couldn't a machine as complex as a human brain do
but for the emergence of this subjective sense,
this inner dimension of experience.
I don't even know what the solution would look like
and whether it would be solved by biologists or by philosophers
or by computer
scientists.
Well, I'm just worried that that is, you've just articulated what philosophers call the
board from unconsciousness.
If you'd like to continue listening to this conversation, you'll need to subscribe at
SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content.