Making Sense with Sam Harris - #84 — Landscapes of Mind
Episode Date: July 1, 2017
Sam Harris speaks with Kevin Kelly about why it's so hard to predict future technology, the nature of intelligence, the "singularity," artificial consciousness, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely through
the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Today I'm speaking with Kevin Kelly.
Kevin helped launch Wired Magazine and was its executive editor for its first seven years.
So he knows a thing or two about digital media.
And he's written for the New York Times, The Economist, Science, Time Magazine,
The Wall Street Journal,
and many other publications. His previous books include Out of Control, New Rules for the New Economy, Cool Tools, and What Technology Wants. And his most recent book is The Inevitable:
Understanding the 12 Technological Forces That Will Shape Our Future.
And Kevin and I focused on this book and then spent much of the conversation talking about AI,
the safety concerns around it, the nature of intelligence, the concept of the singularity,
the prospect of artificial consciousness, and the ethical
implications of that. And it was great. We don't agree about everything, but I really enjoyed the
conversation. And I hope you enjoy it as much as I did. And now I bring you Kevin Kelly.
I am here with Kevin Kelly.
Kevin, thanks for coming on the podcast.
Oh man, I'm enjoying this right now.
Listen, so many people have asked for you and obviously I've known you and about you for many years.
I'll talk about how we first met at some point.
You're so on top of recent trends
that are subsuming everyone's lives
that it's just great to get a chance to talk to you. Well, thanks for having me. So before we
jump into all these common topics of interest, how would you describe what you do? I package
ideas, and they're often visual packages, but I like to take ideas, not necessarily my ideas,
but other people's ideas, and present them in some way. And that kind of is what I did with
magazines, beginning with the Whole Earth Review, formerly called CoEvolution Quarterly,
the Whole Earth Catalogs, Wired, websites like Cool Tools, and my books.
So you've written these two recent books on technology, What Technology Wants, and your most recent one, The Inevitable.
How would you summarize the arguments you put forward in those books?
At one level, I'm actually trying to devise a proto-theory of technology.
So before Darwin's theory of evolution in biology,
there were a lot of naturalists, and they had these curiosity cabinets
where they would just collect biological specimens,
and there was just one weird creature after another. There was no
framework for understanding how they were related or how they came about. And in many ways,
technology is like that with us. We have this sort of parade of one invention after another,
and there's really no theory about how these different species of technology are related and how they come together.
So at one level, my books were trying to devise a rough theory of their origins.
And perhaps no surprise, cutting to the punchline, I see these as an extension and acceleration of the same forces
that are at work in natural evolution or cosmic evolution for that matter. And that if you look
at it in that way, this system of technology that I call the technium is in some ways the extension and acceleration of the self-organizing forces that are running through the cosmos.
So that's one thing that I'm trying to do.
And the second thing I'm trying to do is to say that there is a deterministic element in this, both in evolution and in technological systems. At a very high level,
a lot of what we're going to see and have seen is following a kind of natural progression,
and therefore is inevitable. And we as humans, individually and collectively, need to
embrace these things in order to be able to steer the many ways in which we do have control
and choice over the character of these things. So I would say, once you invented electrical wires
and switches and such, you'd have telephones. So the telephone was inevitable,
but the character of the telephone was not inevitable. You know, the iPhone was not inevitable.
And we have a lot of choices about those.
But the only way we make those choices is by embracing and using these things rather
than prohibiting them.
So now you start the book, The Inevitable, with some very amusing stories about how clueless
people were about the significance of the internet in particular.
I was vaguely aware of some of
these howlers, but you just wrap them all up in one paragraph, and it's amazing how blind people
were to what was coming. So you cite Time and Newsweek saying that more or less the internet
would amount to nothing. One network executive said it would be the CB radio of the 90s.
There was a Wired writer who bought the
domain name for McDonald's, McDonald's.com, and couldn't give it away to McDonald's because they
couldn't see why it would ever be valuable to them. Now, I don't recall being quite that clueless
myself, but I'm continually amazed at my inability to see what's coming here. And if you had told me five years
ago that I would soon be spending much of my time podcasting, I would have said, what's a podcast?
And if you had told me what a podcast was, essentially describing it as on-demand radio,
I would have been absolutely certain that there was no way I was going into radio.
Just it would not apply. I feel personally no ability to see
what's coming. Why do you think it is so difficult for most people to see into even the very near
future here? Yeah, it's a really good question. I don't think I have a good answer about why we
find it hard to imagine the future. But it is true that the more we know about that, in other words,
the experts in a certain field are often the ones who are most blinded by the changes. We did this
thing at Wired called reality check, and we would poll different experts and non-experts in
some future things, like whether they're going to use laser drilling in dentistry
or flying cars and stuff like that, and they would have dates.
And when these came around later on in the future,
it was the experts who were always underestimating,
who were, I guess, overestimating when things were
going to happen.
They were more pessimistic.
And it was sort of the people who, so the people who knew the most about things were
often the ones that were most wrong.
And so I think it's kind of like we know too much and we find it hard to release and believe things that seem impossible.
I think the things that will continue to surprise us in the next 30 years all have to do with work and collaboration at a scale that was just unthinkable before.
And that's where a lot of these surprises have been originating, is our ability to collaborate
in real time in scales that were just unthinkable before. And so they seemed
impossible. And for me, most of the surprises have had that connection.
Well, I know you and I want to talk about AI because I think that's an area where we'll find
some, I think, significant overlap, but also some disagreement. And I want to spend
most of our time talking about that. But I do want to touch on some of the issues you raise
in The Inevitable, because you divide the book into these 12 trends. I'm sure some of those
will come back around in our discussion of AI. But take an example of, I mean, let's say this podcast. I mean, one change
that a podcast represents over radio is that it's on demand. You can listen to it whenever you want
to listen to it. It's instantly accessible. In this case, it's free. So there's no barrier to
listening to it. People can slice it and dice it in any way they want. People remix it. People have taken snippets of it
and put it behind music, so it becomes the basis for other people's creativity. Ultimately, I would
imagine all the audio that exists and all the video that exists will be searchable in a way
that text is currently searchable, which is a real weakness now, but eventually you'll be able to search and get
exactly the snippet of audio you want. This change in just this one domain of how people
listen to a conversation, that captures some of these trends, right?
Exactly. So to your point, the flow, the verb of remixing, was
the big change in music,
which the music companies didn't really understand.
They thought that the free aspect of downloads of these files
was because people wanted to cheat them and get things for free.
But the chief value was that the "free" was as in freedom:
people could take these music files,
they could get less
than an album, they could remix them into singles, they could then manipulate them, make them
into playlists. They could do all these things that make it much more fluid and liquid and
manipulable and fungible, and that was the great attraction for people. The fact that it doesn't cost anything was sort of a bonus; that wasn't the main event.
And that all the other things that you mentioned about this aspect of podcasts, of getting
them on demand, the shift from owning things to having access to things if you have instant
access anytime, anywhere in the world. That's part of the shift, the shift away from
things that are static and monumental to things that are incomplete and always
in the process. The movement from centralized to decentralized is also made possible when you have things in real time.
When you're in a world like the Roman era, when there was very little information flow,
the best way to organize an army was to have someone give a command at the top, and everybody below would follow it, because the commander had the most information.
But in a world in which information flows liquidly and pervasively everywhere,
a decentralized system is much more powerful, because you can actually have the edges steer as well as the center,
and the center becomes less important.
And so all these things are feeding into it, and your example of the podcast is just a perfect example where all these trends
in general conspire to make this a new genre. And I would say in the future we would continue
to remix the elements inside a podcast, that we would, you know, have podcasts within VR, podcasts, as you said, that are searchable, and have AI remix portions of it,
or that we would, you know, begin to do all the things that we've done with text, and annotations and
footnoting would be brought to this as well. So if you just imagine what we've done with
podcasts and now multiply that by every other medium from GIFs to YouTube, we're entering into an era where we're going to have entirely brand new genres of art,
expression, and media. And we're just, again, at the beginning of this process.
What do you think about the new capacity to fake media? So now I think you must have seen this,
I think it was a TED Talk initially where I saw it, but it's been unveiled in various formats now where they can fake audio so well that, given the sample that we've just given them, someone could produce a fake conversation between us where we said all manner of reputation-destroying things.
And it wouldn't be us, but it would be, I think by current technology, undetectable as a fraud. And I think there are now video versions of this where you can get someone's mouth to move
in the appropriate way. So it looks like they're delivering the fake audio, although the facial
display is not totally convincing yet, but presumably it will be at some point.
What do you think about that? I've, in a hand-waving way, not really knowing what I'm talking about,
I've imagined there must be some blockchain-based way of insuring against that.
But where are we going with that?
So in, I don't know, 1984 or something,
I did a cover story for the Whole Earth Review, or CQ,
I think it was called at the time.
It was called the end of photography as evidence of anything, and we used a very expensive
Scitex machine. It was like a multi-million-dollar machine, which cost
tens of thousands of dollars an hour, to do basically what we would now call Photoshop. This is early Photoshop.
So National Geographic and Time and Life magazine had access to these things,
and they would do little retouching stuff.
But we decided to Photoshop flying saucers arriving in San Francisco.
And the point of this article was that, okay,
this was the beginning of the end of photography as evidence of anything.
And what I kind of concluded back then was that, well, there's two things.
One was that the primary evidence of believability was simply going to be the reputation of the source.
So for most people, you wouldn't be able to tell.
And we already have that with text, right? I mean, it's like words, you know, you could quote somebody, you can
put some words down and say Sam Harris says this, and it would look just like it was said. Yes, exactly.
So how would you know? Well, the only way you could know was basically you have to trust the
source, and the same thing was going to happen with photography.
And now it'll be with video and audio.
And so they're coming up to the place where text is,
which is basically you can only rely on the source.
The second thing we discovered from this was that,
and this also kind of applied to this question of like,
when you have AI and agents,
how would you be able to tell if they're human or not?
And the thing is that in most cases, like in a movie right now, you can't tell whether something has been CGI, whether it's a real actor or not. We've already left that behind.
But we don't care, in a certain sense. And when we call up on the phone and there's an agent there and we're trying to solve a service problem, in some ways we don't really care whether it's a human or not, if they're giving us good service.
But in the cases where we do care, there will always be ways to tell, and they may cost money. There are forensic ways to really decide whether this photograph has been doctored, whether CGI has actually been used to make a frame,
whether this audio file has been altered. There will always be some way if you really,
really care. But in most cases, we won't care. And we will just have to
rely on the reputation of the source. And so I think we're going to kind of get to the place
where text is already, which is the same thing. If someone's making it up, then you have no way
to tell by looking at the text. You have to go back to the source. But that doesn't address
the issue of fake news. And there, I think what we're going to see is a truth-signaling layer
added on, maybe somewhat using AI, but mostly to devise what I think is going to be kind of
like a probability index for a statement, made in a networked way. It'll involve Wikipedia and Snopes and places like that, you know, maybe other academics.
But it'll be like PageRank, meaning that you'll have a statement, you know, London is the capital of England.
And it'll say that statement has a 95% probability or 98% probability of being true.
And then other statements will have a 50% probability of being true, and others will have a 10% probability.
And that will come out of a networked analysis of these sites or these, you know, the Encyclopedia Britannica or whatever says so.
So these other sources have a high reliability because in the past they had been true.
And this network of corresponding sources, which are ranked themselves by other sources in terms of their reliability, will generate some index number to a statement.
And as the statements get more complex, that becomes a more difficult job to do. And that's where the AI could become involved in trying to detect the pattern out of
all these sources. And so you'll get a probability score of the statement's likely truthfulness.
That's kind of like a prediction market for epistemology.
Yes.
That's interesting.
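A minimal sketch of the kind of reliability-weighted scoring Kevin describes, in Python. The source names and reliability weights here are entirely hypothetical, hand-set stand-ins for the track record that, in the PageRank analogy, would itself be computed from how often each source agrees with other reliable sources.

# A toy "truth signaling layer": hypothetical sources carry reliability
# weights earned from their past track record, and a statement's score is
# the reliability-weighted share of sources that endorse it.

def statement_score(endorsements, reliabilities):
    # endorsements: source -> True / False / None (None = no opinion)
    # reliabilities: source -> weight in [0, 1]
    weighted_yes = sum(reliabilities[s] for s, v in endorsements.items() if v is True)
    weighted_all = sum(reliabilities[s] for s, v in endorsements.items() if v is not None)
    if weighted_all == 0:
        return 0.5  # no evidence either way
    return weighted_yes / weighted_all

# Hypothetical example: scoring "London is the capital of England."
reliabilities = {"encyclopedia": 0.98, "fact_checker": 0.95, "random_blog": 0.40}
endorsements = {"encyclopedia": True, "fact_checker": True, "random_blog": False}
print(round(statement_score(endorsements, reliabilities), 2))  # ~0.83

In the networked version Kevin imagines, the reliability weights would not be set by hand but would be derived from how often each source agrees with statements that other high-reliability sources endorse.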
So in light of what's happening
and the trends you discuss in The Inevitable,
if you had a child going to college next year,
what would you hope that he or she study
or ignore, in light of what opportunities will soon exist? One of the things I talk about in
the book is this idea that we're all going to be perpetual newbies, no matter whether we're 60
or 16 or six. We're feeling very good that we've mastered, you know, smartphones, and we know
laptops, the gestures and how things work, this kind of literacy. But, you know, in five years from now, there'll be a new platform, virtual reality, whatever
it might be.
And we'll have to learn another set of gestures and commands and logic.
And so the digital natives right now have a pass because they are dealing with technology
that was invented after they were born.
But eventually, they're going to have to learn new things, too.
And they're going to be in the same position as the old folks of having to learn these things.
They're going to be newbies again, too.
So we're all going to be perpetual newbies.
And I think really the only literacy or skill that should be taught in schools is so that when you graduate,
you have learned how to learn. So learning how to learn is the meta-skill that you want to have.
And really, I think, the only one that makes any difference, because whatever language you're
going to learn is not necessarily going to be the one that you are going to get paid for.
As for knowledge,
if you want an answer, you ask a machine. So I think this idea of learning how to learn is the real skill that you should graduate with. And for an extra bonus, for the ultimate
golden pass, if you can learn how you learn best yourself, if you can optimize your
own style of learning, that's the superpower that you want. That, I think, almost takes a lifetime
to get to. And some people like Tim Ferriss are much better at dissecting how they learn and
understanding how they can optimize their self-learning. But if you can get to that state where you really understand
how you personally learn best,
then you're golden.
And I think that's what we want to aim for
is that every person on the planet today
will learn how to learn
and will optimize how they learn best.
And that I think is what schools should really be aiming for.
I was going to say, our mutual friend Tim seems well-poised to take advantage of the future.
I'm just going to have to keep track of him.
Let's talk about AI.
I'll set this up by just how this podcast got initiated,
because though I long knew that I wanted you on the podcast, you recently
sent me an email after hearing my podcast on robot ethics with Kate Darling. And in that email,
you sketched ways where you think you and I disagree about the implications and safety
concerns of AI. You were also reacting to my TED Talk on the topic and also a panel discussion
that you saw where I was on stage with Max Tegmark and Elon Musk and Jaan Tallinn and other people who
were at this conference on AI at Asilomar earlier this year. And you wrote in the setup to this
email, and now I'm quoting you, there are at least five assumptions the super AI crowd
hold that I can't find any evidence to support. In contradistinction to this orthodoxy, I find
the following five heresies to have more evidence. One, intelligence is not a single dimension,
so, quote, smarter than humans is a meaningless concept. Two, humans do not have general purpose minds and neither will AIs. Three, emulation of
human thinking will be constrained by cost. Four, dimensions of intelligence are not infinite.
And five, intelligences are only one factor in progress. Now, I think these are all interesting
claims, and I think I agree with several of them, but most of them don't actually touch what concerns me about AI.
So I think we should talk about all of these claims because I think they get at interesting points.
But I think I should probably start by just summarizing what my main concern is about AI so we can, as we talk about your points, we can also just make sure we're hitting that.
And, you know, you, when you talk about AI
and when you talk about this one trend in your book,
perhaps the most relevant,
cognifying, you know, essentially putting intelligence
into everything that can be made intelligent,
you can sound very utopian
and I can sound very dystopian in how I talk about it,
but I actually think we overlap a fair amount.
I guess my main concern can be summarized under the heading of the alignment problem,
which is now kind of a phrase of jargon among those of us who are worried about AI gone wrong.
And there are really two concerns here with AI. And I think
there are concerns that are visited on any powerful technology. And the first is just
the obvious case of people using it intentionally in ways that cause great harm. So it's just kind
of the bad people problem. And that's obviously a real problem.
It's a problem that probably never goes away, but it's not the interesting problem here. I think that the interesting problem is the unintended consequences problem. So it's the situation where
even good people with the best of intentions can wind up committing great harms because the
technology is such that it won't reliably conform to the best intentions of good people.
So for powerful technology to be safe or to be operating within our risk tolerance,
it has to be the sort of thing that good people can reliably do good things with it
rather than accidentally end civilization or do something else that's terrible.
And for this to happen with AI, it's going to have to be aligned with our values. And so again,
this is often called the alignment problem. When you have autonomous systems working in ways,
increasingly powerful systems, and ultimately systems that are more powerful than any human being
and even any collection of human beings, you need to solve this alignment problem.
But at this point, people who haven't thought about this very much get confused,
or at least they wonder, you know, why on earth would an AI, however powerful,
fail to be aligned with our values? Because after all, we built these
things or we will build these things. And they imagine a kind of silly Terminator-style scenario
where just robot armies start attacking us because for some reason they have started to hate us and
want to kill us. And that really isn't the issue that even the most dystopian people are
thinking about. And it's not the issue I'm thinking about. It's not that our machines
will become spontaneously malevolent and want to kill us. The issue is that they can become
so competent at meeting their goals that, if their goals aren't perfectly aligned with our own,
then the unintended consequences could be so large as to be catastrophic. And there are cartoon versions
of this, as you know, which more clearly dissect the fear. I mean, they're as cartoonish as the
Terminator-style scenarios, but they're different. I mean, something like Nick Bostrom's
paperclip maximizer. To review, I think many people are familiar with this, but so Nick Bostrom imagines a machine
whose only goal is to maximize the number of paperclips in the universe, but it's a super
powerful, super competent, super intelligent machine. And given this goal, it could quickly
just decide that, you know, every accessible atom, including the atoms in your own body, is best suited to be turned into paperclips.
And, you know, obviously we wouldn't build precisely that machine.
But the point of that kind of thought experiment is to point out that these machines, even super intelligent machines, will not be like us.
They'll lack common sense, or they'll only have the common sense that we understand how to build into them. And so the bad things that they
might do might be very counterintuitive to us and therefore totally surprising. And just, you know,
kind of the final point I'll make to set this up, I think we're misled by the concept of intelligence.
Because when we talk about intelligence, we assume that it includes things like common sense.
In the space of this concept, we insert something fairly anthropomorphic and familiar to us.
But I think intelligence is more like competence or effectiveness, which is just an ability to meet goals in an environment or across a range of environments.
And given a certain specification of goals, even a superhumanly competent machine or system of machines might behave in ways that would strike us as completely absurd,
and yet we will not have closed the door to those absurdities, however dangerous,
if we don't anticipate them in advance or figure out some generic way to solve this alignment problem. So I think a good place to start is where we agree. The first thing I think we both agree on is that we have a very poor understanding of what our own intelligence is as humans, and that the notion
we have of IQ is a very misleading notion of intelligence in humans: that we can kind of rank intelligences on a relative scale, a single dimension. You know, and this
is taken from Nick Bostrom's own book, that, you know, you have a single dimension and
you have the intelligence of a mouse, say, or the IQ of a mouse, and then a rat's a little
bit more, and then a chimpanzee's a little bit more, and then you have kind of a really dumb human, an average human,
and then a super genius like Albert Einstein.
And then there's the super AI, which is kind of off the charts in terms of how much smarter
along this IQ it can be.
And that, I think, is a very, very misleading idea of what intelligence is.
Human intelligence is obviously a suite,
a symphony, a portfolio of dozens, 20 maybe, who knows how many different modes or nodes of thinking. There's perception, there's symbolic reasoning, there's deductive reasoning, inductive
reasoning, and emotional intelligence, spatial navigation, long-term memory, short-term memory.
There's many, many different nodes of thinking.
And, of course, that complex varies person by person.
And when we get into the animal kingdom, we have a different mixture of these, and most of them are maybe simpler complexes.
But in some cases, a particular node that we might have
may actually be higher and maybe superior in an animal
in terms of, I mean, if you've seen some of these...
The chimpanzee, yeah.
Chimpanzees, remembering the locations of numbers. It's like, oh my gosh, obviously they're smarter than us in that dimension.
We should just describe that so that people are aware of what, because they should find that video online.
What it is, is a chimpanzee has a screen and there's a series of numbers in sequence or numbers that appear in different positions on the screen very, very briefly.
It's like a checkerboard that suddenly illuminates with, let's say, 10 different digits, and you have to select all the digits in order, you know, you have to hit the right squares.
Then the numbers then disappear, and then you just have a blank checkerboard.
Right.
And you have to remember, you seize it for like a split second,
and you have to remember where they are, and you have to go back and hit the locations in order.
And no human can do this, but for some reason, chimps seem to be able to do this very easily.
So they have some kind of a short-term memory or a long-term memory,
I'm not sure which kind of memory, a spatial memory, that really would amaze us and we would find superhuman. And so I think we both agree that
the human intelligence is very complex. And my suggestion about thinking about AI is always to
use plural, to try to talk about AIs, because I think as we make these synthetic
types of minds, we're going to make thousands of different species of them with different
combinations of these primitive, these kind of primitive modes of thinking. And that what we
think of ourselves, our own minds, we think of kind of as a singular intelligence.
It's very much like the illusion of us having an I, of being a center.
There's an illusion that we have a kind of a very, very specific combination of elements in thinking that are not really general purpose at all.
It's for a very specific purpose: to survive on this planet and in this regime of biology.
When we compare our intelligence to the space of possible intelligences, we're going to see that we're not at the center of something universal,
but we're actually at the edge, just as we are in the actual galaxy, of the space of possible minds.
And what we're doing with AI is actually going to make a whole zoo of possible ways of thinking,
including inventing some ways of thinking that don't exist in biology at all today, just as we did with flying. So the
way we made artificial flying is we looked at natural flight, which was mostly birds and bees and
bats flapping wings. And we tried to artificially fly by flapping wings.
It just didn't work. The way we made artificial flying is we invented a type of flight that does
not exist in nature at all, which was a fixed wing and a propeller. And we're going to do
the same thing, inventing ways of thinking that cannot really occur in biological tissue, that will be a different way of thinking.
And we'll combine those into maybe many, many new complexes of types of thinking to do
and achieve different things. And there may be problems that are so difficult in science or business
that human-type thinking alone cannot reach, that we will have to work with a two-step process of
inventing a different kind of thinking that we can then together work to solve some of these
problems. So I think just like there's a kind of a misconception in thinking that humans are sort of on this ladder of evolution where we are superior to the animals that are below us.
In reality, the way evolution works is that it kind of radiates out from a common ancestor of 3.7 billion years ago.
We're all equally evolved.
And the proper way to think about it is like, are we superior to the starfish, to the giraffe?
They have all enjoyed the same amount of evolution as we have.
The proper way to kind of map this
is to map this in a possibility space
and say these creatures excel in this niche,
and those creatures excel in that niche, and they aren't really superior to us in that way.
It's even hard to determine whether they're more complicated than us or more complex.
So I think a better vision of AI is to have a possibility space of all the different possible ways you can think.
And some of these complexes will be greater than what humans are, but we can't have a
complex of intelligence that maximizes everything. That's just an engineering
principle: you cannot optimize everything at once.
You can always excel in another dimension by just specializing in that particular
node of thinking and thought.
And so this idea that we're going to make a super version of human intelligence that somehow exceeds us in every dimension,
I just don't see any evidence for that.
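To make the point concrete, here is a toy sketch in Python, with made-up capability dimensions and scores, of why "smarter than" stops being a single ordering once intelligence is treated as a vector of traits rather than one number: two minds can each beat the other on different dimensions, so neither simply dominates.

# Toy illustration: minds as score vectors over hypothetical dimensions of
# thinking. One mind "dominates" another only if it is at least as good on
# every dimension and strictly better on at least one (Pareto dominance).

DIMENSIONS = ["short_term_memory", "symbolic_reasoning", "spatial_navigation"]

def dominates(a, b):
    at_least_as_good = all(a[d] >= b[d] for d in DIMENSIONS)
    strictly_better = any(a[d] > b[d] for d in DIMENSIONS)
    return at_least_as_good and strictly_better

human = {"short_term_memory": 5, "symbolic_reasoning": 9, "spatial_navigation": 6}
chimp = {"short_term_memory": 9, "symbolic_reasoning": 3, "spatial_navigation": 7}

# Neither dominates the other, so neither is simply "smarter."
print(dominates(human, chimp), dominates(chimp, human))  # False False

Under this picture, an engineered mind could dominate on some dimensions without exceeding humans on all of them at once, which is roughly the claim being made here.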
Let me try to map what you just said onto the way I think about it, because I agree with most of what you said; I think it's the last bit I don't agree with. But I come to a different conclusion, or at least I have a very vivid concern that survives contact with all these points about the intelligence of other species. To ask, you know, what is the IQ of an octopus
doesn't make any sense. And it's fine to think about human intelligence not as a single factor,
but as a constellation of things that we care about. And our notion of intelligence could
be fairly elastic, that we could suddenly care about other things that we haven't cared about
very much, and we
would want to wrap that up in terms of assessing a person's intelligence. You mentioned emotional
intelligence, for instance. I think that's a discrete capacity that doesn't segregate very
reliably with something like mathematical intelligence, say. And it's fine to talk
about it. I think there are reasons why
you might want to test it separately from IQ. And I think the notion of general intelligence
as measured by IQ is more useful than many people let on. But I definitely take your point that
we're this constellation of cognitive capacities. So putting us on a spectrum with a chicken, as I did in my TED Talk:
you can issue certain caveats, which I didn't issue in that talk, but even with
those caveats it is still a valid comparison, which is that of the things we care about in
cognition, of the things that make us able to do the extraordinarily heavy lifting and unique things we do,
like building a global civilization and producing science and art and mathematics and music and everything else
that is making human life both beautiful and durable,
there are not that many different capacities that we need to enumerate in order to capture those abilities.
It may be 10. It's not 1,000.
And a chicken has very few of them.
Now, a chicken may be good at other things that we can't even imagine being good at,
but for the purposes of this conversation, we don't care about those things,
and those things are clearly not leading to chicken civilization and chicken science and the chicken version of the internet. So of the things we care about in cognition,
and again, I think the list is small, and it's possible that there are things on the list that we
really do care about that we haven't discovered yet. Take something like emotional intelligence.
Let's say that we roll back the clock, you know, 50 years or so, and there's very few people thinking about anything like emotional intelligence, and then
put us in the presence of, you know, very powerful artificial intelligent technology, and we don't
even think to build emotional intelligence into our systems. It's clearly possible that we could
leave out something that is important to us
just because we haven't conceptualized it. But of the things we know that are important, there's not
that many of them that lead us to be able to prove mathematical theorems or invent scientific
hypotheses or propose experiments. And then if you add things like even emotional intelligence, the
ability to detect the emotions of other people in their tone of voice and in their facial
expression, say.
These are fairly discrete skills, and here's where I begin to edge into potentially dystopian
territory.
Once the ground is conquered in artificial systems, it never becomes unconquered.
Really, the preeminent example here is something like chess, right? So for the longest time,
chess-playing computers were not as good as the best people. And then suddenly they were more or
less as good as the best people. And then more or less 15 minutes later, they were better than the
best people. And
now they will always be better than the best people. Where I think we're living in a
bit of a mirage now is where you have human-computer teams, you know, cyborg teams, much
celebrated by people like Garry Kasparov, who's been on the podcast talking about them, which are for
the moment better than the best computer. So having
the ape still in the system gives you some improvement over the best computer. But ultimately,
the ape will just be adding noise, or so I would predict. And once computers are better at chess
and better than any human-computer combination, that will always be true, but for the fact that we might merge with computers
and cease to be merely human.
And when you imagine that happening
to every other thing we care about
in the mode of cognition,
then you have to imagine building systems
that escape us in their capacities.
They could be highly alien in terms of what we have
left out in building them, right? So again, if we had forgotten to build in emotional intelligence
or we didn't understand emotional intelligence enough to build everything in that humans do,
we could find ourselves in the presence of, you know, say,
the most powerful autistic system, you know, the universe has ever devised, right? So we've left
something out and it's only kind of quasi-familiar to us as a mind, but, you know, godlike in its
capacities. I think it's just the fact that once the ground gets conquered in an artificial system, it stays conquered.
And, you know, the resource concerns that you mentioned at the end: by definition, if you build a Swiss Army knife, it's not going to be a great sword.
And it certainly isn't going to be a great airplane. But I think that doesn't actually describe what will happen here, because when you compare the resources
that a superhuman intelligence will have, especially if it's linked to the internet,
to a human brain or any collection of human brains, I don't know how
many orders of magnitude difference that is, in terms of the time frame of operation. I mean,
you're talking about systems operating a billion times faster
than a human brain, there's no reasonable
comparison to be made there.
And that's where I feel like the possibility
of something like
the singularity or something like an
intelligence explosion is there
and worth worrying about.
So, again, I'd like to go to where
we agree. So, do you
use the term?
If you'd like to continue listening to this conversation,
you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes
of the Making Sense podcast,
along with other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having on the Waking Up app.
The Making Sense podcast is ad-free and relies entirely on listener support,
and you can subscribe now at samharris.org.