Making Sense with Sam Harris - #153 — Possible Minds
Episode Date: April 15, 2019. Sam Harris introduces John Brockman's new anthology, "Possible Minds: 25 Ways of Looking at AI," in conversation with three of its authors: George Dyson, Alison Gopnik, and Stuart Russell. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
Welcome to the Making Sense Podcast. This is Sam Harris.
Okay, a few things to announce here. I have an event in Los Angeles on July 11th.
If you're a supporter of the podcast, you should have already received an email.
This is actually the first event for the app. It's the
first Waking Up event. It is at the Wiltern on July 11th, and it is with a great Tibetan Lama
by the name of Mingyur Rinpoche. And Mingyur is a fascinating guy. He's the youngest son of the greatest Dzogchen master I ever studied with,
Tulku Urgyen Rinpoche. And I wrote about him in my book, Waking Up, so that name might be familiar
to some of you. I studied with him in Nepal about 30 years ago. And I've never met Mingyur,
and he's about, I don't know, seven years younger than me.
I was in my 20s when I was in Nepal, and he was a teenager, and he was on retreat for much of that time.
He did his first three-year retreat when he was, I think, 13, and he was always described as the superstar of the family.
I studied with two of his brothers, Chökyi Nyima Rinpoche and Tsoknyi Rinpoche,
but I've never met Mingyur and I'm really looking forward to it.
He has a very interesting story because at some point he started teaching
and started running monasteries.
I believe he has three monasteries he's running, as well as a foundation.
But then in 2011, when he was 36, he just disappeared from his monastery in India and spent the next four and a half years wandering around India as a mendicant yogi,
living in caves and on the streets and encountering all kinds of hardships. I believe he got very sick
and almost died. Anyway, he's written a book about this titled In Love With The World,
which I haven't read yet, but I will obviously read it before our event. And we will discuss
the book and the nature of mind and the practice of meditation, and take your questions.
And again, that will be happening at the Wiltern in Los Angeles on July 11th. And you can find more
information on my website at samharris.org forward slash events. And tickets are selling quickly
there, so if you care about that event, I wouldn't wait,
and the audio will eventually be released on the podcast.
Okay, the Waking Up app. There have been a few changes. We've added Annaka's Meditations for
Children, which are great, and there are some meta meditations coming from me as well.
Also, we'll soon be giving you the ability to sit in groups, where you can organize a virtual
group with your friends or colleagues and sit together, either in silence or listening to a
guided meditation. And very soon there will be a web-based version of the course.
You can get more information about all that at wakingup.com.
So this podcast is the result of three interviews, and it is organized around a new book from my agent John Brockman, who edited it. And the book is titled Possible Minds, 25 Ways of Looking at AI.
And you may have heard me mention John on the podcast before. He's not just a book agent,
though between him and his wife, Katinka Matson, and their son, Max Brockman,
they have a near monopoly on scientific nonfiction. It's really quite impressive.
Many of the authors you know and admire, Steve Pinker, Richard Dawkins, Dan Dennett, and
really most other people in that vein you could name, and many who have been on this
podcast, are represented by them.
But John is also a great connector of people and ideas.
He seems to have met every interesting person in both the literary and art
worlds since around 1960. And he's run the website edge.org for many years, which released its annual
question for 20 years, and got many interesting people to write essays for that. And there have
been many books published on the basis of those essays. He's also put together some great meetings and small conferences,
so he's really facilitated dialogue to an unusual degree and at a very high level. And he's written
his own books, The Third Culture and By the Late John Brockman. But this new book is another one of his anthologies, and it's organized around a
modern response to Norbert Wiener's book, The Human Use of Human Beings. Wiener was a mathematical
prodigy and the father of cybernetics, and a contemporary of Alan Turing and John von Neumann
and Claude Shannon and many of the people who were doing foundational work on
computation. And Wiener's thoughts on artificial intelligence anticipate many of our modern
concerns. Now, I didn't wind up contributing to this book. I had to sit this one out, but
I will be speaking with three of the authors who did. The first is George Dyson. George is a historian of technology,
and he's the author of Darwin Among the Machines and Turing's Cathedral. My second interview is
with Alison Gopnik. Alison is a developmental psychologist at UC Berkeley. She's a leader in
the field of children's learning and development, and her books include The Philosophical Baby.
And finally, I'll be speaking with Stuart Russell, who's been on the podcast before.
Stuart is a professor of computer science and engineering at UC Berkeley,
and he's also the author of the most widely used textbook on AI, titled Artificial Intelligence,
A Modern Approach. This is a deep look at the current state and near and perhaps distant future
of AI. And now, without further delay, I bring you George Dyson.
I am here with George Dyson. George, thanks for coming on the podcast.
Thank you. Happy to be here.
So the occasion for this conversation is the publication of our friend and mutual agent's book, Possible Minds, 25 Ways of Looking at AI. And this was edited by the great John
Brockman. I am not in this book. I could not get my act together when John came calling, so unfortunately, I'm not in this very beautiful and erudite book.
Previously, you wrote Turing's Cathedral, so you've been thinking about computation for quite
some time. How do you summarize your intellectual history and what you focused on?
Well, my interest goes back much farther than that.
Turing's Cathedral is a recent book.
So 25 years ago, I was writing a book called Darwin Among the Machines at a time when there actually were no publishers publishing
any general literature about computers except Addison Wesley.
So they published it thanks to John.
The thing to remember about John, John and Katinka, it's a family business.
Katinka's father was a literary agent, and John's father, I think, was in the flower
merchant business, so they have this very great combination of flowers have to be sold
the same day, and books have to last forever.
It sort of works really well together.
Yeah. And your background is, you also have a family background that's relevant here because your father is Freeman Dyson, who many people will be aware is a famous physicist.
He got inducted into the Manhattan Project right at the beginning as well, right?
He was at the Institute for Advanced Study.
Correct my sequencing here.
First of all, the important thing in my background is not so much my father, but my mother.
My mother was a mathematical logician.
She worked very closely with Kurt Gödel and knew Alan Turing's work in logic very well.
And that's where the world of computers came out of that.
My father, they both came to America at the same time in 1948.
So the Manhattan Project was long over.
My father had nothing to do with it.
Oh, okay.
He was working for the conventional bombing campaign for the Royal Air Force during the war, but
not the Manhattan Project.
So your mother, so you have deep roots in the related physics and logic and mathematics
of information, which has given us this now century of, or near century of computation
and has transformed everything.
And it's a fascinating intellectual history because the history of computing is intimately
connected with the history of war, specifically code breaking and bomb design.
And you did cover this in Turing's Cathedral.
You're often described as a historian of technology. Is that correct?
Does that label fit well with you? That's true, yes. I mean, more a historian of people,
of the people who build the technologies, but somehow the label is historian of technology.
I'm not a historian of science. That's also, I don't know why that's always, you know,
it's just sort of a pigeonhole they put you into.
So maybe we can walk through this topic by talking about some of the people.
There are some fascinating characters here, and the nominal inspiration for this conversation, for John's book, was his discovery or rediscovery of Norbert Wiener's book, The Human Use of Human Beings. But there were two, there were different paths through the history of thinking about information and
computation and the prospect of building intelligent machines. And Wiener represented
one of them, but there was another branch that became more influential, which was due to Alan Turing and John von Neumann.
Maybe, I guess, who should we start with?
Probably Alan Turing at the outset here.
How do you think of Alan Turing's contribution to the advent of the computer?
Well, it was very profound.
Norbert Wiener was working, you know, in a similar way at almost the same
time. So they all sort of came out of this together. Their sort of philosophical grandfather
was Leibniz, the German computer scientist and philosopher. So they all sort of were
disciples of Leibniz and then executed that in different ways.
Von Neumann and Wiener worked quite closely together at one time.
Turing and Wiener never really did work together, but they were very aware of each other's work.
The young Alan Turing, which also people forget, he came to America in 1936.
So he was actually in New Jersey when his great paper on computation was published.
So he was there in the same building with von Neumann.
Von Neumann saw he was a bright kid and offered him a job, which he didn't take.
He preferred to go back to England.
Yeah, so that's, I don't know how to think about that. So just bring your father into the picture here and perhaps your mother if she knew all these guys as well. Did they know von Neumann and Turing and Claude Shannon and Wiener? What of these figures do you have some family lore around?
Yes and no. They knew, you know,
they both knew Johnny von Neumann
quite well because he was sort of
in circulation.
My father had met Norbert Wiener, but
he never worked with him, didn't really know
him. And
neither of them actually
met Alan Turing, but of course my father
came from Cambridge where Turing had been
sort of a fixture.
My father said he read Turing's paper when it came out
and he thought, like many people, he thought this was sort of the least likely,
you know, this was interesting logic, but it would have no great effect on the real world.
I think my mother was probably maybe a little more prescient that, you know,
logic really would change the world.
Von Neumann is perhaps the most colorful character here.
There seems to be an absolute convergence of opinion that regardless of the fact that
he may not have made the greatest contributions in the history of science, he seemed to have just
bowled everyone over and given a lasting impression that he was the smartest person they had ever
met.
Does that ring true in the family as well, or have estimations of von Neumann's intelligence
been exaggerated?
No, I don't think that's exaggerated at all. I mean, he was impressively sharp and smart, extremely good memory, phenomenal calculation skills, sort of everything. Plus, he had no shyness about just asking for money.
That was sort of in some ways almost his most important contribution,
was he was the guy who could get the money to do these things that other people simply dreamed of.
But he got them done, and he hired the right people.
He's sort of like the orchestra conductor who'd get the best violin player and put them all together.
Yeah, and these stories are,
I think I've referenced them occasionally on the podcast,
but it's astounding to just read this record
because you have really the greatest physicists
and mathematicians of the time
all gossiping essentially about this one figure who,
certainly Edward Teller was of this opinion. And I think there's a quote from him somewhere,
which says that, you know, if we ever evolve into a master race of super intelligent humans,
we'll recognize that von Neumann was the prefiguring example.
Like this is how we will appear when we are fundamentally different from what we are now.
And Wigner and other physicists seem to concur. The stories about him are these two measures of
intelligence, both memory and processing speed, you grab both of those knobs
and turn them up to 11, and that just seems to be the impression you make on everyone,
that you're just a different sort of mind.
Yeah, it's sort of, in other ways, it's a great tragedy, because he was doing really
good work in, you know, pure mathematics and logic and game theory, quantum mechanics, and those
kinds of things, and then got completely distracted by the weapons and the computers.
Never really got back to any real science, and then died young, like Alan Turing, the
very same thing.
So we sort of lost these two brilliant minds who not only died young, but sort of professionally died very early because they got sucked into the war, never came back.
Yeah, and Wiener, I think it was '47, published a piece in The Atlantic, more or less vowing never to let his intellectual property have any point of contact with military efforts.
And so at the time, it was all very fraught, seeing that physics and mathematics was the engine of destruction, however ethically purposed.
You know, obviously there's a place to stand where the Manhattan Project looks like a very good thing, you know, that we won the race to fission
before the Nazis could get there. But it's an ethically complicated time, certainly.
Yes. And that's where, you know, Norbert Wiener worked very intensively and effectively for the military in both World War I and World War II. He was at the proving ground in World War I, and in World War II he worked on anti-aircraft defense.
And what people forget was that it was pretty far along at Los Alamos when we knew,
when we learned that the Germans were not actually building nuclear weapons.
And at that point, people like Norbert Wiener wanted nothing more to do with it.
And particularly, Norbert Wiener wanted nothing to do with the hydrogen bomb.
There was no military justification for a hydrogen bomb.
The only use of those weapons still today, it's against, you know, it's genocide against civilians.
They have no military use.
Do you recall the history on the German side?
I know there is a story about Heisenberg's involvement in the German bomb effort, but I can't remember whether rumors of his having intentionally slowed it are, in fact, true.
Well, that's a whole other subject. I'll stay away from that, and I'm not the expert on it, but what little I do know is that it became known at Los Alamos later in the project that there really was no German threat, and yet the decision was made to keep working on it.
There were a few people,
now there's one,
I don't remember who it was,
one or two physicists actually quit work
when they learned that the German program
was not a real threat,
but most people chose to keep working on it.
That was a very moral decision.
Yeah, but how do you view it? Do you view it as a straightforward
good one way or the other, or how would you have navigated that?
Extremely complicated, very, very complex.
I mean, of the people you were talking about, the Martians,
the sort of extraterrestrial Hungarians, they all
kept working on the weapons except Leo Szilard,
who actually, he was at Chicago.
He'd been sort of excommunicated from Los Alamos.
Groves wanted to have him put in jail.
And he circulated a petition.
I think it was signed by 67 physicists from Chicago to not use the weapon against the
civilians of Japan, to at least give a demonstration against an unpopulated target.
And that petition never even reached the president. It was sort of embargoed.
I've never understood why a demonstration wasn't a more obvious option.
Was the fear that it wouldn't work?
Yes, because they didn't know, and they had only very few weapons at that time. They had two or three.
There were a lot, but that's again a story that's still
to be figured out, and I think the people like von Neumann carried a lot of that
to the grave with them. But Edward Teller's answer
to the Szilard petition was, you know, I'd love to sign your petition, but I think his exact words were,
the things we are working on are so terrible that no amount of fiddling with politics will save our souls.
That's pretty much an exact quote.
Yeah, so I think Teller was, first Teller was another one of these Hungarian mutants along with von Neumann, and the two of them really inspired the continued progress past a fission weapon and on to a fusion one. And the computer was an absolutely necessary condition of that progress.
So the story of the birth of the computer is largely,
or at least the growth of our power in building computers,
is largely the story of the imperative that we felt to build the H-bomb.
Right. And what's weird is that we're sort of stuck with it. Like, you know, for 60 years,
we've been stuck with this computational architecture
that was developed for this very particular problem
to do numerical hydrodynamics
to solve this hydrogen bomb question,
to know that the question was,
would the Russians,
they knew the Russians were working on it
because von Neumann had
worked intimately with Klaus Fuchs, who turned out to be a Russian spy. So they knew the
Russians sort of knew everything they did. But the question was, was it possible? And
you needed computers to figure that out. And they got the computer working. And then, you
know, now, 67 years later, our computers are still exact copies of that particular machine they built to do that job.
It's a very, none of those people would, I think they would find it incomprehensible if they came back today and saw that, you know, we hadn't really made any architectural improvements.
Is that not acknowledged at all in computer circles, or is it acknowledged that, having the von Neumann architecture,
as I think it is still called, we got stuck in this legacy paradigm, which is by no means
necessarily the best for building computers?
Yeah, no, they knew it wasn't.
I mean, already, even by the time Alan Turing came to Princeton, he was working on completely different kinds of computation.
He was already sort of bored with the Turing machine.
He was interested in much more interesting sort of non-deterministic machines.
And the same with von Neumann.
Long before that project was finished, he was thinking about other things.
What's interesting about von Neumann is he only has one patent, and the one patent he took out was for a completely non-von Neumann
computer that IBM bought from him for $50,000. That's another strange story that hasn't quite,
I think, been figured out.
Presumably, that was when $50,000 really meant something.
It was an enormous amount of money, just a huge amount of money.
So, yeah, so they all wanted to build different kinds of computers.
And if they had lived, I think they would have.
In your contribution to this book, you talk about the prospect of analog versus digital computing.
Make that intelligible to the non-computer scientist?
Yes. So there are really two very different kinds of computers. It goes back to Turing
in sort of a mathematical sense. There are continuous functions that vary continuously,
which is sort of how we perceive time or the frequency
of sound or those sorts of things.
And then there are discrete functions, the sort of ones and zeros and bits that took
over the world.
And Alan Turing gave this very brilliant proof of what you could do with a purely digital
machine.
But both Alan Turing and von Neumann were, almost, you know, towards the ends of their lives, obsessed with the fact that nature doesn't do this. In our genetic systems we use digital coding, because digital coding, as Shannon showed us, is so good at error correction. But, you know, continuous functions and analog computing are better for control. All
control systems in nature, all nervous systems, the human brain, the brain of a fruit fly,
the brain of a mouse, those are all analog computers, not digital. There's no digital code
in the brain. And von Neumann wrote a whole book about that that people have misunderstood
I guess you could say that whether or not a neuron fires is a digital signal, but then the analog component is downstream of that, just the different synaptic weights and receptors. There's no code with a logical meaning.
The complexity is not
in the code, it's in the topology
and the connections
of the network.
Everybody knew that.
You take apart a brain,
you don't find any
sort of digital code.
There's no,
I mean,
now we're sort of obsessed
with this idea of algorithms,
which is what
Alan Turing gave us,
but there are no algorithms
in a nervous system
or a brain.
That's a much, much, much, much higher level function that comes later.
Well, so you introduced another personality here and a concept.
So let's just do a potted bio on Claude Shannon and this notion that digitizing information was somehow of value with respect to error correction.
Yes.
I mean, Claude Shannon's great contribution was sort of modern information theory, and
you can make a very good case he actually sort of took those ideas from Norbert Wiener,
who was explaining them to him during the war.
But it was Shannon who published the great manifesto on that, proving that
you can communicate with reliable accuracy given any arbitrary amount of noise by using
digital coding. And that none of our computers would work without that, the fact that basically
your computer is a communication device and has to communicate these hugely complicated states from one fraction of a microsecond to the next billions of times a second.
And the fact that we do that perfectly is due to Shannon's theory and his model of how can you do that in an accurate way.
Is there a way to make that intuitively understandable, why that would be so? I mean, what I picture is like cogs in a gear
where it's like you're either all in one slot or you're all out of it. And so any looseness of fit
keeps reverting back to you fall back into the well of the gear or you slip out of it. Whereas
something that's truly continuous, that is to say, analog, admits of errors that are
undetectable because you're just, you're kind of sliding off a more continuous, smoother
surface.
Do you have a better?
Yeah, that's a good, that's a very good way to explain it.
Now it has this fatal flaw that you sort of, there's always a price for everything, and so you can get this perfect digital accuracy
where you can make sure that every bit, billions of bits, and every bit is in the right place,
your software will work.
But the fatal flaw is that if for some reason a bit isn't in the right place,
then the whole machine grinds to a halt, whereas the analog machine will keep going.
It's much more robust against failure.
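A minimal sketch of the point being traded here, with made-up numbers rather than anything from the conversation: a bit sent redundantly and snapped back to 0 or 1 survives the noise exactly, the way a cog drops back into its slot, while an analog level simply drifts by an amount the receiver can't detect.

```python
# A toy illustration (not from the conversation) of Shannon-style digital
# error correction versus an analog signal. The noise level, threshold, and
# repetition count are arbitrary choices for the sketch.
import random

random.seed(0)
NOISE = 0.6  # each transmission picks up noise in [-0.6, 0.6]

def transmit(level):
    """Send one analog level through the noisy channel."""
    return level + random.uniform(-NOISE, NOISE)

# Analog: the received value is simply wrong by an undetectable amount.
analog_sent = 0.6
analog_received = transmit(analog_sent)

# Digital: send the bit three times, snap each copy back to 0 or 1
# (the cog falling into its slot), and take a majority vote.
bit_sent = 1
copies = [1 if transmit(bit_sent) > 0.5 else 0 for _ in range(3)]
bit_received = 1 if sum(copies) >= 2 else 0

print(f"analog:  sent {analog_sent}, got {analog_received:.3f}")
print(f"digital: sent {bit_sent}, got {bit_received}")  # recovered exactly
```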
So are you in touch with people who are pursuing this other line of building intelligent machines
now? What does analog computation look like circa 2019?
Well, it's coming at us in two directions. There's bottom-up and there's
top-down.
The bottom-up is actually extremely
interesting. I'm professionally
not a computer scientist.
I'm a historian,
so I look at the past, but occasionally I get dragged in. There was a meeting a couple of years ago that was actually held at Intel. You'll have a meeting like that, and they like the voice of a historian there, so I get to go.
And this was an entire meeting of people working on building analog chips from the bottom up,
using the same technology we use to build digital computers,
but to build completely different kinds of chips
that actually do analog processing.
And that's extremely exciting.
I think it's going to change the world
the same way the microprocessor changed the world.
We're sort of at the stage where,
like we were when we had the first 4-bit calculator
you could buy, and then suddenly, you know,
somebody figured out how to play a game with it.
The whole thing happened.
So that's from the bottom up.
Some of these chips are going to do very interesting
things like voice recognition, smell, things like that. Of course, the big driver, you
know, sort of killer app is drones, which is sort of the equivalent of the hydrogen
bomb. That's what's driving this stuff. And self-driving cars and cell phones. And then
from the top down is a whole other thing. That's the part where I think we're sort of missing something,
that if you look at the sort of Internet as a whole,
or the whole computational ecosystem,
particularly on the commercial side,
enormous amount of the interesting computing we're doing now
is back to analog computing,
where we're computing with continuous functions.
It's pulse frequency coded.
Something like Facebook or YouTube doesn't care.
The file that somebody clicks on, they don't care what the code is.
They just sort of care the meaning is in the frequency that it's connected to,
very much the same way a brain or a nervous system works.
So if you look at these large companies, Facebook or Google or something,
actually they're large analog
computers. The digital is not
replaced, but another
layer is growing on top of it. The same
way that after World War II
we had all these analog vacuum tubes
and the oddballs like Alan
Turing and von Neumann
and even Norbert Wiener figured out how to use
the analog components
to build digital computers.
And that was the digital revolution.
But now we're sort of right in the midst of another revolution where we are taking all
this digital hardware and using it to build analog systems.
But somehow people don't want to talk about that.
Analog is still sort of seen as this archaic thing, and I believe differently.
In what sense is an analog system
supervening on the digital infrastructure?
Are there other examples that can make it more vivid for people?
Yes, I mean, analog is much better.
Like, nature uses analog for control systems.
So you could take an example like,
you know, an obvious one would be Google Maps with live traffic.
So you have all these cars driving around and people have their digital cell phone in
the car.
And you sort of have this deal with Google where Google will tell you what the traffic
is doing and the optimum path if you tell Google where you are and how fast you're moving.
And that becomes an analog computer, sort of an analog system where there is no digital
model of all the traffic in San Francisco.
The actual system is its own model.
And that's sort of von Neumann's definition of an organism or a complex system, that it constitutes its own simplest behavioral description. Trying to formally describe what's going on makes it more complicated, not less. There's no way to simplify that whole system except the system itself.
Hmm.
So, you know, Facebook's very much the same way. You could build a digital model, maybe, of, you know, social life in a high school, but if you try to do social life in anything large, it just collapses under its own complexity. So you just give everybody a copy of Facebook, which is a reasonably simple piece of code that lives on their mobile device, and suddenly you have a full-scale model of the actual thing itself. So the social graph is the social graph, and that's a huge transition.
What's sort of, I think, at the root of some of the unease
people are feeling
about some of these particular companies
is that suddenly, you know,
it used to be Google was someplace
where you would go to look something up,
and now it really effectively
is becoming what people think.
And the big fear is that something like Facebook becomes what your friends are,
and that can be good or bad, but it's a real, you know,
just in an observational sense, it's something that's happening.
So what most concerns you about how technology is evolving at this point?
Well, I wear different hats there.
I mean, my other huge part,
most of my life was spent as a boat builder,
and I still am right here in the middle
of a kayak building workshop
and want nothing to do with computers.
That's really why I started studying them
and writing about them,
because I was not against them,
but quite suspicious.
The big thing about artificial intelligence, AI, it's not a threat, but the threat is not
that machines become more intelligent, but that people become less intelligent.
I spent a lot of time out in the wild with no computers at all,
lived in a treehouse for three years.
And you can lose that sort of natural intelligence, I think,
as a species reasonably quickly if we're not careful.
So that's what worries me.
I mean, obviously the machines are clearly taking over.
There's no, if you look at just the span of my life
from when von Neumann built that one computer
to where we are now, you know, it's almost biological growth of this technology.
So as a, you know, sort of as a member of living things, it's something to be concerned about.
Do you know David Krakauer from the Santa Fe Institute?
Yes, I don't know him, but I've met him and talked to him. Yeah, because he has a
rap on this very point where he distinguishes between, I think his phrasing is cognitively
competitive and cognitively cooperative technology. So there are forms of technology that
compete with our intelligence on some level, and insofar as we outsource our cognition to them, we get less and less competent.
And then there are other forms of technology where we actually become better even in the absence
of the technology. And so unfortunately, the only example of the latter that I can remember is the
one he used on the podcast was the abacus, which apparently if you learn how to use an abacus well,
you internalize it and you can do calculations you couldn't otherwise do in your
head in the absence even of the physical abacus. Whereas if you're relying on a pocket calculator
or your phone for arithmetic, or you're relying on GPS, you're eroding whatever ability you had in those areas.
So if we get our act together and all of this begins to move in a better direction or something like an optimal direction, what does that look like to you? If I told you 50 years from now we
arrived at something just far better than any of us were expecting with respect to this marriage of increasingly powerful
technology with some regime that conserves our deepest values. How do you imagine that looking?
Well, it's, yeah, it's certainly possible. And I guess that's where I would be slightly optimistic in that sort of my knowledge of human culture goes way back
and we grew up, you know, as a species,
I'm speaking of just all humanity,
most of our history was, you know,
was among animals who were bigger and more powerful than we were
and things that we completely didn't understand.
And we sort of made up our, not religions, but just views of the world that we couldn't
control everything.
We had to live with it.
And I think in a strange way, we're kind of returning to that childhood of the species in a way
that we're building these systems that we no longer have any control over, and we, in
fact, no longer even have any real understanding of.
So we're sort of, in some ways, back to that world that we were, you know, originally were
quite comfortable with, where we're at the power of things that we don't understand.
Sort of megafauna. And I think that could be a good thing, it could be a bad thing, I don't know, but it doesn't surprise me. And I'm just, personally, I'm interested. Like, if you take, you know, to get back to why we're here, which is John's book, almost everyone in that book is talking about domesticated artificial intelligence. I mean, they're talking about sort of commercial systems, products that you can buy, things like that. I'm just, personally, you know, I'm sort of a naturalist, and I'm interested in wild AI, what evolves completely in the wild, out of human control completely. And that's a very
interesting part of the whole sphere that doesn't get looked at that
much.
It's sort of the focus now is so much on marketable captive AI, self-driving cars,
things like that.
But it's the wild stuff that, to me, that's...
Like, I'm not afraid of bad AI, but I'm very afraid of good AI, the kind of AI...
But those of us who are worried
about the prospect of building what's now called AGI, artificial general intelligence,
that proves bad is just based on the assumption that there are many more ways to build AGI
that is not ultimately aligned with our interests than there are ways to build it perfectly aligned
with our interests, which is to say we could build the megafauna that tramples us perhaps
more easily than we could build the megafauna that lives side by side with us in a durably
benign way. You don't share that concern?
No, I think that's extremely foolish and misguided to think that we can, I mean, sort of by definition,
real AI you won't have any control over.
I mean, this sort of idea that, oh, we, there's some,
that's, again, why I think there's this enormous mistake
of thinking it's all based on algorithms.
I mean, real AI won't be based on algorithms.
And so there's this misconception that happened back to
when they built those first computers that they needed programmers
to run. So this view is that, well, the programmers are in control.
But if you have non-algorithmic computing,
there is no program.
By definition, you don't control it.
And to expect control is absolutely foolish.
But I think it's much better to be realistic and assume that you won't have control.
Well, so then why isn't your bias here one of the true counsel of fear,
which says we shouldn't be building machines more powerful than we are?
Well, we probably shouldn't, but we are. I mean, the reality, the fact is, we've done it. It's not something that we're thinking about, it's something we've been doing for a long time, and it's probably not going to stop. And then the point is to be realistic about it, and maybe optimistic that, you know, humans have not been the best at controlling the world, and something else could well be better.
But this illusion that we are going to program artificial intelligence is, I think, provably wrong.
I mean, Alan Turing would have proved that wrong.
That was how he got into the whole thing in the beginning, was proving this statement called the Entscheidungsproblem, whether there's any systematic way to look at a string of code and predict what it's going to do.
You can't.
It baffles me that
people don't sort of... Somehow we've been
so brainwashed by this.
The digital revolution was so
successful.
It's amazing how it has sort of clouded everyone's thinking.
If you talk to biologists, of course, they know that very well.
People who actually work with brains of frogs or mice,
they know it's not digital.
Why people think more intelligent things would be digital is, again, sort of baffling.
How did that sort of take over the world, that thought?
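The argument George is pointing to can be compressed into a few lines. What follows is only a sketch of Turing's diagonal construction, with a hypothetical would_halt oracle standing in for "a systematic way to look at a string of code and predict what it's going to do"; nothing like it appears in the conversation itself.

```python
# Sketch of Turing's diagonal argument. `would_halt` is a hypothetical oracle
# that inspects any program and input and predicts whether it would halt;
# the argument shows that no correct implementation of it can exist.
def would_halt(program, data):
    raise NotImplementedError("no such decision procedure exists")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if would_halt(program, program):
        while True:   # oracle said "halts", so loop forever instead
            pass
    return            # oracle said "runs forever", so halt immediately

# Asking about contrarian(contrarian) forces a contradiction either way:
# it halts exactly when the oracle says it doesn't. So no general method
# can look at a string of code and predict what it's going to do.
```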
Yeah.
So it does seem, though, that if you think the development of truly intelligent machines
is synonymous with machines that not only can we not control, but we, on some level,
can't form a reasonable expectation of what they will be inclined to do. There's the assumption
that there's some way to launch this process that is either provably benign in advance or... So I'm looking at the book now and
the person there who I think has thought the most about this is Stuart Russell. And he's just
trying to think of a way in which AI can be developed where its master value is to continually
understand in a deeper and more accurate way what we want, right?
So what we want can obviously change, and it can change in dialogue with this now super-intelligent machine, but its value system is in some way durably anchored to our own,
because its concern is to get our situation the way we want it.
Right. But all the most terrible things that have ever happened in the world happened because
somebody wanted them. I mean, there's no safety in that. I admire Stuart Russell,
but we disagree on this sort of provably good AI.
Yeah. But I guess at least what you're doing there is collapsing it down to one fear rather than the other.
I mean, the fear I'm talking about is that developing AGI in the first place
can't be provably benign, and we will find ourselves in relationship to something far
more powerful than ourselves that doesn't really care about our well-being in the end.
Right. And that's, again, sort of the world we used to live in, and I think we can make
ourselves reasonably comfortable there, but we're no longer the, you know... The sort of classic religious view was there are humans and there's God and there's nothing but angels in between.
That can change.
Nothing but angels and devils in between now.
Right.
Sort of the last thing Wiener published before, well, he actually published it after he died, but I mean, there's a line in there which I think gets it right, that the world of the future will be an ever more demanding struggle against the limitations of our own intelligence. It's not a comfortable hammock in which we can lie down to be waited upon by our robot slaves. And those are the two sort of paths that so many people want. The cars are going to drive us around and be our slaves. It's probably not going to happen that way.
On that dire note.
It's not a dire note. It could be a good thing. We've been the chief species for a long time, and it could be time for something else. But at least be realistic about it. Don't have this sort of childish view that everything's going to be obedient to us. That hasn't worked, and I think, you know, it did a lot of harm to the world that we had that view. But again, one of the signs of any real artificial intelligence is that it would immediately be
intelligent enough not to reveal its existence to us.
That would be the first smart thing it would do, would be not reveal itself.
So the fact that AI has not revealed itself, to me, is no...
That's zero evidence that it doesn't exist.
I would take it the other way.
If it existed, I would expect it not to reveal itself.
Unless it's so much more powerful than we are that it perceives no cost and reveals itself by merely steamrolling over us.
Well, there would be a cost.
I think it's sort of faith is better than proof.
So you can see where I'm going with that, but it's not necessarily malevolent.
It's just as likely to be benevolent as malevolent.
Okay, so I have a few bonus questions for you, George.
These can be short form.
If you had one piece of advice for someone who wants to succeed in your field,
and you can describe that field however you like, what would it be?
Okay, well, I'm a historian, as far as what I became, and a boat builder,
and so the advice in all those fields is just specialize.
I mean, find something and become obsessed with it.
I became obsessed with the kayaks that the Russians adopted when they came to Alaska,
and then I became obsessed with how computing really happened. And if you are obsessed with one little thing like that, you immediately become, you know, you can very quickly know more than anybody else, and that helps you to be successful.
What, if anything, do you wish you'd done differently in your 20s, 30s, or 40s?
That's, I mean, you can't replay that tape. I wish, well, I can be very clear about that. I wish in my 20s I had gone to the Aleutian Islands earlier, while more of the old-time kayak builders were still alive, and kind of interviewed and learned from them. And then very much the same in my 30s with all these projects, I mean, I did go find the surviving Project Orion people, the technicians and physicists, and interviewed them, but I should have done that earlier. And the same with computing. You know, in my 40s I could have interviewed a lot more people who really were there at that important time.
I sort of caught them, but almost too late, and I wish I had done that sooner.
Ten years from now, what do you think you'll regret doing too much of
or too little of at this point in your life?
Probably regret not getting out more up the coast again,
which is what I'm trying to do.
That's what I'm working very diligently at. But I keep getting distracted.
You've got to get off the podcast and get into the kayak.
Yeah, well, we could be doing this from Orca Lab.
They have a good internet connection.
I mean, that's the beautiful thing is that you can do this.
And the other thing I would say is, this is aside,
but I grew up since a young teenager in Canada where the country was united by radio.
I mean, in Canada, people didn't get newspapers, but everybody listened to one radio channel.
And so in a way, podcasts are, again, back to that past where we're all listening to the radio again.
And I think it's a great thing.
What negative experience, one you would not wish to repeat, has most profoundly changed you for the better?
I very nearly choked to death.
I mean, literally, that's the only time I've had a true near-death experience, seeing the tunnel of light and reliving my whole life.
And not only thinking about my daughter and other profound things, but thinking how stupid this was.
This guy who had kayaked to Alaska six times with no life jacket dies in a restaurant on Columbus Avenue in New York.
And John Brockman saved my life.
Ran out and came back with a New York City off-duty fireman
who literally saved my life.
Wow, I'm so glad I asked that question.
I had no idea of that story.
So again, learn the Heimlich maneuver.
Dr. Heimlich really did something great for the world.
Fascinating.
We may have touched this in a way,
but maybe there's another side to this.
What most worries you about our collective future?
Yeah, kind of what I said,
that we lose all these
skills and intelligences that we've built
up over such a long
period of time.
The ability to, you know,
survive in the wilderness and
understand animals
and respect them.
I think that's a very sad thing
that we're losing that, of course, and losing the
losing the wildlife itself.
If you could solve just one mystery as a scientist or historian or journalist, however you want to come at it, what would it be?
One mystery? Well, one of them would be the one we just talked about.
You know, cetacean communication, what's really going on with these whales communicating in the ocean.
That's something I think we could solve, but we're not looking at it in the right way.
If you could resurrect just one person from history and put them in our world today
and give them the benefit of a modern education, who would you bring back?
Well, most of the people I'm interested in in history sort of had extremely good educations.
You're talking about John von Neumann and Alan Turing, yeah, you're right.
Yeah, and Leibniz,
I mean, he was very well educated, yeah. Lately,
the character in my,
the project I've been working on lately
was kind of awful, but fascinating.
It was Peter the Great.
He was so
obsessed with science and things like that, so I think
to have brought him, you know, if he could come back, it might be a very dangerous thing.
But he sort of wanted to learn so much and was, again, preoccupied by all these terrible things and disasters that were going on at the time.
What are you doing on Peter the Great?
I've been writing this very strange book where it kind of starts with him and Leibniz.
They go to the hot springs together and they basically stop drinking alcohol for a week.
And Leibniz convinces him, wants him to support building digital computers, but he's not interested.
So the computer thing failed, but what Leibniz did convince him was to launch a voyage to America.
So that's how the Russians came to Alaska.
It became the Bering-Chirikov voyage.
But it all starts in this hot springs where they can't drink for a week, so they're just drinking mineral water and talking.
There is a great biography on Peter the Great, isn't there?
Is there one that you recommend?
Several.
I wouldn't know which one to recommend,
but again, that's why he's Peter the Great,
because he's been well studied.
His relationship with Leibniz fascinates me,
and that's not, you know,
there's just a lot there we don't know,
but it's kind of amazing how this
sort of obscure mathematician
becomes very close to this great leader of a huge part of the world.
Okay, last question, the Jurassic Park question.
If we are ever in a position to recreate the T-Rex, should we do it?
I would say yes, but this comes up as a much more real question with the woolly mammoth and these other
animals. Steller's sea cow,
there's another one we could maybe resurrect.
So yeah, I've had these arguments
with Stuart Brand
and George Church who were realistic
about could we do it.
So I would say
yes, don't expect it to work,
but
certainly worth trying.
What are their biases?
Do Stuart and George say we should or shouldn't do this?
Yeah, if you haven't talked to them, definitely that would be a great program to go to that
debate.
The question more is, if you can recreate the animal, does that recreate the species?
One of the things they're working on is, I think,
trying to build a park in Kamchatka or somewhere over there in Siberia, so that if you did
recreate the woolly mammoth, they would have an environment to go live in. So to me, that's
actually the payoff. The payoff to creating, recreating the woolly mammoth is that it would
force us to create a better environment.
Same as we did when the buffalo were coming back and we should bring the antelope back.
It's sort of the American cattle industry that's sort of wrecked the great central heart of America that could easily come back into the grasslands it once was.
Well, listen, George, it's been fascinating.
Thank you for your contribution to this book,
and thanks for coming on the podcast. Thank you. It's a very interesting book. There's short chapters, which makes it very easy to read. Yeah, it's a sign of the times, but a welcome one.
I am here with Alison Gopnik. Alison, thank you for coming on the podcast.
Glad to be here.
So the occasion of our conversation is the release of John Brockman's book,
Possible Minds, 25 Ways of Looking at AI. And I'm sure there'll be other topics we might want to touch, but as this is our jumping off point, first, give me your background. How would you
summarize your intellectual interests at this
point? Well, I began my career as a philosopher, and I'm still half appointed in philosophy at
Berkeley. But for 30 years or so, more than that, I guess now, I've been looking at young children's
development and learning to really answer some of these big philosophical questions. Specifically,
the thing that I'm most interested in is how do we come to have an accurate view
of the world around us when the information we get from the world seems to be so concrete
and particular and so detached from the reality of the world around us?
And that's a problem that people in philosophy of science raise.
It's a problem that people in machine learning raise.
And I think it's a problem that you can explore particularly well by looking at young kids who, after all, are the people who
we know in the universe who are best at solving that particular problem. And for the past 20 years
or so, I've been doing that in the context of thinking about computational models of how that
kind of learning about the world is possible for anybody, whether it's a scientist or an artificial computer or a computational system,
or, again, the best example we have, which is young children.
Right. Well, we'll get into the difference between how children learn
and how our machines do, or at least our current machines do.
But just a little more on your
background. So you did your PhD in philosophy or in psychology? I actually did my first degree,
my BA in honors philosophy. And then I went to Oxford, actually wanting to do both philosophy
and psychology. I worked with Jerome Bruner in psychology, and I spent a lot of time with the people in philosophy. And my joke about this is that after a year or two in Oxford,
I realized that there was one of two communities that I could spend the rest of my life with.
One community was of completely disinterested seekers after truth who wanted to find out about
the way the world really was more than anything else. And the other community was somewhat spoiled,
narcissistic,
egocentric creatures who needed to be taken care of by women all the time. And since the first
community was the babies and the second community was the philosophers, I thought it would be,
I'd be better off spending the rest of my life hanging out with the babies. That's a little
unfair to the philosophers, but it does make the general point, which is that I think a lot of
these big philosophical questions can be really well answered by looking at a very neglected group in some ways, namely
babies and young children.
Yeah, yeah.
So I did my PhD in the end in experimental psychology with Jerome Bruner.
And then I was in Toronto for a little while and then came to Berkeley, where, as I say,
I'm in the psychology department, but also affiliated in philosophy. And I've done a lot
of collaborations with people doing computational modeling at the same time. So I really think of
myself as being a cognitive scientist in the sense that cognitive science puts together
ideas about computation, ideas about psychology, and ideas about philosophy.
Yeah, well, if you're familiar with me at all,
you'll understand that I don't respect the boundaries between these disciplines
really at all. I just think that it's just interesting how someone comes
to a specific question. But whether you're doing cognitive science or neuroscience or psychology
or philosophy of mind, this can change from sentence to sentence, or it just
really depends on what building in a university campus you're standing in. Well, I think I've
tried, you know, I've tried and I think to some extent succeeded in actually doing that in my
entire career. So I publish in philosophy books and collaborate with philosophers. I had a
wonderful project where we had half philosophers
who were looking at causality, people like Clark Glymour and James Woodward and Chris Hitchcock,
and then half developmental psychologists and computational cognitive scientists. So people
like me, like Josh Tenenbaum at MIT, like Tom Griffiths. And that was an incredibly powerful and successful interaction. And the
truth is, I think one of my side interests is David Hume. And if you look at people like David
Hume or Berkeley or Descartes or the great philosophers of the past, they certainly
wouldn't have seen boundaries between the philosophy that they were doing and psychology
and empirical science. Let's start with the AI question and then get into children and other areas of common interest.
So perhaps you want to summarize how you contributed to this volume and your angle of attack on this really resurgent interest in artificial intelligence. There was this
period where it kind of all went to sleep. And I remember being blindsided by it, just thinking,
well, AI hadn't really panned out. And then all of a sudden, AI was everywhere.
How have you come to this question? Well, as I say, we've been doing work looking at
computational modeling and cognitive science for a long time. And I think that's right: for a long time, even though there was really interesting theoretical work going on about how we could represent the kinds of knowledge that we have as human beings computationally, it didn't translate very well into actual systems that could actually go out and do things more effectively. And then what happened, interestingly, in this new AI spring,
wasn't really that there was some great new, you know,
killer app, new idea about how the mind worked.
Instead, what happened was that some ideas that had been around for a long time,
since the 80s, basically, these ideas about neural networks,
and in some ways, you know, much older ideas about associative networks,
for example. Suddenly, when you had a whole lot of data the way you do with the internet,
and when you also had a whole lot of compute power with good old Moore's law running through its
cycles, those ideas became very practical so that you could actually take a giant data set of all
the images that had been put on the net, for example, and train that data set to discriminate between images. Or you could
take the giant data sets of all the translations of French and English on the net, and you could
use that to actually design a translation program. Or you could have something like AlphaZero that
could just play millions and millions and millions of
games of chess against itself.
And then you could use that data set to figure out how to play chess.
So the real change was not so much a kind of conceptual change about how we thought
about the mind.
It was this change in the capacities of computers.
And I think to the surprise of everybody, including the people
who were, you know, the people who had designed the systems in the first place,
it turned out that those ideas really could scale. And the big problem with computational
cognitive science has always been not so much finding good computational models for the
mind, although that's a problem, but finding ones that could do more than just solve toy problems, ones that could deal with the complexity of real world kinds of knowledge.
And I think it was surprising and kind of wonderful that these learning systems could
actually turn out to work at a broad scale. And the other thing that, of course, was interesting
was that not just in the history of AI, but in the history of philosophy, there's been this
constant kind of ping-ponging back and forth between two ways to solve this big problem of
knowledge, this big problem of how we can ever understand the world around us. And a way I like
to put it is, here's the problem. We seem to have all this abstract, very structured knowledge of
the world around us. We seem to know a lot about the world,
and we can use that knowledge to make predictions and change the world. And yet, it looks as if all
that reaches us from the world are these patterns of photons at the back of our eyes and disturbances
of air at our ears. And the question is always, how could you resolve that conundrum? And one way,
going back to Plato and Aristotle, has been to say, well, a whole lot of
it is built in in the first place. We don't actually have to learn that abstract structure.
It's just there. Maybe it evolved. Maybe if you're Plato, it was in a past life. And then the other
approach, going all the way back to Aristotle, has been to say, well, if you just have enough data,
if you just had enough stuff to learn, then you could develop this kind of abstract
knowledge of the world. And again, going back to Plato and Aristotle, we kind of ping-ponged back
and forth between those two approaches to trying to solve the problem. And sort of good old-fashioned
AI said, well, you know, famously Roger Schank said, if we just had, like,
a summer's worth of interns, we'd figure out all of our knowledge about the world.
We'll write it all down and we'll program it into a computer.
And that turned out not to be a very successful project.
And then the alternative, the kind of neural net idea was, oh, we just have enough data and we have some learning mechanisms.
Then the learning mechanisms will just be able to pull out the information from the data.
And that's kind of where we are now.
That's the latest iteration in this back and forth between having building in knowledge and learning the knowledge from the data.
Yeah, so what you've done there is you've sketched two different approaches to generating intelligence.
One, I guess, could be considered
top-down and the other bottom-up. And what AI has done of late, the great gains we see in
image recognition and many other things, is born of a process that really is aptly described as
bottom-up, where you take in an immense amount of data and do
what is essentially a statistical pattern recognition on it. And some of this can be
entirely blind and blackboxed such that the humans who have written these programs don't
even necessarily know how the machines are doing it. And yet given enough processing power and enough data,
we're now getting results that are human level and beyond for specific tasks. But of course,
you make this point in your piece that we know this is not how humans learn, that there is
some structure undoubtedly given to us by evolution that allows us to generalize on the basis of
comparatively small amounts of data. And so this makes what we do non-analogous to
what our machines are doing. And I guess, I mean, now both top-down and bottom-up approaches are being combined in AI. I guess one question I have for you is whether there's a prospect of blowing past this moment and building machines that we know are developing their intelligence in a way that is totally unlike the way we do it biologically, and yet it becomes successful on all fronts without our building any analogous process into them, and we just lose sight of the fact that it was ever interesting to compare the ways we do it. I mean, is there an effective way to do it in a brute-force way, let's say bottom-up, on every front that will matter to us? Or do you think that there are some problems for which it will be impossible to generate true artificial intelligence unless we have a deeper theory about how biological systems do it?
Well, I think we already can see that. So one of the interesting things is that there's this whole really striking revival of interest, among people in AI, in cognitive development, for example. And it's because we're starting to come up against the limits of this technique of doing a lot of statistical inference from big data sets. So there are lots of examples, for instance, even if you're
thinking about things like image recognition, where, you know, if you have something that
looks like a German shepherd, it'll recognize it as a German shepherd. But if you just have
something that to a human just looks like a shapeless mass but has the same superficial texture as a German Shepherd, it will also recognize it as a German Shepherd.
You know, if it sees a car that's suspended in the air in a flood, it will report that this is a car parked by the side of the road, and so forth.
And there's a zillion examples that are like that. In fact, there's a whole kind of area of these adversarial examples where you can show that
the machine is not actually making the right decision.
And it's because it's only paying attention to the sort of superficial features.
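The adversarial-example phenomenon mentioned here can be illustrated with the well-known fast gradient sign method: a tiny, carefully chosen perturbation that exploits exactly those superficial features a classifier relies on. This is a generic, hypothetical sketch, not the specific examples Gopnik has in mind.

```python
# Sketch of a fast-gradient-sign adversarial perturbation (illustrative only).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Return an image that looks unchanged to a human but is nudged in the
    direction that most increases the model's loss on the true label.
    Assumes `image` is a batched tensor the model accepts."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a tiny step in the direction that hurts the model most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```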
And in particular, the machines are very bad at making generalizations.
So even if you, you know, teach AlphaZero how to play chess, and then you said, all right, we're going to just change the rules a little bit. So now the rooks are going to be able to move diagonally, and you're going to want to capture the queen instead of the king. That kind of difference, which for a human who had learned chess would be really easy to adjust to, leads, for the more recent AI systems, to this problem they call catastrophic forgetting, which is having to relearn everything all over again when you get a new data set. So in principle, of course,
you know, there's no in principle reason why we couldn't have an intelligence that operated
completely differently from the way that, say, human children learn. But human children are a
demonstration case of the capacities of an intelligence,
presumably in some sense a computational intelligence, because that's the best way
we have of understanding how human brains work. But that's the best example we have of a system
that actually really works to be intelligent. And nothing that we have now is really even in
the ballpark of being able to do the same kinds of things that that system can do.
So in principle, it might be that we would figure out some totally different way of being
intelligent.
But at the moment, the best case we have is, you know, a four-year-old, a four-year-old
human child.
And we're very, very, very far from being able to simulate that.
You know, I think part of it is if people had just labeled the new techniques by saying
statistical inference from large data sets, instead of calling it artificial intelligence, I think we would be having a very different kind of conversation, even though statistical inference from large data sets turns out to be an incredibly powerful tool, more powerful than we might have thought.
Yeah, although it is striking how alarmingly powerful it is in narrow cases. I mean, you take something like AlphaZero,
what happened there was fairly startling because you have an algorithm that is fairly generic in that it can be taught to play both a game like Go and a game like chess and presumably other games
as well. And we have this history of developing better and better chess engines. And finally,
the human grandmaster ability was conquered. I forget when that was, 1997 or so, when Garry Kasparov lost, famously. And ever since, there's just been this incremental growth in the
power of these machines. And what AlphaZero did was create, again, a far more general algorithm,
which over the course of four hours taught itself to be better than any chess engine ever. So I mean,
you're taking the totality of human knowledge about this 2,000-year-old game, all of the
engineering talent that went into making this better and better over
decades, and here we found an algorithm which turned loose on the problem, beat every machine
and every person in human history, essentially. When you extrapolate that kind of process to
anything else we could conceivably care about, the recognition of emotion in a human face and voice, say.
Again, coming at this not in an AGI way,
where we've cracked the code of what intelligence is on some level
and built it from the bottom up,
but in a piecemeal way,
where we take the hundred most interesting cognitive problems
and find brute force methods to crack them. It's amazing to consider how quickly a solution can
appear. And once it does, and this is the point I've always made about so-called human level
intelligence, for any ability that we actually do find an AI
solution, even a narrow one in the case of chess or arithmetic, once that solution is found,
you're never talking about human-level intelligence. It's always superhuman. So the
moment we get anything like a system that can behave or learn like a four-year-old child, it won't be at human level
even for a second, because you'd have to degrade all of its other abilities that you could cobble
together to support it. You wouldn't make it worse than your iPhone as a calculator, right?
So it's already going to be superhuman. Yeah. But I mean, you know, I think there's a question, though, about exactly what different kinds
of problems require and how you solve those problems.
And I think an idea that is pretty clearly there in computer science and neuroscience
is that there's trade-offs between different kinds of properties of a solution that aren't
just because we happen to be biological humans, but are built into the very nature of trying to solve the problem. And in some ways,
the most striking thing about the progress of AI all through has been what people sometimes call
Moravec's paradox, which is that actually the things that really impress us as humans are the
things that we're not very good at, like doing arithmetic or playing chess. So I think of these sometimes as being
like the corridas of nerd machismo.
So the things that you have to just be,
have a particular kind of ability
that most people don't have
and then really train it up to do really well.
It turns out those things are things
that computers are good at doing.
On the other hand,
an example I give is my grandson, who's
three, plays something that we call Addy chess. His name is Atticus. So how do you play Addy chess?
Well, the way you play Addy chess is you take all the pieces off the board and then you throw them
in the wastebasket. And then you pick them up out of the wastebasket and you put them more or less
in the same place as they were in before. And then you take them all off and throw them in the
wastebasket again. And it turns out that Addy chess is actually a lot harder than Grandmaster chess, because Addy
chess means actually manipulating objects in the real physical world, so that you have to figure out:
wherever it is that that piece lands in the wastebasket, whatever orientation it's in, I can
pick it up and perform the motor actions that are necessary to get it on the board.
And that turns out to be incredibly difficult.
If you, you know, go and see any robotics lab, they have to put big walls around the
robots to keep them from destroying each other, even trying to do incredibly simple tasks
like picking up objects off of a tray.
And there's another thing about Addy chess that makes it
really different from what even very, very powerful artificial intelligence can do, which is,
as you said, what these new systems can do is you can take what people sometimes call an objective
function. You can say to them, look, this is what I want you to do. Given this set of input,
I want you to produce this set of output. Given this set of input, I want you to produce this set of output. Given this
set of moves, I want you to get the highest score, or I want you to win at this game.
And if you specify that, it turns out that these neural net learning mechanisms are actually
remarkably good at solving those problems without a lot of additional information,
except just here's a million examples of the input, and here's a million examples of the output.
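One way to make the objective-function framing concrete is with a toy example: once you specify the reward, a completely generic learner will grind toward it without being told anything else about the task. This is a hypothetical sketch using a simple tabular learner rather than the neural-network systems being described, and the gym-like `env` interface (reset, step, legal_actions) is an assumption.

```python
# Sketch: give a generic learner only an objective (the reward) and lots of trials.
import random
from collections import defaultdict

def q_learning(env, episodes=100_000, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # value of (state, action) pairs, learned from scratch
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.legal_actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)                # occasional exploration
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)        # the reward IS the objective
            best_next = 0.0 if done else max(
                (q[(next_state, a)] for a in env.legal_actions(next_state)), default=0.0)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```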
But of course, what human beings are doing all the time is going out and making their own objectives. They're going out and creating new objectives,
creating new ideas, creating new goals, goals that are not the goals that anyone has created before,
even if they might look kind of silly, like playing Addy chess. And in some way that
we really don't understand at all, there's some sense of a kind of progress in those goals that
we're capable of setting ourselves goals that were better than the goals that we had before.
But again, that's not even kind of in the ballpark. It's not like, oh, if we just made
the machines more powerful, then they would be able to do those things too. They would be able
to go out and physically manipulate the world and they would be able to set novel objectives.
That's kind of not even in the same category. And as I say, I think an interesting idea is that
there might really be trade-offs between some of the kinds of things that humans are really good at, like, for instance, taking very complicated, high-dimensional spaces of solutions, having to think of an incredibly
wide range of possibilities versus, say, being able to do something really quickly and efficiently
when it's well-specified. And I think there's reasons to think those things. You might think,
well, okay, if you could do the thing that's really well specified and just do that better and better, then you're going to
be able to solve the more complicated problem and the less well-defined problem. And I think there's
actually reasons to believe that that's not true, that there's real trade-offs between the kinds of
things you need to do to solve those two kinds of problems. Yeah, well, so the paradox you point to is interesting and is a key to how people's
expectations will be violated when automation begins to replace human labor to a much greater
degree than it has. Because people tend to expect that menial or lower-skilled jobs will be automated first, not famously high-cognition jobs. But, you know, as you point out, many of the things that we find it amazing that human beings can do are easier to automate than the things that any, or virtually any, human being can do. Which is to say, it's easier to play Grandmaster-level chess
than it is to walk across a room if you're a computer.
So your oncologist and your local mathematician
are likely to lose their jobs to AI before your plumber will, because plumbing is the harder task: moving physically into a space, manipulating objects, and making decisions across tasks of that sort.
So there's a lot that's counterintuitive here. Do you think that intelligence is substrate-independent, ultimately, that we could find some way of
instantiating human-like intelligence in a non-biological system? Is there something
potentially magical about having a computer made of meat, from your point of view, or not?
Well, I think the answer is that we don't really know, right? So again, we have a kind of sample of one, or a sample of a couple of examples, of systems that can really do this. And the ones that we know about are indeed biological. Now, I think it's rather striking, and I think maybe not appreciated enough, that this idea that really comes with Turing, the idea of thinking about a human mind as being a computational
system. That's just been an incredibly productive idea that's ended up enabling us to make really,
really good predictions about many, many, many things that human beings do. And we don't have
another idea that's as good at making predictions or providing explanations for intelligence as that idea. Now, again,
maybe it'll turn out that there is something that we're missing that is contributing something
important about biology. But I think at the moment, the kind of computational theory of the mind is
the best one that's on the table. It's the one that's been most successful just in empirical
scientific terms. So for instance, when we're looking at young children, if we say, are they doing something like Bayesian inference of
structured causal systems, that's a computational idea. We can actually say, okay, well, if they're
doing that, then if we give them this kind of problem, they should solve it this way. And sure
enough, it turns out that over and over again, that's what they do kind of independently of
knowing very much about what exactly is going on in their brains when they're doing that.
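To illustrate what "Bayesian inference over structured causal systems" means as a computational claim, here is a toy version of a blicket-detector-style problem: which of two candidate causal structures best explains what a child has seen? The hypotheses, priors, and noise level are invented for illustration and are not taken from Gopnik's actual experiments.

```python
# Toy Bayesian comparison of two causal hypotheses about a "detector" toy.
# All priors and likelihoods are made up for illustration.

# Hypotheses: H1 = "only block A activates the machine", H2 = "A or B activates it".
priors = {"H1": 0.5, "H2": 0.5}

# Observations: (blocks placed on the machine, did it light up?)
data = [({"A"}, True), ({"B"}, False), ({"A", "B"}, True)]

def likelihood(hypothesis: str, blocks: set, lit: bool) -> float:
    if hypothesis == "H1":
        predicted = "A" in blocks
    else:  # H2
        predicted = bool(blocks & {"A", "B"})
    # Assume a slightly noisy detector: matches the prediction 90% of the time.
    return 0.9 if predicted == lit else 0.1

posteriors = dict(priors)
for blocks, lit in data:
    for h in posteriors:
        posteriors[h] *= likelihood(h, blocks, lit)
total = sum(posteriors.values())
posteriors = {h: p / total for h, p in posteriors.items()}
print(posteriors)  # H1 ends up far more probable, because B alone failed to light it
```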
So again, it could be that this gap between the kinds of problems that we can solve computationally now and the
kinds of problems that every four-year-old is solving, it could be that that's got something
to do with having a biological substrate. But I don't think that's kind of the most likely
hypothesis given the information that
we have now. I think actually one of the interesting things is the problem is not so
much trying to figure out what our representations and rules are, what's going on in our head,
what the computations look like. The problem is what people in computer science call a search
problem. So the problem is really,
given all the possible things we could believe about the world, or given all the possible
solutions we could have to a problem, or given all the possible things that we could do in the world,
how is it that we end up converging? How is it that we end up picking ones that are, as it were,
the right ones rather than all the other ones that we could consider. And I think that's, at the moment, the really deep,
serious problem. So we kind of know how a computational system could be instantiated
in a brain. We have ideas about how neurons could be configured so they could do computations. We
kind of figured that part out, but the part about how we take all these possibilities and end up narrowing in on ones that are relatively
good, relatively true, relatively effective, I think that's the really next deep problem.
And looking at kids, looking at how kids solve that problem, because we know that they do solve it, could help us make progress.
Yeah, converging on good goals rather than ridiculous and detrimental ones, right? So, I mean, this is where all the cartoons of AI apocalypse come in. The idea that, you know, you're going to design a computer to remove the
possibility of spam and, you know, an easy way to do that is just kill all the people who would send
spam, right? So this is obviously, this is nobody's actual fear. It just points out that unless you build the common sense into these machines, they're not necessarily going to have it for free, however competent they get at solving specific problems.
But see, in a way it's even worse than that, because, you know, one thing you might say is, well, OK, we have some idea about what our everyday common sense is like, you know, we have these principles. So if we could just sort of specify
those things enough, so we could take our everyday ideas about the mind, for example,
or our everyday ideas about how the physical world works, and we could build those into
the computer, that would help. And it is true that the systems that we have now don't even have that.
But the interesting thing about people is that we can actually discover new kinds of common sense.
So we can actually go out in the world and say, you know, that thing that we thought about how
the physical world worked, it's not true. Actually, we can have action at a distance or even worse,
it turns out that actually space and time can be translated into one another, which is certainly not anything that anyone
intuitively thinks about how physics works.
Or for that matter, we can say, you know, that thing that we thought that we knew about
morality, it turns out that no, actually, when we think about it more carefully, something
like gay marriage is not something that should be perceived as being immoral, even though lots and lots of people for a long time had thought that that was true. So we have this ability to change the world, invent new environments, invent new niches, invent new worlds, and then figure out
how to thrive in those new worlds and look around the space of possibilities and create yet other
worlds and repeat. So even if we could build in sort of what in 2019 is everybody's understanding
about the world or build in the understandings about the world that we had in the Pleistocene,
that still wouldn't capture this ability that we have to search the space, to consider new possibilities, to think
about new things that aren't there. And, you know, let me give you some examples. For instance,
the sort of things that people are concerned about, I think legitimately concerned about
that AI could potentially do is, for example, you could give the kind of systems
that we have now examples of all of the verdicts of guilty and innocent that had gone on in
a court over a long period of time, and then give it a new example and say, OK,
how would this case be judged?
Will it be judged innocent or will it be judged guilty?
And the systems that we have now could probably do a pretty decent job of doing that.
And certainly, you know, it's easy to imagine
an extension of the systems we have now that could solve that kind of problem.
But of course, what we can do is to say, you know what, all that law, that's really not
right.
That isn't really capturing what we want.
That's not enabling people to thrive. Now
we should think of a different way of thinking about making these kinds of judgments. And that's
exactly the sort of thing that the current systems, again, it's not just like if you gave them more
data, they would be able to do that. They're not really even conceptually in the ballpark of being
able to do that. And that's probably a good thing. Now, I think it's important to say that,
and I think you're going to talk to Stuart Russell who will make this point,
these systems don't have to have anything like human level general intelligence to be
really dangerous. Electricity is really dangerous. I just was talking to someone
who made a really interesting point, which is about like, how did we invent circuit
breakers? It turns out the insurance companies actually started insisting that people have
circuit breakers on their electrical systems because houses were being set on fire. So,
you know, electricity, which we now think of as being this completely benign thing, we flip a switch and electricity comes out, and none of us is sitting there thinking, oh my God,
is our house about to burn down? It took a very long, complicated process of regulation and legislation
and work to get that to be other than a really, really dangerous thing. And I think that's
absolutely true, not about some theoretical artificial general intelligence, but about the
AI that we have now, that it's a really powerful force. And like any
powerful technology, we have to figure out ways of regulating it and having it make sense. But I
don't think that's like a giant difference in kind from all the issues we've had about dealing with
powerful technologies in the past. Yeah, yeah. Well, I guess this issue of creativity and growth in intuitions is something where, I guess, my intuitions divide from many people's
on this point because creativity is often held out as something that's fundamentally different,
that our machines can't do this and we routinely do this. But in my view, creativity isn't especially creative in the sense that it clearly proceeds
on the basis of rules we already have, and nothing is fundamentally new down to the studs.
Nothing that's meaningful is.
I mean, you can create something that essentially looks like noise that is new. But something that strikes us as insightful, meaningful, beautiful is functioning on the basis of properties we already recognize. I mean, take some mathematical intuition that was fairly hard won and took
thousands of years to emerge in someone's mind. But once you've got it, you sort of got it,
and it's really the same thing you're doing anyway, which is you take a triangle having
180 degrees on a flat plane, but if you curve the plane, it can have more or less than that. And, you know, it's strange that it took so long to see that, but the seeing of that
doesn't strike me as fundamentally more mysterious than the fact that we can understand anything
about triangles in the first place. I mean, I think I would just set that on its head in the
sense that, you know, again, this is one of the real advantages
of studying young children is that, you know, when you say, well, it's no more mysterious than
understanding triangles in the first place, people have actually tried to figure out how is it that
we can understand triangles? How is it that children can understand basic things about how
number works? Or in the work that I've done, how do children understand basic things about the
causal structure of the world, for example? And it turns out that even very basic things that we
take for granted, like understanding that you can believe something different from what I believe,
for example, it's actually very hard to see exactly how it is that children are taking
individual pieces and putting them together to come to realizations about, say,
how other people's minds work. And the problem is, if you're doing it backwards, once you know
what the answer is, then you can say, oh, I see, this is how you could put that together from
pieces that you have in the world or from data that you have. But of course, if you're sort of doing it prospectively, then there's an incredibly large number of different
other ways that you could have put together those pieces, or ways you could have interpreted the data. And the puzzle is, how is it that you came upon the
one that was both new and interesting and wasn't just random?
Now, again, I don't think there's any kind of, you know, giant reason why we couldn't
solve that problem.
But I do think that even something as simple as, you know, children figuring
out basic things about how the world around them and the people around them work
turns out to be a very, very tricky problem to solve. And one interesting thing, for example, that we found in our data, in our
research, is that in many respects, children are actually better at coming to unlikely or new
solutions than adults are. So again, this is this kind of trade-off idea where actually the more you
know, in some ways, the more difficult it is for you to conceive of something new.
We use a lot of Bayesian ideas when we're trying to characterize what the children are doing. And
one way you could think about it is that, you know, as your priors get to be more and more
peaked, as you know more and more, as you're more and more confident about certain kinds of
knowledge, and that's a good thing, right? That's what lets you go out into the world and build things and make the world a better place. It gets to be harder and harder for
you to conceive of new possibilities. And one idea that I've been arguing for is that you could
think about the very fact of childhood as being a solution to this kind of explore-exploit tension,
this tension between exploring, being able to explore lots of different possibilities, even if they're maybe not very good, and having to narrow
in on the possibilities that are really relevant to a particular problem.
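One way to picture the trade-off between peaked priors and exploration that Gopnik describes is as a temperature parameter on how hypotheses get sampled: a high temperature (child-like) spreads bets across unlikely options, while a low temperature (adult-like) concentrates on what already scores well. This is an illustrative sketch with made-up hypotheses and scores, not her group's actual model.

```python
# Sketch: sampling hypotheses at different "temperatures" (illustrative only).
import math
import random

def sample_hypothesis(scores: dict, temperature: float) -> str:
    """Softmax sampling: low temperature exploits the best-scoring hypothesis,
    high temperature keeps exploring improbable ones."""
    weights = {h: math.exp(s / temperature) for h, s in scores.items()}
    total = sum(weights.values())
    r = random.random() * total
    for h, w in weights.items():
        r -= w
        if r <= 0:
            return h
    return h  # fallback for floating-point rounding

scores = {"obvious idea": 2.0, "decent idea": 1.0, "weird idea": 0.1}
child_like = [sample_hypothesis(scores, temperature=5.0) for _ in range(1000)]
adult_like = [sample_hypothesis(scores, temperature=0.2) for _ in range(1000)]
print("child-like sampler picks the weird idea:", child_like.count("weird idea"))
print("adult-like sampler picks the weird idea:", adult_like.count("weird idea"))
```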
And again, that's the sort of thing that people or humans over the course of their
life history and culture seem to be pretty good at doing, in a way that
we don't even really have a good start on thinking about how a computational system could do that.
Now, we're working on it. I mean, you know, we're hoping that we could get a computational system that could do that. And we have some ideas, but that's a dimension that really,
really differentiates what the current powerful AI systems can do and what every four-year-old can do.
Yeah, yeah.
No, I'm granting all of that.
I guess I'm just putting the line at a different point
because, again, people often hold out creativity
and being able to form new goals and insights, intuitions,
as though this were a uniquely human thing
that was very difficult to understand how a machine could do.
But, you know, as you point out, just being able to walk across the room is fairly miraculous
from the point of view of, you know, how hard it is to instantiate in a robot
and to ride a bicycle and to do things that kids routinely learn to do very early. My point is that
once we crack that, these fairly basic problems that evolution has solved for us and really for
even non-human animals in many cases, then we're talking about just incremental gains into
something that is fundamentally beyond the human. I mean, because no one puts the
line there. Nobody says, well, yes, you know, you might be able to build a machine
that could run across a room like a human child and, you know, balance something on its
finger, but you are never going to get something that can produce the creative genius of an
Olympic athlete or a professional basketball player.
But I don't, I mean, that's where I think the intuitions flip.
I mean, once you could build something that could move exactly like a person, then there's
no limit; there's no example of human agility that will be out of reach at that point. And I
think, I guess what I'm reacting to is that people seem to think different rules apply at the level
of cognition and artistic creativity, say. Well, I think it's just an interesting empirical
question. You know, we're collaborating now on a big project with a bunch of people who are doing
things in computer vision, for example. And that's another example where something that we think is very simple and straightforward,
you know, I mean, we don't even feel as if we do any effort to go out into the world and actually
see the objects that are out there in the world. That turns out to be both extremely difficult and
in some ways very mysterious that we can do that as well as we can.
Not only do we identify images, but we can recognize that, you know, there's an object
that's closer to me or an object that's further away from me or that objects have texture
or that objects are really three-dimensional.
Those are all really, really challenging problems.
And an interesting thought is that at a very high abstract level, it may be that we're
solving some of those problems
in the same way that enables us to solve some of these creativity problems. So let me give you an
example. One of the things that the kids very characteristically do is do experiments, except
that when they do experiments, we call it getting into everything. They explore. They're not just
sort of passively waiting for data to come to them.
They can have a problem and actually go out and get the data that's relevant to that problem.
Again, when they do that, we call it playing or getting into everything or making a mess.
And we sit there and nod our heads and try and keep them from killing themselves when they're
doing it. But that's a really powerful technique, a really powerful way of making progress,
actually getting more information about what the structure of the world is like, and then using it to
change what you think about the world, and then repeating by actually going out into
the real world and getting data from the real world.
And that's something that kids are very good at doing.
That seems to play a big role in our ability to do things like move around the world or
perform skilled actions.
And again, that's something that at least at the moment isn't very characteristic of the way the
machines work. Here's another nice example of something that we're actually working on at
Berkeley. So one of the things that we know about kids, about their motivation and affect, is that they're
insatiably curious. They just want to get as much information
as they can about the world around them. And they're driven to go out and get information
and especially get new information, which again is why just thinking about the way that we evolved
isn't going to be enough to answer the problem. One of the things that's true about lots of
creatures, but especially human children, is that they're curiosity-driven. And in work that we've been doing with computer scientists at Berkeley,
you can design an algorithm that instead of, say, wanting to have a higher score,
wants to have the predictions of its model be violated.
So actually, when it has a model and things turn out to be wrong,
instead of being depressed, it goes out and says,
huh, that's interesting. Let
me try that again. Let me see what's going on with that little toy car that it's doing that
strange thing. And you can show that a system that's got that kind of motivation can solve
problems that your typical, say, reinforcement learning system can't solve.
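One common way this kind of curiosity is formalized, presumably in the spirit of the Berkeley work mentioned here, is to give the agent an intrinsic reward equal to how badly its own forward model predicted what would happen next, so that surprise itself becomes the score. A hedged sketch: the `forward_model`, `optimizer`, and `env` objects are assumed placeholders, not a real library API or the group's actual algorithm.

```python
# Sketch: curiosity as intrinsic reward = prediction error of a learned forward model.
import torch
import torch.nn as nn

def curiosity_step(forward_model: nn.Module, optimizer, env, state, action):
    """Take one step in the environment, reward the agent for being surprised,
    then update the forward model so the same surprise fades over time.
    Assumes states are tensors and env.step(state, action) returns the next state."""
    next_state = env.step(state, action)                  # what actually happened
    predicted = forward_model(state, action)              # what the agent expected
    prediction_error = nn.functional.mse_loss(predicted, next_state)

    intrinsic_reward = prediction_error.item()            # surprise is the "score"

    optimizer.zero_grad()
    prediction_error.backward()                           # learn from the surprise
    optimizer.step()
    return next_state, intrinsic_reward
```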
What we're doing is actually comparing children and these curious
AIs on the same problems to see the ways that the children are being curious and how that's
related to the ways that the AIs are being curious. So I think you're absolutely right that
the idea that the place where humans are going to turn out to be unique is in, you know, the great
geniuses or the great artists or the great athletes.
They're going to turn out to have some special sauce that the rest of us don't have. And that's
going to be the thing that AI can't do. I think you're right that that's not really going to be
true, that what those people are doing is an extension of the things that every two and three
year old is equipped to do. But I also think that what the two and three-year-olds are equipped to do is going to turn out to be very different from at least what the current
batch of AI is capable of doing. Yeah. Well, I don't think anyone is going to argue there.
Well, so how do you think of consciousness in the context of this conversation? For me, I'll just
give you a moment.