Lex Fridman Podcast - Max Tegmark: Life 3.0
Episode Date: August 27, 2018. A conversation with Max Tegmark as part of the MIT course on Artificial General Intelligence. A video version is available on YouTube. He is a physics professor at MIT, co-founder of the Future of Life Institute, and author of "Life 3.0: Being Human in the Age of Artificial Intelligence." If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.
Transcript
Welcome to the Artificial Intelligence Podcast.
My name is Lex Fridman, I'm a research scientist at MIT.
This podcast is an extension of the courses on deep learning, autonomous vehicles, and
artificial general intelligence that I've taught and organized.
It is not only about machine learning, robotics, neuroscience, or philosophy, or any one
technical field; it considers all of these avenues of thought in a way that is
hopefully accessible to everyone. The aim here is to explore the nature of human and machine intelligence, the big picture of understanding the human mind and creating echoes of it in the machine. To me, that is one of our
civilization's most challenging and exciting scientific journeys into the unknown.
I will first repost parts of previous YouTube conversations and lecture
Q&As that can be listened to without video.
If you want to see the video version, please go to my YouTube channel.
My username is Lex Fridman there and on Twitter, so reach out and connect if you find these
conversations interesting.
Moving forward, this podcast will be long-form conversations with some of the most fascinating
people in the world who are thinking about the nature of intelligence.
But first, like I said, I will be posting old content,
but now in audio form.
For a little while, I'll probably repeat this intro
for reposting YouTube content like this episode,
and we'll try to keep it to what looks to be just over two minutes,
maybe two thirty.
So in the future, if you want to skip this intro,
just jump to the 2:30 mark.
In this episode, I talk with Max Tegmark.
He's a professor at MIT, a physicist who has spent much of his career studying and writing
about the mysteries of our cosmological universe, and now thinking and writing about the
beneficial possibilities and existential risks of artificial intelligence. He's the co-founder of the Future of Life Institute, author of two books,
Our Mathematical Universe, and Life 3.0.
He is truly an out-of-the-box thinker, so I really enjoyed this conversation.
I hope you do as well. Do you think there's intelligent life out there in the universe?
Let's open up with an easy question.
I'm in the minority here, actually.
When I give public lectures, I often ask
for a show of hands who thinks there's intelligent life out there somewhere else, and almost everyone
puts their hands up, and when I ask why, they'll be like, oh, there's so many galaxies out there,
there's gotta be. But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all.
When we talk about our universe, first of all, we don't mean all of space. We actually mean,
I don't know, you can throw me the universe if you want, it's behind you there. We
simply mean the spherical region of space from which light has had time to reach us so far
during the 13.8 billion years since our Big Bang.
There's more space out there, but this is what we call our universe,
because that's all we have access to.
So is there intelligent life here that's gotten to the point of
building telescopes and computers?
My guess is no, actually. The probability of it happening on any given planet is some number we don't
know.
And what we do know is that the number can't be super high, because there are over a billion
Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older
than Earth, and aside from some UFO believers,
you know, there isn't much evidence that any superadvanced civilization has come here
at all.
And so that's the famous Fermi paradox, right?
And then if you work the numbers, what you find is that if you have no clue what the
probability is of getting life on a given planet, so it could be 10 to the minus 10, 10 to the minus 20, or 10 to the minus 2, any power of 10 is sort of equally likely if you want to be really open-minded.
That translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, 10 to the 18. By the time you get much below 10 to the 16, we pretty much know that there
is nothing else that close.
And when you get beyond 10 to the 26 meters,
that's already outside of here. So my guess is actually that we are the only life in here that's
gotten to the point of building advanced tech, which I think puts a lot of responsibility
on our shoulders not to screw up. I think people who take for granted that it's okay for us to screw up,
have an accidental nuclear war or go extinct somehow, because there's a sort of
Star Trek-like situation out there where some other life forms are going to come and bail us out
and it doesn't matter,
I think they're lulling us into a false sense of security.
I think it's much more prudent to say, let's be really grateful for this amazing opportunity we've had and make the best of it, just in case it is down to us.
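A toy numerical sketch of the argument being made here (my own illustration with assumed, round numbers for planet spacing and the size of the observable universe, not figures from the conversation): each power of ten of uncertainty in the per-planet probability maps onto orders of magnitude in the expected distance to our nearest neighbor.

```python
import math

# Toy sketch of the log-uniform argument above. Assumed, illustrative numbers:
PLANET_SPACING_M = 1e17    # very rough typical spacing between Earth-like planets
UNIVERSE_RADIUS_M = 1e26   # rough radius of our observable universe

# "Any power of ten is equally likely" for the per-planet probability p of
# evolving a tech-building civilization:
for log10_p in (-2, -10, -20, -30):
    p = 10.0 ** log10_p
    # Volume argument: if a fraction p of planets host civilizations, the typical
    # distance to the nearest one grows like spacing / p^(1/3).
    nearest_m = PLANET_SPACING_M / p ** (1.0 / 3.0)
    note = " (outside our observable universe)" if nearest_m > UNIVERSE_RADIUS_M else ""
    print(f"p = 1e{log10_p}: nearest neighbor ~ 1e{math.log10(nearest_m):.0f} m{note}")
```

Under this kind of open-minded prior, a sizable share of the equally likely scenarios put the nearest civilization beyond the edge of our observable universe, which is the sense in which we may well be alone in here.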
So from a physics perspective, do you think intelligent life is unique not just from a sort
of statistical view of the size of the universe, but from the basic matter of the universe? How difficult is it for intelligent
life to come about,
the kind of advanced-tech-building life? Implied in your statement is that it's really
difficult to create something like a human species.
Well, I think what we know is that going from no life to having life that can do our level
of tech, or going beyond that
and actually settling our whole universe with life, there's some major
roadblock there, some great filter as it's sometimes called,
which is tough to get through. That roadblock is either behind us or in front of us.
I'm hoping very much that it's behind us. I'm super excited every time we get a new report from
NASA saying they failed to find any life on Mars. Yes, awesome. Because that suggests that the
hard part, maybe it was getting the first ribosome or some very low-level kind of stepping stone,
is behind us, so that we're home free. Because if that's true, then the future is really only limited by our
own imagination. It would be much suckier if it turns out that this level of life is a
dime a dozen, but maybe there's some other problem. Like as soon as a civilization gets advanced
technology, within a hundred years they get into some stupid fight with themselves and poof.
Yep.
No, that would be a bummer.
Yeah.
So, you've explored the mysteries of the universe, the cosmological universe, the one that's
between us today.
I think you've also begun to explore the other universe, which is sort of the mysterious
universe of the mind, of intelligence, of intelligent life. So is there a common thread between
your interests, or in the way you think about space and intelligence?
Oh, yeah. When I was a teenager, I was already very fascinated by the biggest questions, and I felt that the two biggest
mysteries of all in science were our universe out there, and our universe in here.
So it's quite natural, after having spent a quarter of a century of my career thinking
a lot about this one, to now indulge in the luxury of doing research on this one.
It's just so cool.
I feel the time is ripe now for greatly deepening our understanding of this.
To really explore this one.
Yeah, because I think a lot of people view intelligence as something mysterious that
can only exist in biological organisms like us, and therefore dismiss all talk about
artificial general intelligence as science fiction.
But from my perspective as a physicist,
I am a blob of quarks and electrons
moving around in a certain pattern
and processing information in certain ways.
And this is also a blob of quarks and electrons.
I'm not smarter than the water bottle
because I'm made of different kinds of quarks.
I'm made of up quarks and down quarks, the exact same kind as this. There's no secret sauce, I think,
in me. It's all about the pattern of the information processing. This means that there's no law
of physics saying that we can't create technology, which can help us by being incredibly intelligent
and help us crack mysteries.
In other words, I think we've really only seen the tip of the intelligence iceberg so far.
Yeah, so the perceptronium.
Yeah.
So you coined this amazing term.
It's a hypothetical state of matter, sort of thinking from a physics
perspective: what is the kind of matter that can have, as you're saying, subjective experience
emerge, consciousness emerge? So how do you think about consciousness from this physics perspective?
Very good question. So again, I think many people have underestimated our ability to make progress on this by convincing
themselves it's hopeless because somehow we're missing some ingredient that we need.
There's some new consciousness particle or whatever.
I happen to think that we're not missing anything, and that the interesting thing about consciousness,
that it gives us this amazing subjective experience of colors and sounds and emotions and so on,
is rather something at the higher level about the patterns of information processing.
That's why I like to think about this idea of perceptronium. What does it mean for an
arbitrary physical system to be conscious in terms of what its particles are doing or its
information is doing? I hate carbon chauvinism, this attitude that you have to be
made of carbon atoms to be smart or conscious. So it's something about the information processing that this kind of matter performs.
Yeah, and you know, you can see I have my favorite equations here
describing various fundamental aspects of the world.
I feel that one day, maybe someone who's watching this will
come up with the equations that information processing has to
satisfy to be conscious. I'm quite convinced there is a big
discovery to be made there, because let's face it:
We know that some information processing is conscious because we are conscious.
But we also know that a lot of information processing is not conscious.
Most of the information processing happening in your brain right now is not conscious.
There are like 10 megabytes per second coming in, even just through your visual system. You're not conscious of your heartbeat regulation or most things.
Even if I just ask you to read what it says here, you look at it and then, oh, now you know what
it said. You're not aware of how the computation actually happened. Your consciousness is like
the CEO that got an email at the end with the final answer. So what is it that makes
a difference? I think that's both a great science mystery, we're actually studying it a little bit
in my lab here at MIT, but I also think it's just a really urgent question to answer. For starters,
I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scanner, you had a consciousness
scanner that could figure out whether this person is actually having locked-in
syndrome or is actually comatose.
And in the future, imagine if we build robots or machines that we can have really good
conversations with, which I think is very likely to happen, right?
Wouldn't you want to know if your home helper robot is actually experiencing anything or
just like a zombie?
Would you prefer it?
Would you prefer that it's actually unconscious so that you don't have to feel guilty about switching it off or giving it boring chores?
Or what would you prefer?
Well, we would prefer the appearance of consciousness.
But the question is whether the appearance of consciousness is different than consciousness
itself. And to sort of ask that as a question:
Do you think we need to understand what consciousness is,
solve the hard problem of consciousness
in order to build something like an AGI system?
No, I don't think that.
I think we will probably be able to build things,
even if we don't answer that question.
But if we want to make sure that what happens is a good thing, we better solve it first.
So it's a wonderful controversy you're raising there,
where you have basically three points of view about the hard problem.
There are two different points of view that both conclude that the hard problem of
consciousness is BS.
On one hand, you have some people like Daniel Dennett who say
that consciousness is just BS because consciousness is the same thing as intelligence.
There's no difference. So anything which acts conscious is conscious, just like we are.
And then there are also a lot of people, including many top AI researchers I know, who say,
ah, consciousness is just bullshit because, of course, machines can never be conscious.
Right. They're always going to be zombies. You never have to feel guilty about how you treat them.
And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of
others. I would put myself in this middle camp,
who say that actually some information processing is conscious and some is not. So let's find the
equation which can be used to determine which it is. And I think we've just been a little bit
lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C-word
in a lot of circles,
but we should stop making excuses.
This is a science question, and we can,
and there are ways we can even test any theory
that makes predictions for this.
And coming back to this helper robot, I mean,
you said you would want your helper robot
to certainly act conscious and treat you well, like have conversations
with you and stuff.
But wouldn't you feel a little bit creeped out
if you realized that it was just a glossed-up
tape recorder, you know, that it was just a zombie
and was faking emotion?
Would you prefer that it actually had an experience,
or would you prefer that it's actually not experiencing
anything, so you don't have to feel guilty about what you do to it?
It's such a difficult question because, you know, it's like when you're in a relationship and you say,
well, I love you, and the other person says I love you back. It's like asking, well, do they really love
you back, or are they just saying they love you back? Don't you really want them to actually love you?
It's hard to, it's hard to really know the difference between everything seeming like
there's consciousness present, there's intelligence present, there's affection, passion, love,
and it actually being there.
I'm not sure. Can I ask you a question?
Let's just make it a bit more pointed. The Mass General Hospital is right across the river,
right? Yes. Suppose you're going in for a medical procedure, and they're like, you know,
for anesthesia, what we're going to do is we're going to give you muscle relaxants so you won't
be able to move, and you're going to feel excruciating pain during the whole surgery, but you won't be
able to do anything about it. But then we're going to give you this drug that erases
your memory of it. Would you be cool about that? What's the difference whether you're conscious
of it or not, if there's no behavioral change, right?
Right. That's a really clear way to put it. Yeah, it feels like in that sense, experiencing it is a valuable quality.
So actually being able to have subjective experiences, at least in that case, is valuable.
And I think we humans have a little bit of a bad track record also of making these self-serving
arguments that other entities aren't conscious. People
often say, oh, these animals can't feel pain. It's okay to boil lobsters because we asked
them if it hurt and they didn't say anything. Now there was just a paper out saying lobsters
do feel pain when you boil them, and they're banning it in Switzerland. We did this with slaves
too, often, and said, oh, they don't mind, or they maybe
aren't conscious, or women don't have souls, or whatever.
So I'm a little bit nervous when I hear people just take
as an axiom that machines can't have experience ever.
I think this is just a really fascinating science question, which is
what it is.
Let's research it and try to figure out what it is
that makes the difference between unconscious
intelligent behavior and conscious intelligent behavior.
So in terms of, if you think of a Boston Dynamics humanoid robot being sort of pushed around
with a broom, it starts pushing on the consciousness question.
So let me ask, do you think an AGI system, like a few
neuroscientists believe, needs to have a physical embodiment?
Do you need to have a body, or something like a body? No, I don't think so. You
mean to have a conscious experience? To have consciousness. I do think it helps a
lot to have a physical embodiment to learn the kind of things about
the world that are important to us humans, for sure.
But I don't think the physical embodiment is necessary after you've learned it, to just
have the experience.
Think about when you're dreaming, right?
Your eyes are closed, you're not getting any sensory input, you're not behaving or moving
in any way, but there's still an experience there, right?
And so clearly the experience that you have when you see something cool in your dreams isn't coming from your eyes,
it's just the information processing itself in your brain, which is that experience, right?
But to put it another way, and this argument comes from neuroscience,
the reason you'd want to have a body, something like a physical system, is because you want
to be able to preserve something.
In order to have a self, you could argue, you'd need to have some kind of embodiment of
self to want to preserve.
Well, now we're getting a little bit anthropomorphic,
anthropomorphizing things,
maybe talking about self-preservation instincts.
I mean, we are evolved organisms, right?
Right.
So Darwinian evolution endowed us and other evolved organisms with
a self-preservation instinct, because
those that didn't have those self-preservation genes got cleaned out of the gene pool.
Right. But if you build an artificial general intelligence, the mind space that you can design
is much, much larger than just the specific subset of minds that can evolve. So an AGI mind
doesn't necessarily have to have any
self-preservation instinct. It also doesn't necessarily have to be as individualistic as us.
Like, imagine, first of all, we are also very afraid of death, you know,
but suppose you could back yourself up every five minutes, and then your airplane is about to crash,
and you're like, shucks, I'm going to lose the last five minutes of experiences since my last cloud backup. Dang, you know, it's not as big
a deal. Or if we could just copy experiences between our minds easily, which we
could easily do if we were silicon-based, right? Then maybe we would feel a little bit
more like a hive mind, actually. So...
So I don't think we should take for granted at all that AGI will have to have any of those
sort of competitive alpha-male instincts.
On the other hand, you know, this is really interesting because I think some people go
too far and say, of course, we don't have to have any concerns either that
advanced AI will have those instincts because we can build anything we want.
There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, just pointing out that when we build machines,
we normally build them with some kind of goal, you know, win this chess game,
drive this car safely or whatever.
And as soon as you put in a goal into machine, especially if it's kind of open-ended goal and the machine is very intelligent,
it'll break that down into a bunch of sub-goals.
And one of those sub-goals will almost always be self-preservation, because if it breaks or dies in the process, it's not going to accomplish the goal. Suppose you just build a little robot and you tell it to go down
to the grocery store here and get you some food and cook you an Italian dinner,
and then someone mugs it and tries to break it on the way.
That robot has an incentive to not get destroyed, to defend itself or run away,
because otherwise it's going to fail at cooking you dinner.
It's not afraid of death, but it really wants to complete the dinner-cooking goal,
so it will have a self-preservation instinct
to continue being a functional agent somehow.
And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely
to want to acquire more resources.
So it can do that better.
And it's exactly from those sort of sub-goals
that we might not have intended that some of the concerns
about AGI safety come.
You give it some goal that seems completely harmless.
And then before you realize it,
it's also trying to do these other things
which you didn't want it to do,
and it's maybe smarter than us.
So, it's fascinating.
And let me pause just because I, in a very kind of human-centric way, see fear of death
as a valuable motivator.
So you don't think, you think that's an artifact of evolution, that that's the kind of mind space evolution
created where we're sort of almost obsessed about self-preservation,
kind of generic.
But you don't think it's necessary to be afraid of death?
So not just as a kind of sub-goal of self-preservation, just so you can keep doing the thing, but more
fundamentally,
sort of having the finite thing, like, this ends for you at some point.
Interesting.
Do I think it's necessary for what precisely?
For intelligence, but also for consciousness.
So for those, for both, do you think really like a finite death and the fear of it is important.
So before I can answer, before we can agree on whether it's necessary for intelligence or for
conscience, we should be clear on how we define those two words because a lot of really smart people
define them in very different ways. I was on this panel with AI experts and they couldn't agree on how
to define intelligence even. So I define intelligence simply as the ability to accomplish complex
goals. I like this broad definition, because again, I don't want to be a carbon chauvinist.
And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo, AlphaZero, is quite intelligent.
I don't think alpha-zero has any fear of being turned off because it doesn't understand
the concept of it even.
And similarly with consciousness, I mean, you can certainly imagine a very simple kind of
experience.
If certain plants have any kind of experience, I don't think they're
afraid of dying, as there's nothing they can do about it anyway, so there wouldn't be that much
value in it. But more seriously, I think if you ask not just about being conscious, but maybe
having what we might call an exciting life, where you feel passion and really appreciate the things,
maybe there, perhaps, it does help having a backdrop that, hey,
it's finite.
Let's make the most of this.
Let's live to the fullest.
But if you knew you were going to just live forever, do you think you would change your... Yeah, I mean, in some perspective, it would be an incredibly boring life living forever.
So in the sort of loose, subjective terms that you said, of something exciting, something
that other humans would understand, I think, yeah, it seems that the finiteness
of it is important.
Well, the good news I have for you then is, based on what we understand about cosmology,
everything in our universe is ultimately probably finite.
Although... A big crunch, or big... What's the expected end?
Yeah, we could have a big chill or a big crunch or a big rip or the big snap or death
bubbles.
All of them are more than a billion years away. So we certainly have vastly more
time than our ancestors thought. But it's still pretty hard to squeeze in an infinite
number of compute cycles, even though there are some loopholes that just might be possible.
But, you know, some people like to say that you should live as if
you're going to die in five years or so, and that's sort of optimal.
Maybe it's a good assumption.
We should build our civilization as if it's all finite, to be on the safe side.
Right, exactly.
So you mentioned defining intelligence as
the ability to solve complex goals.
So where would you draw a line?
How would you try to define human level intelligence
and super human level intelligence?
Is consciousness part of that definition?
No.
Consciousness does not come into this definition.
So I think
of intelligence as a spectrum, and there are very many different kinds of goals you can have.
You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor,
a good poet, etc. So intelligence, by its very nature, isn't something you can measure
with one number, some overall goodness.
No, no, there's some people who are better at this, some people are better at that.
Right now we have machines that are much better than us at some very narrow tasks, like
multiplying large numbers fast, memorizing large databases, playing chess, playing go, soon driving cars.
But there's still no machine that can match a human child in general intelligence.
And artificial general intelligence,
AGI, the name of your course, of course,
that is, by its very definition,
the quest to build a machine
that can do everything as well as we can.
That's the old holy grail of AI, from back to its inception in the 60s.
If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth.
But we don't necessarily have to wait for the big impact until machines are better than us at knitting. The really big change
doesn't come exactly at the moment they're better than us at everything. The really big changes come, first, when they start becoming better than us at
doing most of the jobs that we do, because that takes away much of the demand for human labor.
And then the really whopping change comes when they become better than us at AI research.
Right. Because right now the time scale of AI research is limited by the human
research and development cycle of years, typically, you know, how long it takes from one release of
some software or iPhone or whatever to the next. But once Google can replace 40,000 engineers
by 40,000 equivalent pieces of software or whatever,
then there's no reason that has to be years.
It can be in principle much faster.
And the time scale of future progress in AI
and all of science and technology will be driven
by machines, not humans.
And it's this simple point which gives rise to this incredibly fun controversy about whether
there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it.
The idea was articulated by I.J. Good, obviously
way back in the 60s, but you can see Alan Turing and others talking about it even earlier.
So you asked me what exactly I would define human-level at.
The glib answer is to say something which is better than us at all
cognitive tasks, better than any human at all cognitive tasks.
But the really interesting bar, I think, goes a little bit lower than that, actually. It's
when they're better than us at AI programming and general learning, so that they can, if
they want to, get better than us at anything by just studying up.
So better is a key word there, and better is relative to this kind of spectrum of the complexity
of goals
it's able to accomplish.
So that's certainly a very clear definition of human
level.
So it's almost like a sea that's rising.
You can do more and more and more things.
It's a graphic that you show. It's a really nice way to put it. So there are some
peaks, and there's an ocean level elevating, and you solve more and more problems. But, you know,
just to take a pause, we took a bunch of questions on a lot of social networks, and
a bunch of people asked about a sort of slightly different direction, about creativity and
things that perhaps aren't a peak. Human beings are flawed, and perhaps better means having contradictions, being flawed
in some way.
So let me start easy, first of all. You have a lot of cool equations.
Let me ask, what's your favorite equation, first of all?
I know they're all like your children, but... That one.
Which one is that?
This is the Schrödinger equation.
It's the master key of quantum mechanics,
of the micro world.
So with this equation, we can calculate
everything to do with atoms, molecules,
and all the way up.
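For reference, the equation being pointed to, the time-dependent Schrödinger equation, is usually written as follows (my transcription of the standard form, not something read out in the conversation):

```latex
% Time-dependent Schrodinger equation, the "master key" of the micro world:
i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
```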
Yeah, so okay, so quantum mechanics is certainly a beautiful, mysterious
formulation of our world. So I'd like to sort of ask you, just as an example,
it perhaps doesn't have the same beauty as physics does, but in mathematics,
abstractly: Andrew Wiles, who proved Fermat's Last Theorem.
I just saw this recently, and it caught my eye a little bit.
This was 358 years after it was conjectured.
It's this very simple formulation; everybody tried to prove it,
everybody failed.
And so here's this guy who comes along and eventually proves it, and fails to prove it, and then proves
it again in '94.
And he talked about the moment when everything connected into place. In an interview he said,
it was so indescribably beautiful,
that moment when he finally realized the connecting piece of two conjectures. He said,
it was so simple and so elegant. I couldn't
understand how I'd missed it, and I just stared at it in disbelief for 20 minutes. Then during the day,
I walked around the department, and I'd keep coming back to my desk, looking to see if it was still
there. It was still there. I couldn't contain myself. I was so excited. It was the most important moment
of my working life. Nothing I ever do again
will mean as much. So that particular moment, it kind of made me think of what it would
take. And I think we have all been there at small levels. Maybe let me ask, have you
had a moment like that in your life where you just had an idea? It's like, wow, yes.
I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments when I
realized something very cool about physics that just completely made my head
explode. In fact, some of my favorite discoveries
I made, I later realized that they had been discovered earlier by someone who sometimes
got quite famous for it. So it was too late for me to even publish it, but that
doesn't diminish in any way the emotional experience you have when you realize it, like,
wow. Yeah. So what would it take to have that moment, that wow, that was yours in that moment?
So what do you think it takes for an intelligent system, an AGI system, an AIs system to have a moment like that?
That's a tricky question because there are actually two parts to it, right?
One of them is, can it accomplish that proof?
Can it prove that you can never write a^n + b^n = c^n
for integers n greater than 2, and so on?
That's simply a question about intelligence. Can you build machines that are that intelligent?
And I think by the time we get a machine that can independently
come up with that level of proofs, we're probably quite close to AGI. The second question is
a question about consciousness. When will it, and how likely is it, that
such a machine would actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional
response to this, or anything at all akin to human emotion,
where, when it accomplishes its machine goal,
it views it as somehow something very positive and sublime
and deeply meaningful?
I would certainly hope that if in the future,
we do create machines that are peers,
or even our descendants.
Yeah.
I would certainly hope that they do have
this sort of sublime appreciation of life.
In a way, my absolutely worst nightmare would be
that at some point in the future,
the distant future, maybe our cosmos is teeming
with all this post-biological life
doing all this seemingly cool stuff.
And maybe the last humans, by the time our species eventually
fizzles out, will be like, well, that's okay, because we're so proud of our descendants
here. And look what... My worst nightmare is that we haven't solved the consciousness
problem, and we haven't realized that these are all zombies.
They're not aware of anything any more than a tape recorder
that doesn't have any kind of experience.
So the whole thing has just become a play for empty benches.
That would be like the ultimate zombie apocalypse.
I mean, I would much rather in that case that we have these beings,
which can really appreciate how amazing it is.
And in that picture, what would be the role of creativity?
I had a few people ask about creativity. Do you think, when you think about intelligence,
I mean, certainly the story told at the beginning of your book involved creating movies and so on,
sort of making money. You know, you can make a lot of money in our modern world with music and movies. So
if you are an intelligent system, you may want to get good at that. But that's not necessarily
what I mean by creativity. Is it important, for accomplishing complex goals where the sea is rising,
for there to be something creative? Or am I being very human-centric in thinking creativity is somehow special relative
to intelligence? My hunch is that we should think of creativity as an aspect of intelligence.
And we have to be very careful with human vanity. We have this tendency to very often want to say,
as soon as machines can do something,
we try to diminish it and say,
oh, but that's not like real intelligence.
Is it not creative, or this, or that?
On the other hand,
if we ask ourselves to write down a definition
of what we actually mean by being creative, what we mean by Andrew Wiles,
what he did there, for example, don't we often mean that someone takes a very unexpected leap?
It's not like taking 573 and multiplying it by 224 by just a step of straightforward, cookbook-
like rules, right? You maybe make a
connection between two things that people had never thought
were connected.
It's very surprising.
Or something like that.
I think this is an aspect of intelligence,
and this is actually one of the most important aspects of it.
Maybe the reason we humans tend to be better at it
than traditional computers is because
it's something that comes more naturally if you're a neural network than if you're a traditional
logic-gate-based computer machine.
We physically have all these connections.
If you activate here, it activates here, activates here. My hunch is that if we ever build a machine where you could just give it
the task, hey, you say, hey, you know, I just realized I want to travel around the world
instead this month. Can you teach my AGI course for me? And it's like, okay, I'll do it. And
it does everything that you would have done, and improvises. That would, in my mind, involve a
lot of creativity. Yeah. So that's actually a beautiful way to put it. I think we do try to grasp
at the definition of intelligence as everything we don't understand how to build. So we, as humans, try to find things that we have that machines don't have.
And maybe creativity is just one of the things, one of the words we used to describe that.
That's a really interesting way to put it.
I don't think we need to be that defensive.
I don't think anything good comes out of saying, well, we're somehow special.
Right.
It's, um,
important to realize there are many examples in history where trying to pretend that we're somehow superior
to all other intelligent beings
has led to pretty bad results, right?
Nazi Germany, they claimed they were somehow superior to other people. Today,
we still do a lot of cruelty to animals by saying that we're so superior somehow and they
can't feel pain. Slavery was justified by the same kind of just really weak arguments.
I don't think that, if we actually go ahead and build artificial general intelligence
that can do things better than us, we should try to found our self-worth on some sort of
bogus claims of superiority in terms of our intelligence.
I think we should instead just find our calling and the meaning of life from the experiences
that we have.
I can have very meaningful experiences, even if there are other people who are smarter
than me.
When I go to a faculty meeting here and I'm talking about something, and I suddenly realize, oh, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize.
I don't have one.
Does that make me enjoy life any less or enjoy talking to those people less?
Of course not.
And I feel very honored and privileged to get to interact with other very intelligent beings,
better than me at a lot of stuff.
So I don't think there's any reason why we can't have the same approach with intelligence
machines.
That's really interesting.
People don't often think about that.
They think about, if there are machines that are more intelligent, you naturally
think that that's not going to be a beneficial type of intelligence.
You don't realize it could be, you know, like peers with Nobel Prizes that would be just
fun to talk with,
and they might be clever about certain topics, and you can have fun having a few drinks with
them.
Well, also, you know, another example we can all relate to of why it doesn't have to be a terrible
thing to be in the presence of people who are even smarter than us all around: when you and I were both
two years old, I mean, our parents were much more intelligent than us, right? Worked out okay,
because their goals were aligned with our goals. And that, I think, is really the number one key
issue we have to solve: the value alignment problem. Exactly because people who see too many
Hollywood movies with lousy science fiction plot lines, they worry about the wrong thing,
right? They worry about some machine suddenly turning evil. It's not malice that is the concern. It's competence. By my definition, being intelligent
makes it very competent. If you have a more intelligent Go-playing
computer playing against a less intelligent one, and we define intelligence as the ability
to accomplish the goal of winning, right?
It's going to be the more intelligent one that wins.
And if you have a human, and then you have an AGI that's more intelligent in all ways, and
they have different goals, guess who's going to get their way, right?
So I was just reading about this particular rhinoceros species that was driven extinct just a few
years ago.
It's a bummer, looking at this cute picture of a mommy rhinoceros with its child.
And why did we humans drive it to extinction?
It wasn't because we were evil rhino haters as a whole.
It was just because our goals weren't aligned with those of the rhinoceros, and it didn't
work out so well for the rhinoceros, because we were more intelligent, right? So I think it's
just so important that, if we ever do build AGI, before we unleash anything, we have to make sure that
it learns to understand our goals, that it adopts our goals, and it retains those goals.
So the cool, interesting problem there is us as human beings trying to formulate our values.
So, you know, you could think of the United States Constitution as a way that people sat down,
at the time a bunch of white men, which is a good example, I should say,
we should say. They formulated the goals for this country, and a lot of people agree that those
goals actually held up pretty well. It's an interesting formulation of values, and it failed miserably
in other ways. So for the value alignment problem and the solution to it, we have to be able to
put on paper, or in a program, human
values. How difficult do you think that is? Very. But it's so important. We really have
to give it our best. And it's difficult for two separate reasons. There's the technical
value alignment problem of figuring out just how to make machines understand
our goals, adopt them, and retain them.
And then there's this separate part of it,
the philosophical part: whose values anyway?
And since it's not like we have any great consensus
on this planet on values,
what mechanisms should we create then
to aggregate and decide, okay, what's a good compromise?
Right.
That second discussion can't just be left to tech nerds like myself, right?
That's right.
And if we refuse to talk about it, and then AGI gets built, who's going to be actually
making the decision about whose values?
It's going to be a bunch of dudes in some tech company, right?
Yeah.
Are they necessarily
so representative of all of humankind that we want to trust them, or are they even uniquely qualified to speak to future human happiness just because they're good at programming AI? I'd much rather have this be a really inclusive conversation.
So you create a beautiful vision that includes diversity, cultural diversity, and various perspectives on discussing rights, freedoms, human dignity.
But how hard is it to come to that consensus?
Do you think it's certainly a really important thing that we should all try to do, but do
you think it's feasible?
I think there's no better way to guarantee failure
than to try to refuse to talk about it or refuse to try.
And I also think it's a really bad strategy to say,
okay, let's first have a discussion for a long time.
And then once we reach complete consensus,
then we'll try to load that into the machine.
No, we shouldn't let perfect be the enemy of good.
Instead, we
should start with the kindergarten ethics that pretty much everybody agrees on, and put that
into our machines. Now, we're not even doing that. Look, anyone who builds a passenger aircraft
wants it to never, under any circumstances, fly into a building or a mountain, right?
Yet the September 11 hijackers were able to do that.
And even more embarrassingly, you know,
Andreas Lubitz, the depressed Germanwings pilot,
when he flew his passenger jet into the Alps,
killing over 100 people, he just told the autopilot to do it.
He told the freaking computer to change the altitude to 100 meters.
And even though it had the GPS maps and everything, the computer was like,
okay. So we should take those very basic values, where the problem is not that we don't agree,
the problem is just that we've been too lazy to try to put it into our machines, and make sure
that from now on airplanes, which all have computers in them, will just refuse to do something like that.
Go into safe mode, maybe lock the cockpit door,
and go to the nearest airport.
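As a toy illustration of the kind of hardwired check being described (my own sketch, with a hypothetical function and made-up margins, not any real avionics interface or Tegmark's specific proposal), the logic amounts to refusing a commanded altitude that would put the aircraft into terrain:

```python
# Toy sketch of a hardwired "kindergarten ethics" check in an autopilot
# (illustrative only; not a real avionics API).

SAFE_MODE_ALTITUDE_M = 3000  # hypothetical minimum safe altitude for this region

def request_altitude_change(target_altitude_m: float, terrain_height_m: float) -> float:
    """Accept a commanded altitude only if it keeps the aircraft clear of terrain.

    Otherwise refuse, hold a safe altitude, and flag the event so the system can
    divert to the nearest airport.
    """
    minimum_safe_m = terrain_height_m + 300  # required clearance (assumed margin)
    if target_altitude_m < minimum_safe_m:
        # Refuse the command: the "never fly into a mountain" value is hardwired
        # below the level of whoever is giving the orders.
        print("Refusing commanded altitude; entering safe mode, diverting to nearest airport.")
        return max(SAFE_MODE_ALTITUDE_M, minimum_safe_m)
    return target_altitude_m

# The scenario described above: commanding 100 m over roughly 2000 m terrain.
print(request_altitude_change(target_altitude_m=100, terrain_height_m=2000))
```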
And there's so much other technology in our world as well now
where it's really becoming
quite timely to put in some sort of very basic values like this.
Even in cars, we've had enough vehicle terrorism attacks
by now, where people have driven trucks
and vans into pedestrians,
that it's not at all a crazy idea
to just have that hardwired into the car.
Because yeah, there are a lot of,
there's always gonna be people who,
for some reason, wanna harm others.
But most of those people don't have
the technical expertise to figure out
how to work around something like that. So if the car just won't do it, it helps. So let's start there.
That's a great point. So not chasing perfect. There are a lot of things
that most of the world agrees on. Yeah, let's start there. Let's start there.
And then once we start there, we'll also get into the habit of having these kinds
of conversations about, okay, what else should we put in here, and have these discussions.
It should be a gradual process then. Great. But that also means describing these things,
and describing them to a machine. So one thing, we had a few conversations with Stephen Wolfram,
I'm not sure if you're familiar with Stephen Wolfram.
Oh, yeah, I know him quite well.
So he works on a bunch of things, but cellular automata,
these simple, computable things, these computation systems.
And he kind of mentioned that we probably already have,
within these systems, something that's AGI,
meaning, like, we just don't know it because we can't talk to it.
So if you give me this chance to try to at least form a question out of this,
it's an interesting idea to think that we can have intelligent systems,
but we don't know how to describe something to them and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI
to explain itself. So what are your thoughts of natural language processing or some kind
of other communication? How does the AI explain something to us? How do we explain something
to it, to machines? Or do you think of it differently? So there are two separate parts to your question there. One of them has to do with communication,
which is super interesting, and I'll get to that in a sec. The other is whether we already have
AGI and we just haven't noticed it. There, I beg to differ. I don't think there's anything in any cellular automaton, or anything, or the internet
itself, or whatever, that has artificial general intelligence in that it can really do exactly
everything
we humans can do better.
I think the day that happens, when that happens, we will very soon notice, and we'll probably
notice even before, because it will happen in a very, very big way. But for the second part, though...
Can I answer?
Sorry.
So, because you have this beautiful way of formulating consciousness as, you know, as information
processing, and you can think of intelligence as information processing, and you
can think of the entire universe as
these particles and these systems roaming around that have this information-processing power.
You don't think there is something with the power to process information in the way that we human beings do that's out there,
that needs to be sort of connected to? It seems a little bit philosophical, perhaps,
but there's something compelling to the idea
that the power is already there,
and that the focus should be more on being able
to communicate with it.
Well, I agree that in a certain sense,
the hardware processing power is already out there,
because our universe itself can be thought of as
being a computer already, right? It's constantly computing how the water
waves evolve in the Charles River and how to move the air molecules around. Seth Lloyd has pointed out,
my colleague here, that you can even, in a very rigorous way, think of our entire universe as being
a quantum computer. It's pretty clear that our universe supports
this amazing processing power, because you can even,
within this physics computer that we live in,
build actual laptops and stuff.
So clearly the power is there.
It's just that most of the compute power that nature has,
it's, in my opinion, kind of wasted on boring stuff,
like simulating yet another ocean wave
somewhere where no one is even looking, right?
So, in a sense, what life does,
what we are doing when we build computers, is we're
re-channeling all this compute that nature is doing anyway
into doing things that are more interesting
than just yet another ocean wave,
to do something cool here.
So the raw hardware power is there, sure. But even just, like, computing
what's going to happen for the next five seconds in this water bottle, you know, takes a ridiculous
amount of compute if you do it on a human-built computer. Yeah. This water bottle just did it. But that does
not mean that this water bottle has AGI, because AGI means it should also be able to, like, have written
my book, done this interview.
Yes.
And I don't think it's just a communication problem.
I really
don't think it can do it.
Although Buddhists say, when they watch the water, that there is some beauty, that there's
some depth in nature that they can communicate with.
Communication is also very important because, I mean, look, part of my job is being a teacher.
And I know some very intelligent professors,
even, who just have a bit of a hard time communicating.
They have all these brilliant ideas,
but to communicate with somebody else,
you have to also be able to simulate their own mind.
Yes, empathy.
Build a good enough understanding, a model of their mind, that you can say things that they will understand.
That's quite difficult.
That's why today it's so frustrating if you have a computer that makes some cancer diagnosis,
and you ask it, well, why are you saying I should have this surgery?
And if it can only reply,
I was trained on five terabytes of data,
and this is my diagnosis, boop, beep, beep,
it doesn't really instill a lot of confidence, right?
So I think we have a lot of work to do on communication there.
So, what kind of,
I think you're doing a little bit of
work in explainable AI. What do you think are the most promising avenues? Is it mostly
about sort of the Alexa problem of natural language processing, of being able to actually
use human-interpretable methods of communication, so being able to talk to a system and it
talks back to you, or are there some more fundamental problems to be solved?
I think it's all of the above.
The natural language processing is obviously important,
but there are also more nerdy fundamental problems.
Like, if you take, you play chess?
Of course, you're Russian.
I have to.
When did you learn Russian?
[A brief exchange in Russian follows.]
I taught myself a little Russian from a book.
I love languages, you know?
Wow.
That's really impressive.
But my point was, if you play chess,
have you looked at the AlphaZero games,
the actual games?
Just checking out some of them, they are just mind-blowing,
really beautiful.
And if you ask, how did it do that?
You go talk to Demis Hassabis,
I don't know, others from DeepMind, and all they'll ultimately
be able to give you is big tables of numbers, matrices that define the neural network. And you
can stare at those tables of numbers till your face turns blue, and you're not going to
understand much about why it made that move. And even if you have natural language processing that can tell you in human language about, oh, five, seven, point two, eight,
it's still not going to really help. So I think there's a whole spectrum of fun challenges there involved in taking a computation that does intelligent things and transforming it into something
equally good, equally intelligent, but that's more understandable.
And I think that's really valuable, because I think as we put machines in charge of ever more
infrastructure in our world, the power grid, the trading on the stock market,
weapons systems and so on, it's absolutely crucial that we can trust these AIs
to do what we want.
And trust really comes from understanding
in a very fundamental way.
And that's why I'm working on this,
because I think if we're going
to have some hope of ensuring that machines have adopted
our goals and that they're going to retain them,
that kind of trust,
I think, needs to be based on things you can actually understand, preferably even
prove theorems on. Even with a self-driving car, right? If someone
just tells you it's been trained on tons of data and it never crashed, it's less reassuring
than if someone actually has a proof. Maybe it's a computer-verified proof, but still, it says that under no circumstances is this car just going to swerve into oncoming traffic.
And that kind of information helps to build trust and the alignment of goals, at least the
awareness that your goals, your values, are aligned.
And I think even in the short term, if you look at, you know, today, the absolutely pathetic state of cybersecurity that we have.
Where is it?
Three billion Yahoo accounts were hacked, almost every
American's credit card, and so on.
You know, why is this happening?
It's ultimately happening because we have software that nobody fully understood how it worked.
That's why the bugs hadn't been found, right?
And I think AI can be used very effectively for offense, for hacking,
but it can also be used for defense, hopefully automating verifiability and creating systems that are
built in different ways so you can actually prove things about them. Right, and that's important.
So, speaking of software that nobody understands how it works:
of course, a bunch of people asked about your paper, about your thoughts on "Why Does Deep and Cheap Learning Work So Well?"
That's the paper. But what are your thoughts
on deep learning? These kinds of simplified models of our own brains have been able to do some
successful perception work, pattern recognition work, and now, with AlphaZero and so on, do
some clever things. What are your thoughts about the promise and limitations of this piece? Great. I think there are a number of very important insights,
very important lessons we can all draw from these kinds of successes.
One of them is, when you look at the human brain,
you see it's very complicated,
10 to the 11 neurons,
and there are all these different kinds of neurons,
and yada yada, and there's been a long debate about
whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly
answer that question: no. It's enough to have just one kind. If you look under the hood of AlphaZero,
there's only one kind of neuron, and it's a ridiculously simple mathematical thing.
So, it's just like in physics:
if you have a gas with waves in it, it's not the detailed nature of the molecules that matters,
it's the collective behavior somehow. Similarly, it's this higher-level structure of the
network that matters, not that you have 20 kinds of neurons. I think our brain is such a complicated mess because it wasn't evolved just to be intelligent.
It was evolved to also be self-assembling and self-repairing, right,
and evolutionarily attainable, and so on.
Yeah.
So my hunch is that we're going to understand how to build
AGI before we fully understand how our brains work. Just like we understood how to build flying machines
long before we were able to build a mechanical bird. Yeah, that's right. You've given
that example exactly, mechanical birds and airplanes, and airplanes do a pretty
good job of flying without really mimicking
bird flight. And even now, 100 years later, did you see the TED Talk with
this German mechanical bird? I heard you mention it. I haven't seen it. It's amazing.
But even after that, we still don't fly in mechanical birds, because it turned out the way we came
up with is simpler
and it's better for our purposes. And I think it might be the same there.
That's one lesson.
And another lesson, which is what the paper was about: well, first, as a physicist, I thought it was fascinating how there's actually a very close mathematical relationship between artificial neural networks and a lot of things that we've studied in physics, which go by nerdy names like the renormalization group equation and Hamiltonians and yada yada yada. And when you look a little more closely at this, at first there seemed to be something crazy here that doesn't make sense, because we know that even a super simple neural network can be trained to tell apart cat pictures and dog pictures, right? You can do that very well now. But if you think about it a little bit, you could convince yourself that it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there's two to the power of one million possible images, which is way more than there are atoms in our universe, right?
And then for each one of those, I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe. So clearly I can't store that under the hood of my GPU or my computer. Yet somehow it works.
So what does that mean?
Well, it means that out of all of the problems
that you could try to solve with a neural network,
almost all of them are impossible to solve
with a reasonably sized one.
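Just to spell out the arithmetic behind this counting argument, here is a quick sketch; the roughly 10^80 atom figure is the commonly quoted estimate for the observable universe, not something stated in the conversation.

```python
import math

# Counting-argument sketch: how many distinct one-megapixel black-and-white
# images exist, versus a rough estimate of the number of atoms in the
# observable universe (~10^80, the commonly quoted figure).
pixels = 1_000_000
image_count_digits = math.floor(pixels * math.log10(2)) + 1  # digits of 2**1_000_000
atom_count_digits = 81                                       # digits of 10**80

print(f"2^{pixels:,} has about {image_count_digits:,} decimal digits")  # ~301,030
print(f"10^80 has {atom_count_digits} digits")

# An arbitrary image-to-probability function would need one stored number per
# image, i.e. 2^1,000,000 numbers, so no physical memory could hold it explicitly.
```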
But then what we showed in our paper was that the fraction of all the problems that you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny little part.
And amazingly, they're basically the same part.
Yeah, it's almost like our world was created for it. I mean, they kind of come together.
Yeah, you could say maybe the world was created for us, but I have a more modest interpretation, which is that instead evolution endowed us with neural networks precisely for that reason, right? Because this particular architecture, as opposed to the one in your laptop, is very, very well adapted to solving the kinds of problems that nature kept presenting our ancestors with, right? So it makes sense: why do we have a brain in the first place? It's to be able to make predictions about the future and so on. So if we had a sucky system that could never solve that, it wouldn't have evolved.
So this is, I think, a very beautiful fact. We also realized that there had been earlier work on why deeper networks are good, but we were able to show an additional cool fact there, which is that even for incredibly simple problems, like suppose I give you a thousand numbers and ask you to multiply them together, you can write a few lines of code, boom, done, trivial.
If you try to do that with a neural network that has only one single hidden layer in it, you can do it, but you're going to need two to the power of a thousand neurons to multiply a thousand numbers, which is again more neurons than there are atoms in our universe. But if you allow yourself to make it a deep network with many layers, you only need about four thousand neurons. It's perfectly feasible.
That's really interesting.
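As a rough illustration of why depth helps here (not the construction from the paper, just a toy sketch): a log-depth tree of pairwise products computes the product of n numbers with about n two-input units, while the single-hidden-layer count quoted above grows like 2^n.

```python
def product_tree(values):
    """Multiply a list of numbers with a log-depth tree of pairwise products,
    counting how many two-input 'units' and how many layers were used."""
    layer = list(values)
    depth = units = 0
    while len(layer) > 1:
        nxt = [layer[i] * layer[i + 1] for i in range(0, len(layer) - 1, 2)]
        units += len(nxt)
        if len(layer) % 2 == 1:      # odd element passes through to the next layer
            nxt.append(layer[-1])
        layer = nxt
        depth += 1
    return layer[0], depth, units

n = 1000
result, depth, units = product_tree([1.001] * n)
print(f"n={n}: about {depth} layers and {units} pairwise units")  # ~10 layers, 999 units
# versus the roughly 2^1000 neurons mentioned above for a single hidden layer
```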
So, on another architecture type: I mean, you mentioned Schrödinger's equation. What are your thoughts about quantum computing and the role of this kind of computational unit in creating an intelligent system?
In some Hollywood movies, which I will not mention by name because I don't want to spoil them, the way they get AGI is by building a quantum computer, because the word quantum sounds cool and so on.
That's right.
My view is, first of all, I think we don't need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. You know, I even wrote a paper about that many years ago: I calculated the so-called decoherence time, how long it takes until the quantum computerness of what your neurons are doing gets erased by just random noise from the environment.
And it's about 10 to the minus 21 seconds.
So as cool as it would be
to have a quantum computer in my head,
I don't think that fast, you know.
On the other hand, there are very cool things
you could do with quantum computers.
Or that I think we'll be able to do soon, when we get bigger ones, that might actually help machine learning do even better than the brain. So, for example, this is just a moonshot, but learning is very much the same thing as search. If you're trying to train a neural network to really learn to do something well, you have some loss function, you have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valleys, where each dimension of the landscape corresponds to some number you can change.
You're trying to find the minimum.
And it's well known that if you have a very high dimensional landscape,
complicated thing, it's super hard to find the minimum, right?
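Here is a tiny, made-up illustration of that "hard to find the minimum" problem: plain gradient descent on a one-dimensional landscape with two valleys, where an unlucky starting point gets stuck in the shallower one. It only shows the classical behavior being described; the quantum tricks mentioned next are not modeled here.

```python
def loss(x):
    # A toy landscape with two valleys: a shallow local minimum near x ~ +0.95
    # and a deeper global minimum near x ~ -1.05.
    return x**4 - 2 * x**2 + 0.4 * x

def grad(x, eps=1e-6):
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)   # numerical derivative

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)   # always roll downhill; no way to climb out of a valley
    return x

for start in (+2.0, -2.0):
    x = gradient_descent(start)
    print(f"start {start:+.1f} -> x = {x:+.3f}, loss = {loss(x):+.3f}")
# Starting at +2.0 we get stuck in the shallow valley; starting at -2.0 we happen
# to roll into the deeper one.
```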
Quantum mechanics is amazingly good at this.
Right.
If I want to know what's the lowest-energy state this water can possibly have, it's very hard to compute, but nature will happily figure it out for you.
If you just cool it down, make it very, very cold.
If you put a ball somewhere, it'll roll down to its minimum.
And this happens, metaphorically, in the energy landscape too.
And quantum mechanics even uses some clever tricks which today's machine learning systems
don't. Like, if you're trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again.
That's really interesting. Yeah. So it may be, for example, that we'll one day use quantum computers to help train neural networks better.
That's really interesting. Okay, so as a component of the learning process, for example. Let me ask, sort of wrapping up here a little bit: let me return to the questions of our human nature and love, as I mentioned.
So, you mentioned sort of a helper robot, and you could think of also personal robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, in a human-level AI intelligence system? Do you think we'll ever see that kind of connection? Or, you know, in all this discussion about solving complex goals, is this kind of human social connection one of the goals, on the peaks and valleys with the rising sea levels, that we'll be able to achieve? Or do you think that's something that's ultimately, or at least in the short term, relative to the other goals, not achievable?
I think it's all possible.
And I mean, there is a very wide range of guesses, as you know, among AI researchers about when we're going to get AGI. Some people, you know, like our friend Rodney Brooks, say it's going to be hundreds of years. And then there are many others who think it's going to happen much sooner. In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades.
So if that happens, of course,
then I think these things are all possible.
But in terms of whether it will happen,
I think we shouldn't spend so much time asking,
what do we think will happen in the future?
As if we are just some sort of pathetic, passive bystanders waiting for the future
to happen to us.
Hey, we're the ones creating this future, right?
So we should be proactive about it and ask ourselves what sort of future we would like to have happen.
That's right.
And try to make it like that. Would I prefer some sort of incredibly boring, zombie-like future where there are all these mechanical things happening and there's no passion, no emotion, maybe no experience even? No, I would, of course, much rather prefer a future where the things that we value the most about humanity, our subjective experience, passion, inspiration, love, you know, a future where those things do exist. I think ultimately it's not our universe giving meaning to us, it's us giving meaning to our universe.
If we build more advanced intelligence, let's make sure we're building it in such a way
that meaning is part of it.
A lot of people who seriously study this problem and think about it from different angles worry that, in the majority of cases they think through, what happens is not beneficial to humanity. So, what are your thoughts? You know, I really don't want people to be terrified. What's the way for people to think about it so that we can actually solve it and make it better?
Yeah, I don't think panicking is going to help in any way. It's not going to increase the chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out?
Right.
No, of course not. I think, yeah, there are of course ways in which things can go horribly wrong.
First of all, it's important when we think about this thing,
about the problems and risks, to also remember how huge the
upsides can be if we get it right.
Everything we love about society and civilization is a
product of intelligence.
So if we can amplify our intelligence with machine intelligence and no longer lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that. So that can be a motivator, I think, reminding ourselves that
the reason we try to solve problems is not just because we're trying to avoid gloom,
but because we're trying to do something great. But then in terms of the risks, I think the really important question is to ask,
what can we do today that will actually help make the outcome good, right?
And dismissing the risk is not one of them.
I find it quite funny often when I'm in discussion panels about these things, how the people who work for
companies always say, oh, nothing to worry about, nothing to worry about, nothing to worry
about. And it's only academics who sometimes express concerns. That's not surprising at all if you think about it. Upton Sinclair pointed out, right, that it's hard to make a man believe in something when his income depends on not believing in it. And frankly, we know a lot of these people in companies are just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are going to put a picture of a Terminator robot next to your quote.
So the issues are real.
And the way I think about the issue is basically that the real choice we have is, first of all, are we going to just dismiss the risks and say, well, let's just go ahead and build machines that can do everything we can do better and cheaper, let's just make ourselves obsolete as fast as possible.
What could possibly go wrong?
That's one attitude.
The opposite attitude, I think, is to say, there is incredible potential.
Let's think about what kind of future we're really, really excited about.
What are the shared goals that we can really aspire to?
And then let's think really hard about how we can actually get there.
So start with that; don't start by thinking about the risks.
Start thinking about the goals.
Goals, yeah.
And then when you do that, then you can think about the obstacles you want to avoid.
Right.
I often get students coming in right here into my office for career advice and always ask them this
very question, where do you want to be in the future?
Right. If all she can say is, oh, maybe I'll get cancer, maybe I'll get run over, listing obstacles instead of the goal, she's just going to end up a hypochondriac, paranoid. Whereas if she comes in with fire in her eyes and says, I want to be there, then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much, much healthier attitude.
And I feel it's very challenging to come up with a vision for the future, which we are
unequivocally excited about.
I'm not just talking now in vague terms, like, yeah, let's cure cancer, fine. I'm talking about what kind of society
do we want to create?
What do we want it to mean to be human
in the age of AI?
In the age of AGI.
So if we can have this broad, inclusive conversation and gradually start converging towards some future, with some direction at least that we want to steer towards, right, then we'll be much more motivated to constructively take on the obstacles.
And if I try to wrap this up in a more succinct way, I think we can all agree, already now, that we should aspire to build AGI that doesn't overpower us, but that empowers us.
And think of the many various ways it can do that, whether that's, from my side, the world of autonomous vehicles. I'm personally, actually, from the camp that believes human-level intelligence is required to achieve something like vehicles that would actually be something we would enjoy using
and being part of.
So that's one example, and certainly there's a lot of other types of robots and medicine
and so on.
So focusing on those, and then coming up with the obstacles, coming up with the ways that it can go wrong, and solving those one at a time.
And just because you can build an autonomous vehicle, even if you could build one that drives just fine, maybe there are some things in life that we would actually want to do ourselves.
That's right.
Like, for example, if you think of our society as a whole, there are some things that we
find very meaningful to do.
And that doesn't mean we have to stop doing them just because machines can do them better.
You know, I'm not going to stop playing tennis.
Just because someone builds a tennis robot that can beat me.
Yeah.
People are still playing chess and even go.
Yeah.
And in the near term, even, some people are advocating basic income to replace jobs. But if the government is going to be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also hire a lot more teachers and nurses, the kinds of jobs in which people often find great fulfillment, right? We get very tired of hearing politicians saying, oh, we can't afford hiring more teachers, but we're maybe going to have basic income. We could have much more serious research and thought into what gives meaning to our lives; the jobs give so much more than income, right? And then think about, in the future, what are the roles that we want to have, with people still feeling empowered by machines.
And, you know, I come from Russia, from the Soviet Union, and I think for a
lot of people in the 20th century, going to the moon, going to
space was an inspiring thing. I feel like the universe of the mind, so AI, understanding and creating intelligence, is that for the 21st century. So it's really surprising, and I've heard you mention this, it's really surprising to me, both on the research funding side, that it's not funded as greatly as it could be, but most importantly on the politician side, that it's not part of the public discourse except in the killer-bots, Terminator kind of view, and that people are not yet, I think, perhaps excited by the possible positive future that we can build together.
We should be, because politicians usually just focus on the next election cycle, right?
The single most important thing I feel we humans have learned in the entire history of science is that we are the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was a small part of something grander, right? Planet, solar system, the galaxy, you know, clusters of galaxies, universe. And we now know that the future has just so much more potential than our ancestors could ever have dreamt of.
In this cosmos, imagine if all of Earth was completely devoid of life except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever, and then go extinct in one week, even though Earth was going to continue on for so much longer? That sort of attitude, I think, we have now on the cosmic scale. Life can flourish on Earth, not for four years, but for billions of years. I can even tell you how to move it out of harm's way when the sun gets too hot. And then we have so much more resources out here. Today, maybe there are a lot of other planets with bacteria, or cow-like life, on them, but most of this, all this opportunity, seems, as far as we can tell, to be largely dead, like a vast desert. And yet we have the opportunity to help life flourish throughout this cosmos for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or to the right, and look up into the sky and realize, hey, you know, we can do such incredible things.
Yeah. And that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe, and it's wonderful.
That is exactly why Elon Musk is so misunderstood, right? People misconstrue him as some kind of pessimistic doomsayer. The reason he cares so much about safety is because he, more than almost anyone else, appreciates these amazing opportunities that we'll squander if we wipe ourselves out here on Earth. We're not just going to wipe out the next generation, but all generations, and this incredible opportunity that's out there, and that would really be a waste. And for people who think that we would be better off without technology, let me just mention that if we don't improve our technology, the question isn't whether humanity is going to go extinct. The question is just whether we're going to get taken out by the next big asteroid, or the next supervolcano, or something else dumb that we could easily prevent with more tech, right?
And if we want life to flourish throughout the cosmos, AI is the key to it. As I mention in a lot of detail in my book right there, even many of the most inspired sci-fi writers, I feel, have totally underestimated the opportunities
for space travel, especially to other galaxies,
because they weren't thinking about the possibility of AI,
which just makes it so much easier.
Right, yeah. So that goes to your view of AGI that enables our progress, that enables a better life. That's a beautiful way to put it, and something to strive for. So Max, thank you so much. Thank you for your time today. It's been awesome.
Thank you so much. Thanks.
Thank you for watching.