Lex Fridman Podcast - #130 – Scott Aaronson: Computational Complexity and Consciousness
Episode Date: October 12, 2020
Scott Aaronson is a quantum computer scientist. Please support this podcast by checking out our sponsors:
- SimpliSafe: https://simplisafe.com/lex and use code LEX to get a free security camera
- Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get $200 off
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free
- BetterHelp: https://betterhelp.com/lex and use code LEX to get 10% off

EPISODE LINKS:
Scott's Blog: https://www.scottaaronson.com/blog/
Our previous episode: https://www.youtube.com/watch?v=uX5t8EivCaM

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
07:46 - Simulation
12:38 - Theories of everything
18:18 - Consciousness
40:32 - Roger Penrose on consciousness
50:44 - Turing test
54:31 - GPT-3
1:03:02 - Universality of computation
1:09:33 - Complexity
1:15:38 - P vs NP
1:27:57 - Complexity of quantum computation
1:40:03 - Pandemic
1:53:49 - Love
Transcript
The following is a conversation with Scott Aaronson, his second time on the podcast.
He is a professor at UT Austin, director of the Quantum Information Center,
and previously a professor at MIT. Last time we talked about quantum computing,
this time we talk about computational complexity, consciousness, and theories of everything.
I'm recording this intro, as you may be able to tell, in a very strange room in the
middle of the night. I'm not really sure how I got here or how
I'm going to get out, but the Hunter S. Thompson saying, I think,
applies to today and the last few days, and actually the last couple of weeks.
Life should not be a journey to the grave with the intention of arriving safely in a pretty
and well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly
used up, totally worn out, and loudly proclaiming, wow, what a ride. So I figured, whatever I'm up to here, and yes, lots of wine is involved, I'm going to have to improvise, hence this
recording. Okay, quick mention of each sponsor, followed by some thoughts related to the
episode. First sponsor is SimpliSafe, a home security company I use to monitor and protect my apartment.
Though of course, I'm always prepared with a fallback plan.
As a man in this world must always be.
Second sponsor is Eight Sleep.
A mattress that cools itself, measures heart rate variability, has an app, and has
given me yet another reason to look forward to sleep, including the all-important power
nap.
3rd sponsor is ExpressVPN. The VPN I've used for many years to protect my privacy on
the internet.
Finally, the fourth sponsor is BetterHelp.
Online therapy, for when you want to face your demons with a
licensed professional, not just by doing David Goggins-like
physical challenges, like I seem to do on occasion. Please check
out these sponsors in the description to get a discount and
to support the podcast. As a side note, let me say that this is the second time
I recorded a conversation outdoors.
The first one was Stephen Wolfram,
when it was actually sunny out.
In this case, it was raining,
which is why I found a covered outdoor patio.
But I learned a valuable lesson,
which is that raindrops can be quite loud
on the hard metal surface of a patio cover.
I did my best with the audio, I hope it still sounds okay to you.
I'm learning, always improving. In fact, as Scott says, if you always win, then you're probably
doing something wrong.
To be honest, I get pretty upset with myself when I fail, small or big.
But I've learned that this feeling is priceless.
It can be fuel when channeled into concrete plans of how to improve.
So if you enjoyed this thing, subscribe on YouTube, review it, 5 stars and apple podcasts,
follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman.
As usual, I'll do a few minutes of ads now and no ads in the middle.
I try to make these interesting, but I give you time stamps, so if you skip, please still
check out the sponsors by clicking the links in the description.
It's the very best way to support this podcast.
This show is sponsored by SimpliSafe, a home security company.
There are no tricky overpriced contracts.
The customer service is amazing.
They told me to say the following line, and I shall oblige even though it's ridiculous.
While there are a lot of options out there, there's only one no-brainer, SimpliSafe.
I personally have no clue about the actual options out there, but this one happens to be great.
It's simple, no contracts, 15 bucks a month, easy setup, I have it already set up in my
apartment, but of course I'm also prepared for intruders.
One of my favorite movies is Leon, or The Professional, which is a movie about a hitman
with a minimalist life that resembles my own.
Anyway, go to simplisafe.com slash lex to get a free HD camera. Again, that's
simplisafe.com slash lex.
They're a new sponsor, and this is a trial run, so you, my dear listeners, know what to do.
This show is also sponsored by Eight Sleep and its Pod Pro mattress
that you can check out at eightsleep.com slash lex to get $200 off. It controls temperature with an app,
it's packed with sensors and it can cool down to as low as 55 degrees on each side of the bed separately
and it totally has been a game changer for me. I have air conditioners and heat, but even then it's hard to get the temperature right.
Like when I'm fasting, I'm usually cold; when I'm fed and stressed, I'm usually hot.
Eight Sleep allows me to adjust to that for perfect sleep.
A cool bed surface with a warm blanket after a long day of focus work is an amazing feeling.
They can track a bunch of metrics like heart rate variability, but cooling alone is honestly
worth the money.
Anyway, go to eightsleep.com slash lex to get $200 off.
This show is also sponsored by ExpressVPN.
It provides privacy in your cyber life.
Without a VPN, your internet service provider can see every site you've ever visited,
even when you're browsing in incognito mode, even if you clear your history.
In the United States, they can legally sell your data to ad companies.
ExpressVPN prevents them from being able to do all that.
I've used it for many years on Windows, Linux, and Android, but it's available everywhere
else too. It's fast and easy to use. Go to ExpressVPN.com slash Lex pod to get an extra three months
free on a one year package. That's ExpressVPN.com slash lexpod. Finally, this show is sponsored by BetterHelp,
spelled H-E-L-P, help. They figure out what you need and match you with a licensed
professional therapist in under 48 hours. I chat with a person on there and enjoy it. Of course,
I also regularly talk to David Goggins these days who is definitely not a licensed professional
therapist, but he does help me meet his and my demons and become comfortable to exist
in their presence.
Everyone is different, but for me, I think suffering is essential for creation, but you
can suffer beautifully in a way that doesn't destroy you.
Therapy can help in whatever form that therapy takes, and BetterHelp, I think, is an option
worth trying.
They're easy, private, affordable, and available worldwide.
You can communicate by text anytime and schedule weekly audio and video sessions.
Check it out at BetterHelp.com slash Lex to get 10% off.
That's BetterHelp.com slash Lex.
And now, here's my conversation with Scott Aaronson.
Let's start with the most absurd question. But I've read you write some fascinating stuff about it, so let's go there.
Are we living in a simulation?
What difference does it make, Lex?
I mean, I'm serious.
What difference?
Because if we are living in a simulation, it raises the question of how real something has to be, in a simulation,
for it to be sufficiently immersive for us humans? But I mean, even in principle, how could we ever
know if we were in one, right? A perfect simulation by definition is something that's indistinguishable
from the real thing. But we didn't say anything about perfect. It could be... No, no, that's
right. Well, if it was an imperfect simulation, if we could hack it, find a bug in it, then that
would be one thing, right?
If this was like the matrix, and there was a way for me to do flying kung fu moves or
something by hacking the simulation, well, then we would have to cross that bridge when
we came to it, wouldn't we?
Right?
I mean, at that point, it's hard to see the difference between that and just what
people would ordinarily refer to as a world with miracles, you know?
What about from a different perspective, thinking about the universe as a computation, like
a program running on a computer?
That's kind of a neighboring concept.
It is.
It is an interesting and reasonably well-defined question to ask, is the world computable?
Does the world satisfy what we would call, let's say, the Church-Turing thesis? That
is, could we take any physical system and simulate it to any desired precision by a Turing machine,
given the appropriate input data?
So far, I think the indications are pretty strong
that our world does seem to satisfy the Church-Turing thesis.
At least if it doesn't,
then we haven't yet discovered why not.
But now, does that mean that our universe is a simulation?
Well, that word seems to suggest that there is some other larger universe in which
it is running.
Right.
And the problem there is that if the simulation is perfect, then we're never going to be
able to get any direct evidence about that other universe.
You know, we will only be able to see the effects of the computation that is running in this universe.
Well, let's imagine an analogy.
Let's imagine a PC, a personal computer, a computer.
Is it possible with the advent of artificial intelligence
for the computer to look outside of itself
to see, to understand its creator?
I mean, is that a ridiculous connection?
Well, I mean, with the computers that we actually have, I mean, first of all, we all know
that humans have done an imperfect job of enforcing the abstraction boundaries of computers,
right?
Like, you may try to confine some program to a play pen,
but you know as soon as there's one memory allocation error in the C program,
then the program has gotten out of that play pen and it can do whatever it wants.
Right? This is how most hacks work, you know, the viruses and worms and exploits.
And you know, you would have to imagine that an AI would be able
to discover something like that.
Now, of course, if we could actually discover
some exploit of reality itself, then in some sense,
we wouldn't have to philosophize about this.
This would no longer be a metaphysical conversation.
But that's the question is what would that hack look like? Yeah, well, I have no idea. I mean,
Peter Shor, you know, that very famous person in quantum computing, of course, has a joke
that maybe the reason why we haven't yet integrated general relativity and quantum mechanics is that the part of the universe that depends on both of them was actually left unspecified.
And if we ever tried to do an experiment involving the singularity of a black hole or something like that,
then the universe would just generate an overflow error or something. Yeah, we would just crash the universe.
Now, the universe has seemed to hold up pretty well for 14 billion years.
So my Occam's razor kind of guess has to be that it will continue to hold up, you know, that the fact that we don't
know the laws of physics governing some phenomenon is not a strong sign that probing that phenomenon
is going to crash the universe, right? But, you know, of course, I could be wrong.
But do you think on the physics side of things, you know, there's been recently a few folks, Eric Weinstein and Stephen Wolfram,
that came out with a theory of everything. I think there's a history of physicists
dreaming about and working on the unification of all the laws of physics. Do you think it's
possible that once we understand more physics, not necessarily the unification of the laws,
but just understand physics more deeply at the fundamental level
would be able to start, you know, I mean, part of this is humorous, but looking to see if there's any bugs in the universe
that could be exploited for, you know, traveling at not just the speed of light, but just traveling faster than our current spaceships can travel, all that kind of stuff. Well, I mean, to travel faster than our current spaceships can travel,
you wouldn't need to find any bug in the universe, right? The known laws of physics,
you know, let us go much faster, up to the speed of light, right?
And you know when people want to go faster than the speed of light
Well, we actually know something about what that would entail, namely that, you know, according to relativity, that seems to
entail communication backwards in time. Okay. So then you have to worry about closed time
like curves and all of that stuff. So, you know, in some sense, we, we sort of know the
price that you have to pay for these things, right?
But, you know, we're still within the realm of physics. That's right, that's right.
We can't say that they're impossible,
but we know that sort of a lot else in physics breaks, right?
So now regarding Eric Weinstein and Stephen Wolfram,
like I wouldn't say that either of them
has a theory of everything.
I would say that they have ideas that they hope, you know, could someday lead to a theory of everything.
Is that a worthy pursuit?
Well, I mean, certainly, let's say by theory of everything, you know, we don't literally
mean a theory of cats and of baseball, and, you know, but we just mean it in the more limited
sense of everything, a fundamental theory of physics, of all of the fundamental
interactions of physics.
Of course, such a theory, even after we had it, would leave the entire question of all
the emergent behavior to be explored.
So it's only everything for a specific definition of everything.
But in that sense, I would say, of course, that's worth pursuing.
I mean, that is the entire program of fundamental physics, right?
All of my friends who do quantum gravity, who do string theory,
who do anything like that, that is what's motivating them.
Yeah, it's funny though, and Eric Weinstein talks about this,
I don't know much about the physics world, but I know about the AI world.
And it is a little bit taboo to talk about AGI, for example, on the AI side.
So really, to talk about the big dream of the community is, I would say, almost taboo, because it seems so far away.
It's taboo to bring it up, because it's associated with the kind of people that dream about
creating a truly superhuman-level intelligence.
That seems really far out there to people, because we're not even close to that.
It feels like the same thing is true for the physics community.
I mean, Stephen Hawking certainly talked constantly about a theory of everything, right? I mean, people used those terms who were some of the most respected people in the whole world of physics, right?
But I think that the distinction that I would make is that people might react badly if you use the term in a way that suggests that you, you know, thinking about it for five minutes,
have come up with this major new insight about it. Right? It's difficult. Stephen Hawking is not a great
example because I think you can do whatever the heck you want when you get to that level.
And I certainly see like senior faculty, you know, that, you know, at that point,
that's one of the nice things about getting older.
You just stop giving a damn.
But the community as a whole,
they tend to roll their eyes very quickly
at stuff that's outside the quote-unquote mainstream.
Well, let me put it this way.
I mean, if you ask, you know, Ed Witten,
let's say, who is, you know,
you might consider a leader of the string community, and thus very, very
mainstream in a certain sense. But he would have no hesitation in saying, of course, they're
looking for a unified description of nature, of general relativity, of quantum mechanics,
of all the fundamental interactions of nature. Now, whether people would call that a theory of everything, whether they
would use that term, that might vary. You know, Lenny Susskind would definitely have no problem
telling you that, you know, that that's what we want, right? For me, who loves human beings
in psychology, it's kind of ridiculous to say a theory that unifies the laws of physics
gets you to understand everything. I would say you're not even close to understanding everything.
Yeah, right. Well, yeah, the word everything is a little ambiguous here, right? Because,
you know, and then people will get into debates about, you know, reductionism versus
emergentism and blah, blah, blah. And so in not wanting to say theory of everything,
people might just be trying to short circuit that debate
and say, you know, look, you know, yes,
we want a fundamental theory of, you know,
the particles and interactions of nature.
Let me bring up the next topic that people don't want
to mention, although they're getting more comfortable
with it is consciousness.
You mentioned that you have a talk on consciousness that I watched five minutes of, but the
internet connection was really bad.
Was this my talk about, you know, refuting the integrated information theory, which is
this particular account of consciousness that, yeah, I think one can just show it doesn't
work.
So let me... it's much harder to say what does work.
What does work?
Yeah, yeah.
Let me ask, maybe it'd be nice to comment on, you talk about also like the semi-hard problem of consciousness,
or almost-hard problem, or kind-of-hard,
pretty hard problem, I think you call it.
So maybe can you talk about that,
their idea of the approach to modeling consciousness
and why you don't find it convincing?
What is it, first of all?
Okay, well, so what I call the pretty hard problem
of consciousness, this is my term,
although many other people have said something
equivalent to this, okay?
But it's just the problem of giving an account
of just which physical systems are conscious and which are not.
Or if there are degrees of consciousness, then quantifying how conscious a given system
is.
Awesome.
So that's the pretty hard problem.
Yeah, that's what I mean.
That's it.
I'm adopting it.
I love it.
It's got a good ring to it.
And so, the infamous hard problem of consciousness is to explain how something like consciousness
could arise at all in a material universe.
Or why does it ever feel like anything
to experience anything?
Right.
And so I'm trying to distinguish from that problem.
And say, no, OK, I would merely settle for an account
that could say, is a fetus conscious,
you know, if so at which trimester, you know, is a, is a dog conscious, you know, what about
a frog, right?
Or even as a precondition, you take that both these things are conscious.
Tell me which is more conscious.
Yeah, for example, yes.
Yes.
Yeah, I mean, if consciousness is some multi-dimensional vector,
we'll just tell me in which respects
these things are conscious and in which respect they aren't.
And have some principled way to do it, where you're not
carving out exceptions for things that you like or don't like,
but could somehow take a description
of an arbitrary physical system and then just
based on the physical properties of that system,
or the informational properties, or how it's connected, or something like that, just
in principle, calculate, you know, it's the degree of consciousness.
Right?
I mean, this would be the kind of thing that we would need, you know, if we wanted to address
questions like, you know, what does it take for a machine to be conscious?
When should we regard AI as being conscious?
So now this IIT, this integrated information theory, which has been put forward by Giulio Tononi and a bunch of his collaborators over the last decade or two. This is noteworthy,
I guess, as a direct attempt to answer that question, to address the pretty hard problem.
Right. And they give a criterion that's just based on how a system is connected.
So it's up to you to sort of abstract a system
like a brain or a microchip
as a collection of components
that are connected to each other
by some pattern of connections.
And to specify how the components can influence each other.
Like where the inputs go,
where they affect the outputs.
But then once you've specified that, then they give this quantity that they call phi,
you know, the Greek letter phi (Φ).
And the definition of phi has actually changed over time.
It changes from one paper to another, but in all of the variations, it involves something
about what we in computer science would call graph expansion.
So basically what this means is that they want that, in order to get a large value of phi,
it should not be possible to take your system and partition it into two components that
are only weakly connected to each other.
So whenever we take our system and sort of try to split it up into two,
then there should be lots and lots of connections going between the two components. Okay, well,
I understand what that means on a graph. Do they formalize how to construct such a graph or
data structure, whatever? Or is this one of the criticisms I've heard you kind of make, that
a lot of the very interesting specifics
are usually communicated through like natural language,
like through words.
So it's like the details, I don't know,
well, it's true.
I mean, they have nothing even resembling
a derivation of this phi.
Okay, so what they do is they state
a whole bunch of postulates, you know, axioms
that they think that consciousness should satisfy. And then there's some verbal discussion,
and then at some point phi appears. Right. And this was the first thing that
really made the hair stand up on my neck, to be honest, because they are acting as if there's
a derivation. They're acting as if you're supposed to think
that this is a derivation,
and there's nothing even remotely resembling a derivation.
They just pull the phi out of a hat completely.
Is one of the key criticisms to you
is that details are missing or is there something
more fun than that?
That's not even the key criticism.
That's just a side point.
Okay.
The core of it is that I think that they want to say that a system is more
conscious, the larger its value of phi. I think that that is obvious nonsense. As soon
as you think about it for a minute, as soon as you think about it in terms of, could I
construct a system that had an enormous value of phi, even larger than the brain has, but that is just implementing
an error correcting code, doing nothing that we would associate with intelligence or consciousness
or any of it, the answer is yes, it is easy to do that.
So I wrote blog posts just making this point that, yeah, it's easy to do that.
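[Editor's note: a rough, illustrative Python sketch of the kind of counterexample being described. This is not Tononi's actual phi computation; it is only a crude cut-based proxy for the "no weak bipartition" intuition, applied to a plain grid of XOR (parity) gates, the sort of uniform circuit that computes nothing anyone would call intelligent.]

```python
# A toy, illustrative proxy only -- NOT Tononi's phi. It builds the wiring
# graph of an n-by-n grid of XOR (parity) gates and estimates how many wires
# any roughly-equal bipartition has to cut: the "no weak partition" intuition.
import random

def build_xor_grid(n):
    """Edges of an n-by-n grid; each cell is a parity (XOR) gate wired to its neighbors."""
    edges = []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.append(((r, c), (r, c + 1)))
            if r + 1 < n:
                edges.append(((r, c), (r + 1, c)))
    return edges

def smallest_observed_cut(edges, nodes, trials=5000):
    """Minimum number of crossing edges over random equal bipartitions (crude proxy)."""
    nodes = list(nodes)
    best = None
    for _ in range(trials):
        random.shuffle(nodes)
        half = set(nodes[: len(nodes) // 2])
        cut = sum((a in half) != (b in half) for a, b in edges)
        best = cut if best is None else min(best, cut)
    return best

for n in (4, 8):
    edges = build_xor_grid(n)
    nodes = {(r, c) for r in range(n) for c in range(n)}
    print(n, "x", n, "grid:", smallest_observed_cut(edges, nodes), "crossing wires (sampled)")
```

(The actual argument, spelled out in Scott's blog posts, is that circuits implementing simple error-correcting codes can be wired so that every bipartition cuts many connections, driving integration-style measures arbitrarily high without anything brain-like going on.)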
Now, Tononi's response to that was actually
kind of incredible, right? I mean, I admired it in a way because instead of disputing any of it,
he just bit the bullet, you know, in one of the most audacious instances of bullet-biting
I've ever seen in my career, okay? He said, okay, then fine. This system that just applies this error-correcting code,
it's conscious. If it has a much larger value of phi than you or me, it's much more conscious
than you or me. We just have to accept what the theory says, because science is not
about confirming our intuitions. It's about challenging them. And this is what my theory predicts,
that this thing is conscious,
and is super-duper conscious,
and how are you gonna prove me wrong?
See, the way I would argue against your blog post,
is I would say, yes, sure, you're right in general,
but for naturally arising systems developed
through the process of evolution on Earth,
this rule of the larger phi being associated with more consciousness is correct.
So that's not what he said at all. Right. Right.
Because he wants this to be completely general. Right.
So it can apply to, you know, computers. Yeah. I mean, the whole interest of the theory is, you know, the hope that it could be completely general, apply to aliens, to computers, to animals,
coma patients, to any of it, right?
And so he just said, well,
Scott is relying on his intuition,
but I'm relying on this theory.
And to me, it was almost like,
are we being serious here?
Like, okay, yes, in science, we try to learn highly unintuitive things.
But what we do is we first test a theory on cases where we already know the answer, right?
Like, if someone had a new theory of temperature, right, then, you know, maybe we could check
that it says that boiling water is hotter than ice. And then if it says that the sun is hotter than anything
you've ever experienced, then maybe we trust that extrapolation, right? But this
theory, you know, it's now saying that a gigantic regular grid of exclusive-OR gates can be way more conscious
than a person or than any animal can be, even if it is so uniform, that it might as well
just be a blank wall.
And so now the point is, if this theory is getting wrong the question of whether a blank wall
is more conscious than a person, then I would say, what is there left for it to get right?
So your sense is a blank wall is not more conscious than a human being. Yeah, I mean, I mean,
you could say that I am taking that as one of my axioms. I'm saying that if a theory of consciousness is getting that
wrong, then whatever it is talking about, at that point, I'm not going to call it consciousness.
Well, you'd have to use a different word. You have to use a different word. I mean, it's
possible, just like with intelligence, that us humans conveniently define these very difficult
to understand concepts in a very human-centric way.
That's right.
Just like the Turing test really seems to define intelligence as a thing that's human-like.
Right, but I would say that with any concept, you know, there's, you know, we first need
to define it, right?
And a definition is only a good definition if it matches what we thought we were talking about,
prior to having a definition, right?
And I would say that, you know,
phi as a definition of consciousness fails that test.
That is my argument.
So, okay, so let's take a further step.
So you mentioned that the universe might be
a Turing machine, so like it might be a computation.
Or simulatable by one, anyway, simulatable by one.
So, yeah, what's your sense about consciousness? Do you think
consciousness is computation that we don't need to go to any
place outside of the computable universe to, you know, to
what to understand consciousness, to build consciousness, to measure consciousness,
all those kinds of things.
I don't know.
These are what have been called the vertiginous questions.
There's the questions like, you get a feeling of vertigo when thinking about them.
I certainly feel like I am conscious in a way that is not
reducible to computation, but why should you believe me? Right? I mean, and if you said
the same to me, then why should I believe you? But as computer scientists, I feel like
a computer could achieve human-level intelligence. But that's actually a feeling and a hope.
That's not a scientific belief.
It's just we've built up enough intuition.
The same kind of intuition you use in your blog,
it's, you know, that's what scientists do.
They, I mean, some of it is a scientific method,
but some of it is just damn good intuition.
I don't have a good intuition about consciousness.
Yeah.
I'm not sure that anyone does or has in the, you know, 2500 years that these things have
been discussed.
But do you think we will?
Like, one of the... I got a chance to attend, I can't wait to hear your opinion on this,
but attend the Neuralink event.
And one of the dreams there is to, you know, basically push neuroscience forward.
And the hope in neuroscience is that we can inspect
the machinery from which all this fun stuff emerges
and see if we're gonna notice something special,
some special sauce from which something like consciousness
or cognition emerges.
Yeah, well, it's clear that we've learned
an enormous amount about neuroscience.
We've learned an enormous amount about computation,
about machine learning, about AI, how to get it to work. We've learned an enormous amount about
the underpinnings of the physical world. From one point of view, that's an enormous distance
that we've traveled along the road to understanding consciousness. From another point of
view, you know, the distance still to be traveled on the road, you know, maybe seems no shorter than
it was at the beginning. Right. So it's very hard to say. I mean, you know, these are questions...
like, in sort of trying to have a theory of consciousness, there's sort of a problem where
it feels like it's not just that we don't know how to make progress, it's that it's hard to specify what could even count as progress, right?
Because no matter what scientific theory someone proposed, someone else could come along
and say, well, you've just talked about the mechanism.
You haven't said anything about what breathes fire into the mechanism, what really makes
there's something that it's like to be it.
And that seems like an objection that you could always raise,
no matter, you know, how much someone elucidated the details of how the brain
works. Okay, let's go to the touring test and a lot of the price.
I have this intuition, call me crazy, but we that a machine to pass the
touring test and it's full. Whatever the spirit of it is, we can talk about
how to formulate the perfect touring test,
that that machine has to be conscious.
Or at least it has to...
I have a very low bar of what consciousness is.
I tend to think that the emulation of consciousness
is as good as consciousness.
So consciousness is just a dance, a social shortcut, like a nice useful tool, but I tend to connect intelligence and consciousness
together. So maybe just to ask, what role do you think consciousness
plays in passing the Turing test?
Well, look, I mean, it's almost tautologically true that if we had a machine that passed
the Turing test, then it would be emulating consciousness, right?
So if your position is that, you know, emulation of consciousness is consciousness, then, you
know, by definition, any machine that passed the Turing test would be conscious.
But, I mean, you could say that, you know,
that that is just a way to rephrase the original question, you know, is an emulation of consciousness,
you know, necessarily conscious, right? And you can, you know, I hear I'm not saying anything
new that hasn't been debated ad nauseam in the literature, okay? But, you know, you could
imagine some very hard cases, like imagine a machine that passed the Turing test,
but it did so just by an enormous cosmological-sized lookup table that just cached every possible
conversation that could be had. The old Chinese room. Well, yeah, but this is,
I mean, the Chinese room actually would be doing some computation, at least in Searle's version,
right? Here, I'm just talking about a table lookup, okay?
Now, it's true that for conversations of a reasonable length, this, you know, lookup
table would be so enormous that it wouldn't even fit in the observable universe, okay?
But supposing that you could build a big enough lookup table and then just, you know,
pass the Turing test just by looking up what the person said...
...a system, a black box, that's full of mystery.
So, like, full of mystery to whom?
To human inspectors.
So does that mean that consciousness is relative to the observer? Like could
something be conscious for us, but not conscious for an alien that understood better what was
happening inside the black box? Yes. So that if inside the black box is just a lookup
table, the alien that saw that would say this is not conscious.
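[Editor's note: a minimal sketch, in Python, of the lookup-table "chatbot" in the thought experiment above. The table contents and the prefix-keying scheme here are hypothetical; the point is only that the mechanism is a single dictionary lookup, with nothing anyone would call reasoning.]

```python
# Toy lookup-table "chatbot": the entire conversation so far is the key, and the
# reply is read straight out of a (in the thought experiment, cosmologically
# large) table. No reasoning happens anywhere -- just retrieval.
LOOKUP_TABLE = {
    ("Hello.",): "Hi there! How are you today?",
    ("Hello.", "Is Mount Everest bigger than a shoebox?"): "Yes, by quite a lot.",
    # ...in the thought experiment, every possible conversation prefix appears here.
}

def reply(conversation_so_far):
    """Return the canned reply for this exact conversation prefix, if the table has it."""
    return LOOKUP_TABLE.get(tuple(conversation_so_far), "I don't have that one cached.")

print(reply(["Hello."]))
print(reply(["Hello.", "Is Mount Everest bigger than a shoebox?"]))
```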
To us, another way to phrase the black box is layers of abstraction, which make it very difficult to see the actual underlying
functionality of the system. And then we observe just the abstraction. And so it looks like magic to us.
But once we understand the inner machinery, it stops being magic. And so like, that's
a prerequisite, is that you can't know how it works, some part of it, because then there
has to be, in our human mind, an entry point for the magic. So that's sort of a definition of
the system. Yeah, well, look, I mean, I explored a view in this essay I wrote called The Ghost in the Quantum Turing
Machine seven years ago that is related to that, except that I did not want to have consciousness
be relative to the observer, right?
Because I think that, you know, if consciousness means anything, it is something that is experienced
by the entity that is conscious, right?
You know, like, I don't need you to tell me that I'm conscious, right?
Nor do you need me to tell you that you are, right?
So, but basically, what I explored there is, you know,
are there aspects of a system like a brain
that just could not be predicted, even with arbitrarily advanced future technologies,
because of chaos combined
with quantum mechanical uncertainty,
you know, things like that.
I mean, that actually could be a property of the brain,
you know, if true, that would distinguish it
in a principled way,
at least from any currently existing computer,
not from any possible computer.
But from, yeah, yeah.
This is a thought experiment.
So, yeah.
If I gave you information that, for the entire history of your life, basically explained
away free will with a lookup table, said that this was all predetermined, that everything
you experienced had already been predetermined, wouldn't that take away your consciousness?
Wouldn't your own experience of the world change for you in a way that you can't
take back?
Well, let me put it this way.
If you could do like in a Greek tragedy where you would just write down a prediction for
what I'm going to do and then maybe you put the prediction in a sealed box and maybe,
you know, you open it later and you show that you knew everything I was going to do. Or,
you know, of course, the even creepier version would be, you tell me the prediction. And then
I try to falsify it. My very effort to falsify it makes it come true. Right. Let's, let's,
you know, let's even forget that, you know, that version as convenient as it is for fiction writers.
Let's just do the version where you put the prediction into a sealed envelope.
If you could reliably predict everything that I was going to do, I'm not sure that that
would destroy my sense of being conscious, but I think it really would destroy my sense
of having free will. And much, much more than any philosophical conversation could possibly do that.
I think it becomes extremely interesting to ask, could such predictions be done,
even in principle, is it consistent with the laws of physics to make such predictions,
to get enough data about someone that you could
actually generate such predictions without having to kill them in the process.
So, you know, slice their brain up into little slivers or something.
I mean, theoretically possible, right?
Well, I don't know.
I mean, it might be possible, but only at the cost of destroying the person, right?
I mean, it depends on how low you have to go in sort of the substrate.
Like if there was a nice digital abstraction layer, if you could think of each neuron as
a kind of transistor computing a digital function, then you could imagine some nano robots
that would go in and we just scan the state of each transistor, you know, of each neuron
and then, you know, make a good enough copy.
But if it was actually important to get down to the molecular or the atomic level, then
eventually you would be up against quantum effects.
You would be up against the unclonability of quantum states.
So I think it's a question of how good does the replica have to be before you're going to count it as actually a copy of you
or as being able to predict your actions? That's a totally open question.
Yeah, yeah, yeah. And especially once we say that, well, look, maybe there's no way to make
a deterministic prediction, because we know that there's noise buffeting the brain around, presumably even quantum mechanical
uncertainty affecting the sodium ion channels, for example, whether they open or they close.
There's no reason why, over a certain timescale, that shouldn't be amplified,
just like we imagine happens with the weather or with any other chaotic system.
If that stuff is important, then we would say, well, you're never going to be able to make an
accurate enough copy. But now the hard part is, well, what if someone can make a copy that's sort of no one else can tell apart from you, right?
It says the same kinds of things that you would have said,
maybe not exactly the same things
because we agree that there's noise,
but it says the same kinds of things.
And maybe you alone would say, no,
I know that that's not me.
It doesn't share my, I haven't felt my consciousness
leap over to that other thing. I still feel it localized in this version, right? Then
why should anyone else believe you? What are your thoughts? I'd be curious, you're
really the person to ask about this, which is Roger Penrose's work on consciousness, saying
that, you know, with axons and so on, there might be some biological places where quantum mechanics can come into play, and through that creates consciousness somehow.
Yeah. Okay. Well, I'm familiar with this work, of course. You know, I read Penrose's books as a teenager. They had a huge impact on me. Five or six years ago, I had the privilege to actually talk these things over with Penrose, you know, at some length at a conference in Minnesota.
And you know, he is, you know, an amazing personality.
I admire the fact that he was even raising such audacious questions at all.
But you know, to answer your question, I think the first thing we need to get clear on is
that he is not merely saying that quantum
mechanics is relevant to consciousness. That would be like, that would be tame compared to what he
is saying. He is saying that even quantum mechanics is not good enough, because supposing,
for example, that the brain were a quantum computer, you know, that's still a computer. In fact, a quantum computer can be simulated by an ordinary computer.
It might merely need exponentially more time in order to do so.
That's simply not good enough for him.
What he wants is for the brain to be a quantum gravitational computer.
He wants the brain to be exploiting as yet unknown laws of quantum gravity,
which would be uncomputable.
That's the key point.
Yes, that would be literally uncomputable, and I've asked him to clarify this,
but even if you had an oracle for the halting problem,
or, you know, as high
up as you want to go in the usual hierarchy of uncomputability, he wants to go
beyond all of that. Okay, so, so, you know, just to be clear, like, you know, if we're keeping count
of how many speculations, you know, there's probably like at least five or six of them, right?
There's first of all that there is some quantum gravity theory that would
involve this kind of uncomputability, right?
Most people who study quantum gravity would not agree with that.
They would say that what we've learned, you know, what little we know about
quantum gravity from the AdS/CFT correspondence, for example,
has been very much consistent with the broad idea of nature
being computable, right?
But supposing that he's right about that, then what most physicists would say is that
whatever new phenomena there are in quantum gravity, they might be relevant at the singularities
of black holes.
They might be relevant at the big bang.
They are plainly not relevant to something like the brain,
you know, that is operating at ordinary temperatures,
you know, with ordinary chemistry.
And, you know, the fundamental physics underlying the brain, they would say, we've pretty much completely known
for generations now, because quantum field theory lets us parameterize our ignorance.
I mean, Sean Carroll has made this case in great detail, that sort of whatever new effects are coming from quantum gravity,
they are sort of screened off by quantum field theory.
And this brings us to the whole idea of effective theories.
But we have in the standard model of elementary particles, we have a quantum field theory
that seems totally adequate for all of the terrestrial phenomena.
Right, the only things that it doesn't, you know, explain are,
well, first of all, you know, the details of gravity
if you were to probe it at, you know,
extremes of curvature or at incredibly small distances;
it doesn't explain dark matter;
it doesn't explain black hole singularities, right? But these are all very exotic things, very, you know, far removed
from our life on Earth. Right? So for Penrose to be right, he needs, you know, these phenomena
to somehow affect the brain. He needs the brain to contain antennae that are sensitive to
this as yet unknown physics, right?
And then he needs a modification of quantum mechanics.
Okay, so he needs quantum mechanics to actually be wrong.
Okay, what he wants is what he calls
an objective reduction mechanism or an objective collapse.
So this is the idea that once quantum states get large
enough, then they somehow spontaneously collapse. And this is an idea that lots of people have
explored. There's something called the GRW proposal that tries to say something along those
lines. These are theories that actually make testable predictions,
which is a nice feature that they have.
But the very fact that they're testable
may mean that in the coming decades,
we may well be able to test these theories
and show that they're wrong.
We may be able to test some of Penrose's ideas.
If not his ideas about consciousness, then at least his ideas about an objective collapse
of quantum states.
And people, like Dirk Bouwmeester, have actually been working to try to do these
experiments.
They haven't been able to do it yet to test Penrose's proposal.
But Penrose would need more than just an objective collapse of quantum states, which
would already be the biggest development in physics for a century since quantum mechanics
itself.
He would need for consciousness to somehow be able to influence the direction of the
collapse so that it wouldn't be completely random, but that your dispositions would somehow
influence the quantum state to
collapse more likely this way or that way.
Okay.
Finally, Penrose, you know, says that all of this has to be true because of an argument that
he makes based on Gödel's incompleteness theorem.
Okay.
Now, look, I would say the overwhelming majority of computer scientists and mathematicians
who have thought about this don't think that Gödel's incompleteness theorem can do what
he needs it to do here, right?
I don't think that that argument is sound, okay?
But that is sort of the tower that you have to ascend to if you're going to go where
Penrose is going.
And the intuition he uses with the incompleteness theorem is basically that there's important stuff that's not computable?
It's not just that, because, I mean, everyone agrees that there are problems that are uncomputable, right? That's a mathematical theorem.
Yeah, right. But what Penrose wants to say is that,
you know, for example, there are statements...
given any formal system for doing math, right,
there will be true statements of arithmetic that that formal system, if it's adequate for
math at all, if it's consistent and so on, will not be able to prove.
A famous example being the statement that that system itself is consistent.
No good formal system can actually prove its own consistency.
That can only be done from a stronger formal system, which then can't prove its own consistency, and so on forever.
That's Gödel's theorem.
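[Editor's note: the statement Scott is paraphrasing is Gödel's second incompleteness theorem. Written compactly:]

```latex
% For any consistent, recursively axiomatizable theory F that interprets
% enough arithmetic (e.g., Peano Arithmetic):
\[
  F \nvdash \mathrm{Con}(F)
\]
% i.e., F cannot prove the arithmetized statement of its own consistency;
% a strictly stronger theory can prove Con(F), but then it cannot prove
% its own consistency statement, and so on up the hierarchy.
```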
But now, why is that relevant to consciousness? Right?
Well, you know, I mean, the idea that it might have something to do with consciousness
is an old one. Gödel himself apparently thought that it did.
Oh, really?
It doesn't.
You know, Lucas
thought so, I think in the 60s.
And Penrose is really just, you know, sort of updating
what Lucas and others had said.
I mean, the idea that Gödel's theorem
could have something to do with consciousness was,
you know... in 1950, when Alan Turing wrote his article
about the Turing test, he already, you know,
was writing about that as like an old and well-known idea,
and as a wrong one
that he wanted to dispense with.
Right.
Okay, but the basic problem with this idea is, you know, Penrose wants to say that, and all
of his predecessors, you know, want to say, that, you know, even though, you know, this
given formal system cannot prove its own consistency, we as humans, sort of looking at it from the outside can just somehow see its consistency.
And the rejoinder to that, from the very beginning has been, well, can we really?
Yeah, I mean, maybe Penrose can, but can the rest of us?
Right? And, you know, I mean, it is perfectly plausible to imagine a computer that would not be limited to working within
a single formal system, right? It could say, I am now going to adopt the hypothesis that
my formal system is consistent, right? And I'm now gonna see what can be done from that stronger vantage point and so on.
And yeah, then I'm going to add new axioms to my system.
Totally plausible.
Gödel's theorem has absolutely nothing to say
against an AI
that could repeatedly add new axioms.
All it says is that there is no absolute guarantee
that when the AI adds new axioms that it will always be right.
Okay, and that's of course the point that Penrose pounces on, but the reply is obvious.
And it's one that Alan Turing made 70 years ago.
We don't have an absolute guarantee that we're right when we add a new axiom.
We never have, and plausibly we never will.
So on Alan Turing, you took part in the Loebner Prize?
I'm not really, you know what I mean?
I didn't. I mean, there was this kind of ridiculous claim
that was made almost a decade ago about a chatbot
called Eugene Goostman.
I guess you didn't participate as a judge in the Loebner Prize.
I didn't. But you participated as a judge in that.
I guess it was an exhibition event or something like that,
or was Eugene the...
No, Eugene Goostman, that was just me writing a blog post
because some journalists called me to ask about it.
Did you ever chat with him?
I thought so.
I did chat with Eugene Goostman.
I mean, it was available on the web.
The chat.
Oh, interesting.
So yeah, so all that happened was that a bunch of journalists started writing breathless articles
about, you know, the first chatbot that passes the Turing test.
Right.
And it was this thing called Eugene Goostman that was supposed to simulate a 13-year-old boy.
And apparently someone had done some test where people were, you know, less than
perfect, let's say, at distinguishing it from a human.
And they said, well, if you look at Turing's paper and you look at, you know, the percentages
that he talked about, then, you know, it seemed like we're past that threshold, right?
And, you know, I had a sort of, you know, different way to look at it: instead of the
legalistic way, let's just try the actual thing out. And let's see what it can do with
questions like, you know, is Mount Everest bigger than a shoebox? Okay. Or just, you know, like the
most obvious questions, right? And then, you know, the answer is, well, it just kind of
parries you because it doesn't know what you're talking about, right?
So just to clarify exactly in which way they're obvious: they're obvious in the sense that you convert the sentences into the meaning of the objects they represent, and then do some basic,
you know, common sense reasoning with the objects that the sentences represent. Right, right, it was not able to answer,
or even intelligently respond
to basic common sense questions.
But let me say something stronger than that.
There was a famous chatbot in the 60s called Eliza,
that managed to actually fool a lot of people,
where people would pour their hearts out into this Eliza
because it simulated a therapist, right?
And most of what it would do
was it would just throw back at you whatever you said, right?
And this turned out to be incredibly effective, right?
It may be, you know, therapists know this.
This is, you know, one of their tricks.
But it really had some people convinced.
But this thing was just like,
I think it was literally just a few hundred lines
of Lisp code, right?
It was not only was it not intelligent,
it wasn't especially sophisticated.
It was a simple little hobbyist program.
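[Editor's note: a minimal sketch, in Python, of the reflection trick Eliza-style programs use: swap pronouns in what the user said and throw it back as a question. This is a toy reconstruction for illustration, not Weizenbaum's actual code.]

```python
# Toy Eliza-style "therapist": reflect the user's statement back as a question.
# A handful of pronoun swaps plus a template is enough to feel eerily engaged.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

def reflect(statement):
    """Swap first/second-person words so the statement points back at the user."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement):
    return f"Why do you say that {reflect(statement)}?"

print(eliza_reply("I am unhappy with my work."))
# -> Why do you say that you are unhappy with your work?
```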
And Eugene Goostman, from what I could see,
was not a significant advance compared to
Eliza, right? So that was really the point I was making. And, you know,
in some sense you didn't need a computer science professor to sort of say this;
anyone who was looking at it and who just had an ounce of sense could have said the
same thing, right?
But because these journalists were calling me, like the first thing I said was, well, no,
I'm a quantum computing person.
I'm not an AI person.
You shouldn't ask me.
But then they said, look, you can go here and you can try it out.
I said, all right, all right, so I'll try it out.
But now, you know, this whole discussion, I mean, it got a whole lot more interesting
in just the last few months.
Yeah, I'd love to hear your thoughts about GPT-3,
and in the last few months,
we've had, you know...
the world has now seen a chat engine, or a text engine
I should say, called GPT-3.
I think it still does not pass a Turing test.
There are no real claims that it passes the Turing test.
This comes out of the group at OpenAI,
and they've been relatively careful
in what they've claimed about the system.
But I think, just as clearly as Eugene Goostman was not an advance over Eliza, it is equally clear
that this is a major advance over Eliza, or really over anything that the world has
seen before. This is a text engine that can come up with kind of on topic,
you know, reasonable sounding completions
to just about anything that you ask.
You can ask it to write a poem about topic X
in the style of poet Y, and it will have a go with that.
And it will do, you know, not a great job,
not an amazing job, but a passable job.
Definitely as good as, in many cases,
I would say better than I would have done.
You can ask it to write an essay,
like a student essay about pretty much any topic,
and it will get something that I am pretty sure
would get at least a B minus in most, you know, high school or even college classes, right? And, you know, in some sense, you know,
the way that it did this, the way that it achieves this, you know, Scott Alexander of the,
you know, the much-mourned blog Slate Star Codex, had a wonderful way of putting it.
He said that they basically just ground up the entire internet into a slurry.
Okay. And to tell you the truth, I had wondered for a while why nobody had tried that,
right? Like why not write a chatbot by just doing deep learning over a corpus consisting of the
entire web, right? And so now they finally have done that.
And the results are very impressive.
Now, people can argue about whether this is truly a step toward general
AI or not.
But this is an amazing capability that we didn't have a few years ago.
That, a few years ago, if you had told me that we would have it now,
that would have surprised me. Yeah, and I think that anyone who denies that is just not engaging with what's there.
So their model, it takes a large part of the internet and compresses it in a small number of parameters,
relative to the size of the internet, and is able to, without fine-tuning, do a basic
kind of querying mechanism, just like you described, where you specify a kind of poet and then
you want to write a poem. And it somehow is able to do basically a lookup on the internet
of relevant things. I mean, how else do you explain it?
Well, okay. I mean, the training involved, you know, massive amounts of data from the
internet, and actually took lots and lots of computer power, lots of electricity, right?
You know, there are some very prosaic reasons why this wasn't done earlier, right?
But, you know, it cost some tens of millions of dollars, I think.
I think it's just, okay, like a few million dollars.
Oh, okay, okay. Oh, really? Okay.
And, you know, at the core of it is a neural network, or what's now called a deep net, but you know, they're basically the same thing, right?
So it's a form of, you know, algorithm that people have known about for decades, right?
But it is constantly trying to solve the problem: predict the next word, right?
So it's just trying to predict what comes next.
It's not trying to decide what it should say, what ought to be true.
It's trying to predict what someone who had said all of the words up to the preceding one
would say next.
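[Editor's note: a minimal sketch of the "predict the next word" loop being described. The next_token_probs function here is a hypothetical stand-in for a trained network like GPT-3; the point is only that generation is repeated next-token prediction, nothing more.]

```python
import random

def next_token_probs(tokens):
    """Hypothetical stand-in for a trained language model: given the tokens so
    far, return a probability distribution over candidate next tokens."""
    # Here: a dummy uniform distribution over a tiny vocabulary, for illustration.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(prompt_tokens, max_new_tokens=10):
    """Generate text by repeatedly sampling the predicted next token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        candidates, weights = zip(*probs.items())
        tokens.append(random.choices(candidates, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))
```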
Although to push back on that, that's how it's trained.
That's right.
No, it's arguable.
It's arguable that our very cognition could be a mechanism as simple as that.
Of course, of course, I never said that it wasn't.
But yeah, but I mean, I mean, in some sense, that is, you know, if there is a deep philosophical
question that's raised by GPT-3, then that is it, right?
Are we doing anything other than, you know, this predictive processing, just
constantly trying to fill in a
blank of what would come next after what we just said up to this point? Is that what I'm
doing right now?
It's impossible.
So the intuition that a lot of people have is, look, this thing is not going to be able to reason, the Mount Everest question. Do you think it's possible that GPT-5, 6, and 7 would be able to, with this exact same process, begin to do something that looks to us humans indistinguishable from reasoning?
I mean, the truth is that we don't really know what the limits are, right?
Right, exactly. Because what we've seen so far is that GPT-3 was basically the same thing as GPT-2, but just
with a much larger network, more training time, bigger training corpus, and it was very
noticeably better than its immediate predecessor.
We don't know where you hit the ceiling here, right? I mean,
that's the, that's the amazing part, and maybe also the scary part, right, that, you know,
now my guess would be that, you know, at some point, like, there has to be diminishing returns.
Like, it can't be that simple, can it? Right? But I wish that I had more to base that
guess on. Right. Yeah. I mean, some people say that there will be a limitation
on the, we're going to hit a limit on the amount of data
that's on the internet.
Yes.
Yeah, so sure.
So there's certainly that limit.
I mean, there's also, you know, like if you are looking
for questions that will stump GPT-3, right, you can come up with some without much trouble. Like, you know, even getting it to learn how to balance parentheses, it doesn't do such a great job, right? And, you know, its failures are ironic, right? Like basic arithmetic, right? And you think, isn't that what computers are supposed to be best at? Isn't that where computers already had us beat a century ago?
And yet that's where GPT-3 struggles.
But it's amazing that it's almost like a young child in that way.
But somehow, because it is just trying to predict what comes next,
it doesn't know when it should stop doing
that and start doing something very different, like some more exact logical reasoning.
So, one is naturally led to guess that our brain sort of has some element of predictive processing,
but that it's coupled to other mechanisms, right?
That it's coupled to, you know, first of all, visual reasoning, which GPT-3 also doesn't have any of, right?
Although there's some demonstration that there's a lot of promise there.
That's how, yeah, it can complete images. That's right. And using the exact same kind of transformer mechanism to, like, watch videos on YouTube, the same self-supervised mechanism, it'd be fascinating to think what kind of completion you could do.
Oh yeah, no, absolutely.
Although, if we asked it, like, you know, a word problem that involved reasoning about the locations of things in space, I don't think it does such a great job on those, right? To take an example. And so the guess would be, well, you know, humans have a lot of predictive
humans have a lot of predictive
processing, a lot of just filling in the blanks, but we also have these other mechanisms that we
can couple to or that we can sort of call a subroutine when we need to. And that maybe, maybe,
you know, to go further that one would want to integrate other forms of reasoning.
Let me go on to another topic that is amazing, which is complexity. And let me start with the most absurdly romantic question: what's the most beautiful idea in computer science, or theoretical computer science, to you? Like, what, early on in your life or in general, has captivated you and just grabbed you?
I think I'm going to have to go with the idea of universality.
You know, if you're really asking for the most beautiful, I mean,
so universality is the idea that you know, you put together a few simple operations.
Like, in the case of Boolean logic, that might be the AND gate, the OR gate, the NOT gate.
And then your first guess is, okay, this is a good start, but obviously, as I want to do more complicated things,
I'm going to need more complicated building blocks to express that.
And that was actually my guess when I first learned what programming was.
I mean, when I was an adolescent and someone showed me Apple BASIC and, you know, GW-BASIC, if anyone listening remembers that. Okay, but, you know, I thought, I felt like this was a revelation, you know. It's like finding out where babies come from, it's like that level of, you know, why didn't anyone tell me this before? But I thought, okay, this is just the beginning. Now I know how to write a BASIC program. But to really write an interesting program, like, you know, a video game, which had always been my dream as a kid, to, you know, create my own Nintendo games, obviously I'm going to need to learn some way more complicated form of programming
than that. But eventually, I learned this incredible idea of universality. And that says that,
no, you throw in a few rules, and then you already have enough to express everything.
So, for example, the AND, the OR, and the NOT gate, or in fact even just the AND and the NOT gate, or even just the NAND gate, for example, is already enough to express any Boolean function on any number of bits.
You just have to string together enough of them.
You can build a universe with NAND gates.
You can build a universe out of NAND gates.
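As a small illustration of that universality claim, here is a sketch showing NOT, AND, and OR built from nothing but NAND; the helper names are just for this example.

```python
# A small sketch of the universality point: NOT, AND, and OR built from
# nothing but NAND, which is enough for any Boolean function.
def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

# Check the truth tables against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert NOT(a) == (not a)
print("NAND alone reproduces NOT, AND, and OR")
```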
Yeah, the simple instructions of BASIC are already enough, at least in principle, you know, if we ignore details like how much memory can be accessed and stuff like that. That is enough to express what could be expressed by any programming language whatsoever.
And the way to prove that is very simple.
We simply need to show that in BASIC or whatever, we could write an interpreter or a compiler for whatever other programming language we care about, like C or Java or whatever. And as soon as we had done that, then ipso facto, anything that's expressible in C or Java is also expressible in BASIC. Okay. And so this idea of universality, you know, goes back at least to Alan Turing in the 1930s when, you know, he wrote down this incredibly simple, pared-down model of a computer, the Turing machine, right, where, you know, he pared down the instruction set to just: read a symbol, write a symbol, move to the left, move to the right,
halt, change your internal state.
That's it.
And he proved that this could simulate all kinds of other things. In fact, today, we would call it a Turing-universal model of computation, that is, you know, it has just the same expressive power that BASIC or Java or C++ or any of those other languages have, because anything in those other languages could be compiled down to a Turing machine. Now Turing also proved a different, related thing, which is that there is a single Turing machine that can simulate any other Turing machine if you just describe that other machine on its tape. And likewise, there is a single Turing machine that will run any program, if you just put it on its tape. That's a second meaning of universality.
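To give a feel for how pared-down that instruction set is, here is a toy sketch of a Turing-machine interpreter; the "flip every bit" machine is invented for illustration and is not any particular machine from Turing's paper.

```python
# A minimal sketch of the pared-down instruction set being described:
# read a symbol, write a symbol, move left or right, change state, halt.
# The example machine just flips every bit on its tape.
def run_turing_machine(rules, tape, state="start"):
    """rules: (state, symbol) -> (new_symbol, move, new_state); state 'H' halts."""
    tape = dict(enumerate(tape))
    head = 0
    while state != "H":
        symbol = tape.get(head, "_")           # '_' is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "H"),           # hit a blank: halt
}
print(run_turing_machine(flip_bits, "10110"))  # -> 01001_
```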
First of all, he couldn't even visualize it, and that was in the 30s.
Yeah, the 30s. That's right. Before computers, really.
I mean, I wonder what that felt like, you know, learning that there's no Santa Claus or something. Because I don't know if that's empowering or paralyzing,
because it doesn't give you any,
it's like you can't write a software engineering book
and make that the first chapter and say we're done.
Well, I mean, I mean, right, I mean, in one sense,
it was this enormous flattening of the universe.
I had imagined that there was going to be
some infinite hierarchy
of more and more powerful programming languages. And then I kicked myself for having such a stupid
idea. But apparently, Gödel had had the same conjecture in the 30s. And then Gödel read Turing's paper and he kicked himself and he said, yeah, I was completely wrong about that.
But I had thought that maybe where I can contribute
will be to invent a new more powerful programming
language that lets you express things that could never
be expressed in basic.
And how would you do that?
Obviously, you couldn't do it itself in basic.
But there is this incredible flattening
that happens once you learn what is universality.
But then it's also like an opportunity
because it means once you know these rules,
then, you know, the sky is the limit, right?
Then you have kind of the same weapons at your disposal
that the world's greatest
programer has. It's now all just a question of how you wield them.
Right, exactly. But so every problem is solvable, but some problems are harder than others.
Well, yeah, there's the question of how much time, you know, of how hard is it to write
a program? And then there's also the questions of what resources
does the program need, how much time, how much memory,
those are much more complicated questions.
Of course, ones that we're still struggling with today.
Exactly.
So you've, I don't know if you created Complexity Zoo, or.
I did create the Complexity Zoo.
What is it?
What's Complexity?
Oh, all right, all right.
Complexity theory is the study of sort of the inherent resources
needed to solve computational problems.
Okay, so it's easiest to give an example.
Like, let's say we want to add two numbers, right? If I want to add them, you know, if the numbers are twice as long, then it will take me twice as long to add them, but only twice as long, right? It's no worse than that.
For a computer?
For a computer or for a person using pencil and paper, for that matter.
If you have a good algorithm.
Yeah, that's right.
Even if you just use the elementary school algorithm of just carrying, you know, then
it takes time that is linear in the length of the numbers, right?
Now, multiplication, if you use the elementary school algorithm, is harder because you have
to multiply each digit of the first number by each digit of the second one.
Yeah.
And then deal with all the carries.
So that's what we call a quadratic time algorithm, right?
If the numbers become twice as long, now you need four times as much time,
okay? So now as it turns out, people discovered much faster ways to multiply numbers using computers.
And today we know how to multiply two numbers that are n digits long using a number of steps that's nearly linear in n.
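As a rough check of that scaling, here is a sketch that counts single-digit multiplications in the grade-school algorithm; the digit strings are arbitrary, and the point is only that doubling the length quadruples the work.

```python
# A toy check of the scaling being described: the grade-school algorithm
# multiplies every digit of one number by every digit of the other, so the
# count of single-digit multiplications grows quadratically with the length.
def schoolbook_multiply(x, y):
    xs, ys = [int(d) for d in str(x)], [int(d) for d in str(y)]
    ops = 0
    total = 0
    for i, a in enumerate(reversed(xs)):
        for j, b in enumerate(reversed(ys)):
            total += a * b * 10 ** (i + j)
            ops += 1                      # one single-digit multiplication
    return total, ops

for n_digits in (5, 10, 20):
    x = int("7" * n_digits)
    product, ops = schoolbook_multiply(x, x)
    assert product == x * x
    print(n_digits, "digits ->", ops, "digit multiplications")
# 5 -> 25, 10 -> 100, 20 -> 400: doubling the length quadruples the work.
```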
These are questions you can ask,
but now let's think about a different thing
that people, you know, encountered in elementary school: factoring a number.
Okay, take a number and find its prime factors, right?
And here, you know, if I give you a number with 10 digits,
I ask you for its prime factors.
Well, maybe it's even, so you know
that 2 is a factor, you know, maybe it ends in 0, so you know that 10 is a factor, right? But,
you know, other than a few obvious things like that, you know, if the prime factors are all very
large, then it's not clear how you even get started, right? You know, it seems like you have to do an exhaustive search among an enormous number of factors. Now, as many people might know, for better or worse, the security, you know,
of most of the encryption that we currently use to protect the internet is based on the belief,
and this is not a theorem, it's a belief, that factoring is an inherently
hard problem for our computers. We do know algorithms that are better than just trial
division, just trying all the possible divisors, but they are still basically exponential.
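For contrast, here is a sketch of the naive trial-division approach being mentioned; it is fine for small numbers and hopeless for the thousand-bit numbers used in cryptography.

```python
# A minimal sketch of trial division, the exhaustive search being contrasted
# against. For an n-digit number whose prime factors are all large, the loop
# runs on the order of 10**(n/2) times, i.e. exponentially in the digit count.
def trial_division(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2 ** 5 * 3 * 49))   # [2, 2, 2, 2, 2, 3, 7, 7]
# Fine for small inputs; hopeless for thousand-bit keys.
```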
And that's exponential. It's hard.
Yeah, exactly. So the fastest algorithms that anyone has discovered, at least publicly discovered, you know, I'm assuming that the NSA doesn't know something better, they take time that basically grows exponentially with the cube root of the size of the number that you're factoring, right? So that cube root, that's the part that takes all the cleverness. Okay, but there's still an exponential.
There's still an exponentiality there.
What that means is that when people use a thousand bit keys for their cryptography,
that can probably be broken using the resources of the NSA or the world's other intelligence agencies.
You know, people have done analyses that say, you know, with a few hundred million dollars of
computer power, they could totally do this.
And if you look at the documents that Snowden released, you know, it looks a lot like they are doing that, or something like that.
It would kind of be surprising if they weren't.
Okay.
But, you know, if that's true, then in some ways that's reassuring, because if that's the best that they can do, then that would say that they can't break 2000-bit numbers.
Right?
Exactly.
Then 2000 bit numbers would be beyond what even they could do.
They haven't found an efficient algorithm.
That's where all the worries and the concerns of quantum computing came in, that there'd be some kind of shortcut around that.
Right, so complexity theory is a huge part
of, let's say, the theoretical core of computer science. You know, it started in the 60s and 70s as, you know, sort of an autonomous field. So it was, you know, already well developed even by the time that I was born. But in 2002, I made a website called the Complexity Zoo, to answer your question, where I just
tried to catalog the different complexity classes, which
are classes of problems that are
solvable with different kinds of resources.
So these are kind of, you could think of complexity classes
as like being
almost to theoretical computer science, like what the elements are to chemistry, right?
They're sort of, you know, there are most basic objects in a certain way.
I feel like the elements have a characteristic to them where you can't just add an infinite number.
Well, you could, but beyond a certain point, they become unstable.
Right? Right. So it's like, you know, in theory, you can have atoms with, you know, and look, I mean, a neutron star, you know, is a nucleus with, you know, untold billions of neutrons in it, of hadrons in it. But for sort of normal atoms, probably you can't get much above an atomic weight of 100 or 150 or so, sorry, I mean, beyond 150 or so protons, without it very quickly fissioning.
With complexity classes, well, yeah,
you can have an infinity of complexity classes.
But maybe there's only a finite number of them
that are particularly interesting, right?
Just like with anything else, you care about some
more than about others.
So what kind of interesting classes are there?
Or maybe you can just say, if you take any kind of computer science class, what are the classes you learn?
Good. Let me tell you sort of the biggest ones, the ones that you would learn first. So,
you know, first of all, there is P. That's what it's called. It stands for polynomial time.
And this is just the class of all of the problems that you could solve with a conventional computer like your iPhone or your laptop,
you know, by a completely deterministic algorithm, right?
Using a number of steps that grows only like the size of the input raised to some fixed power. So if your algorithm is linear time, like for adding numbers, that problem is in
P. If you have an algorithm that's quadratic time, like the elementary school algorithm
for multiplying two numbers, that's also in P. Even if it was the size of the input
to the tenth power, or to the 50th power, well, that wouldn't be very good in practice, but you know, formally
we would still count that, that would still be in P. Okay, but if your algorithm takes exponential
time, meaning like if every time I add one more data point to your input, the time needed by the algorithm doubles, if you need time like two to the power
of the amount of input data, then that
we call an exponential time algorithm.
And that is not polynomial.
So P is all of the problems that
have some polynomial time algorithm.
So that includes most of what we do with our computers
on a day-to-day basis. All the sorting, basic arithmetic, whatever is going on in your email reader or in Angry
Birds.
It's all in P.
Then the next super important class is called NP.
That stands for non-deterministic polynomial.
Does not stand for not polynomial, which is a
common confusion. But NP was basically all of the problems where if there is a solution,
then it is easy to check the solution, if someone shows it to you, okay? So actually
a perfect example of a problem in NP is factoring the one I told you about before.
Like if I gave you a number with thousands of digits and, you know,
I asked you, does this have at least three non-trivial divisors, right?
That might be a super hard problem to solve, right?
It might take you millions of years using any algorithm that's known, at least running
on our existing computers.
Okay, but if I simply showed you the divisors, I said, here are three divisors of this number,
then it would be very easy for you to ask your computer to just check each one and see
if it works, just divide it in, see if there's any remainder, right?
And if they all go in, then you've checked.
Well, I guess there were, right?
So any problem where, whenever there's a solution, there is a short witness, like a polynomial-size witness, that can be checked in polynomial time, that we call an NP problem, okay?
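A hedged sketch of the "easy to check" half of that definition: verifying a claimed witness, here a few nontrivial divisors, takes only a handful of divisions, even when finding such divisors might be astronomically hard. The numbers are arbitrary examples.

```python
# A small sketch of the NP-style witness check: each claimed divisor must be
# nontrivial and actually divide n, which is a fast, polynomial-time test.
def check_divisor_witness(n, claimed_divisors):
    """Verify a claimed witness: every divisor is nontrivial and divides n."""
    return all(1 < d < n and n % d == 0 for d in claimed_divisors)

n = 3 * 5 * 7 * 1_000_003
print(check_divisor_witness(n, [3, 5, 7]))    # True: witness verified quickly
print(check_divisor_witness(n, [3, 5, 11]))   # False: 11 does not divide n
```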
Beautiful. And yeah, so every problem that's in P is also in NP, right?
Because you know, you could always just ignore the witness
and just, you know, if a problem is in P,
you can just solve it yourself.
Okay, but now, in some sense, the central, you know, mystery of theoretical computer science is: is every NP problem in P?
So if you can easily check the answer to a computational problem,
does that mean that you can also easily find the answer?
Even though there's all these problems that appear to be very difficult to find the answer to, it's still an open question whether a fast algorithm exists.
Because no one has proven that there's no way to do it.
It's arguably the most, I don't know, the most famous, the most, maybe, interesting, maybe you disagree with that, problem in theoretical computer science.
It's the most famous, for sure.
P equals NP.
Yeah.
If you were to bet all your money, where do you put your money?
That's an easy one.
P is not equal to NP.
Okay.
So I like to say that if we were physicists
we would have just declared that to be a law of nature, you know, just like thermodynamics.
It's hilarious.
And given ourselves Nobel Prizes.
That is so funny. Yeah.
No, no, look, if later it turned out that we were wrong, we'd just give ourselves more Nobel Prizes. Yeah.
That's so harsh, but so true.
I mean, no, I mean, it's really just because we are mathematicians, or descended from mathematicians, you know,
we have to call things conjectures that other people would just call empirical facts or
discoveries, right? But one shouldn't read more into that difference in language, you know,
about the underlying truth. So, okay, so you're a good investor
and good spender of money.
So then let me ask another way.
Is it possible at all?
And what would that look like if P
indeed equals NP?
Well, I do think that it's possible.
I mean, in fact, you know, when people really
pressed me on my blog for what odds would I put,
I said, you know, two or three percent odds.
That P equals NP?
That P equals NP.
Yeah, just because, you know, when people,
I mean, you really have to think about like,
if there were 50, you know, mysteries like P versus NP,
and if I made a guess about every single one of them,
would I expect to be right 50 times?
Right, and the truthful answer is no.
Okay, so, you know, and that's what you really mean
in saying that you have, you know,
better than 98% odds for something, okay?
But, so yeah, you know, I mean,
there could certainly be surprises.
And look, if P equals NP,
well then there would be the further question
of, you know, is
the algorithm actually efficient in practice?
Right?
I mean, Don Knuth, who I know that you've interviewed as well, right?
He likes to conjecture that P equals NP, but that the algorithm is so inefficient that
it doesn't matter anyway, right?
Now, I don't know.
I've listened to him say that.
I don't know whether he says that just
because he has an actual reason for thinking it's true or just because it sounds cool. Yeah. Okay,
but, um, but you know, that that's a logical possibility, right? That the algorithm could be
n to the 10,000 time, or it could even just be n squared time but with a leading constant of, it could be a googol times n squared or something like that. In that case, the fact that P equals NP, well, it would, you know, ravage
the whole theory of complexity. We would have to, you know, rebuild from the ground up.
But in practical terms, it might mean very little, right, if the algorithm was too inefficient to run. If the algorithm could actually be run in practice,
like if it had small enough constants,
you know, or if you could improve it
to where it had small enough constants
that was efficient in practice,
then that would change the world, okay?
You think it would have, like what kind of impact
would it have?
Well, okay, I mean, here's an example.
I mean, you could, well, okay, just for starters,
you could break basically all of the encryption
that people use to protect the internet.
That's just for starters.
You could break Bitcoin and every other cryptocurrency,
or mine as much Bitcoin as you wanted, right?
Become a super-duper billionaire, right?
And then plot your next move.
Right. Okay, that's just for starters.
That's a good one.
Now, your next move might be something like, you know,
you now have like a theoretically optimal way
to train any neural network to find parameters
for any neural network, right?
So you could now say, like, is there
any small neural network that generates
the entire content of Wikipedia?
Right. And now the question is not, can you find it?
The question has been reduced to does that exist or not.
Yes.
If it does exist, then the answer would be yes, you can find it.
Okay?
If you had this algorithm in your hands, okay?
You could ask your computer, you know, I mean, P versus NP is one of these seven problems that carries a million-dollar prize from the Clay Foundation if you solve it. Others are the Riemann hypothesis, the Poincaré conjecture, which was solved, although the solver turned down the prize, and four others. But what I like to say, the way that we can see that P versus NP is the biggest of all of these questions,
is that if you had this fast algorithm,
then you could solve all seven of them.
Okay, you just ask your computer,
is there a short proof of the Riemann hypothesis, right?
You know, that a machine could check, in a language where a machine could verify it,
and provided that such a proof exists,
then your computer finds it in a short amount of time
without having to do a brute force search.
Okay, so I mean, I mean, those are the stakes
of what we're talking about.
But I hope that also helps to give your listeners
some intuition of why I and most of my colleagues
would put our money on P not equaling N P.
Is it possible, apologies if this is a really dumb question, but is it possible that a proof comes out that P equals NP, but an algorithm that realizes P equals NP is impossible to find? Is that, like, crazy?
Okay, well, if P equals NP, it would mean that there is such an algorithm.
But it exists.
But it would mean that it exists.
Now in practice, normally the way that we would prove anything like that would be by
finding the algorithm.
By finding one algorithm.
But there is such a thing as a non-constructive proof that an algorithm exists.
This has really only reared its head, I think, a few times in the history of our field.
But it is theoretically possible that such a thing could happen.
But even here, there were some amusing observations
that one could make.
So there is this famous observation of Leonid Levin, who was one of the original discoverers of NP-completeness. And he said, well, consider the following algorithm that I guarantee will solve the NP problems efficiently, provided that P equals NP. Here is what it does. It just enumerates every possible algorithm in a gigantic infinite list, right, like in alphabetical order, right?
And many of them maybe won't even compile,
so we just ignore those.
Okay, but now we just run the first algorithm,
then we run the second algorithm,
we run the first one a little bit more,
then we run the first three algorithms for a while,
we run the first four for a while,
this is called dovetailing by the way
This is a known trick in
theoretical computer science, okay, but we do it in such a way that you know
whatever is the algorithm out there in our list that solves the NP problems efficiently, we will eventually hit that one, right? And the key is that whenever we hit that one, by assumption, it has to solve the problem, that is, to find the solution,
and once it claims to find the solution,
then we can check that ourselves,
because these are NP problems.
Then we can check it.
Now, this is utterly impractical.
You'd have to do this enormous exhaustive search
among all the algorithms, but from a certain theoretical standpoint,
that is merely a constant pre-factor.
That's merely a multiplier of your running time.
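Here is a toy sketch of that dovetailing pattern. The two hand-written search procedures stand in for the enumeration of all programs, which this sketch does not attempt, so it illustrates the scheduling trick rather than Levin's actual universal search.

```python
# A toy sketch of dovetailing: run algorithm 1 for a step, then algorithms
# 1-2, then 1-3, and so on, accepting the first claimed solution that passes
# the easy NP-style check. The "algorithms" here are invented generators; the
# point is the interleaved scheduling, not the enumeration of all programs.
import itertools

def silly_search(n):          # a bad algorithm: counts down from n
    for d in range(n, 1, -1):
        yield d

def sensible_search(n):       # a better algorithm: counts up from 2
    for d in range(2, n):
        yield d

def is_valid(n, d):           # the easy polynomial-time check
    return 1 < d < n and n % d == 0

def dovetail(algorithms, n):
    runners = [alg(n) for alg in algorithms]
    for budget in itertools.count(1):
        for runner in runners[:budget]:          # first `budget` algorithms...
            candidate = next(runner, None)       # ...each advance one step
            if candidate is not None and is_valid(n, candidate):
                return candidate

print(dovetail([silly_search, sensible_search], 91))   # 7, found by the second algorithm
```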
So there are tricks like that one can do to say that,
in some sense, the algorithm would have to be constructive.
But in the human sense, it is conceivable that one could prove such a thing via a non-constructive method.
Is that likely?
I don't think so, personally.
Not personally.
So that's P and NP, but the Complexity Zoo is full of wonderful creatures.
Well, it's got about 500 of them.
500.
So how do you get, yeah, what?
Yeah, how do you get more?
How do you get more?
Yeah, well, okay.
I mean, I mean, just for starters,
there is everything that we could do
with a conventional computer
with a polynomial amount of memory,
okay, but possibly an exponential amount of time
because we get to reuse the same memory
over and over again.
That is called PSPACE. And that's actually, we think, an even larger class than NP. Okay, well, P is contained in NP, which is contained in PSPACE, and we think that those containments are strict.
And the constraint there is on the memory, the memory has to grow polynomially with the size of the problem.
That's right.
That's right.
But in PSPACE, we now have interesting things that were not in NP. Like, as a famous example: from a given position in chess, does white or black have the win? Let's say, provided that the game lasts only for a reasonable number of moves. Or likewise for Go. And even for the generalizations of these games to arbitrary-size boards, because with an 8x8 board, you could say that's just a constant-size problem, you just, in principle, solve it in O(1) time. Right? But so we really mean the generalizations of, you know, games to arbitrary-size boards here. Or another thing in PSPACE would be like, I give you some really hard constraint satisfaction problem, like, you know, traveling salesperson, or, you know, packing boxes into the trunk of your car or something like that, and I ask not just, is there a solution, which would be an NP problem, but I ask, how many solutions are there? Okay? You know, counting the number of valid solutions, those problems lie in a complexity class called sharp P, like, it looks like hashtag P.
Got it.
Okay, which sits between NP and PSPACE.
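To make the NP-versus-sharp-P distinction concrete, here is a brute-force sketch over a made-up three-variable constraint: the same enumeration answers "is there a solution?" and "how many solutions are there?".

```python
# A small sketch of the difference between the NP-style question ("is there a
# solution?") and the sharp-P-style question ("how many solutions are there?"),
# using brute force over an invented toy constraint.
from itertools import product

def constraint(x, y, z):
    # an arbitrary small constraint-satisfaction problem
    return (x or y) and (not y or z) and (x != z)

assignments = list(product([False, True], repeat=3))

exists = any(constraint(*a) for a in assignments)    # NP-style question
count = sum(constraint(*a) for a in assignments)     # sharp-P-style question

print(exists, count)   # True 2 for this toy formula
```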
Then there's all the problems that you can do in exponential time, okay? That's called EXP. And by the way, it was proven in the 60s that EXP is larger than P.
So we know that much.
We know that there are problems that are solvable
in exponential time, that are not solvable
in polynomial time.
Okay, in fact, we even know,
we know that there are problems that are solvable
in N cubed time, that are not solvable in N squared time.
And that doesn't help us with the controversy between P and NP?
Unfortunately, it seems not or certainly not yet, right?
The techniques that we use to establish those things, they're very, very related to how
Turing proved the unsolvability of the halting problem.
But they seem to break down when we're comparing two different resources, like time versus space
or like, you know like P versus NP.
Then there's what you can do with a randomized algorithm, that can sometimes, you know, have some probability of making a mistake.
That's called BPP, bounded error probabilistic polynomial time.
And then, of course, there's one that's very close
to my own heart, what you can efficiently do doing polynomial time using And then of course there's one that's very close to my own heart. What you can
efficiently do, do in polynomial time using a quantum computer. Okay, and that's called BQP.
And so, you know, what's understood about that class? Okay, so P is contained in BPP, which is
contained in BQP, which is contained in P space. So anything you can, in fact, in something very similar
to SharpP, BQP is basically, well, it's contained in P
with the magic power to solve SharpP problems.
So why is BQP contained in PSPACE?
Oh, that's an excellent question.
So there is, I mean, one has to prove that, okay?
But the proof, you could think of it as using Richard Feynman's picture of quantum mechanics,
which is that you can always, we haven't really talked about quantum mechanics in this conversation.
We did, in brief.
Yeah, we did last time.
But basically, you can always think of a quantum computation
as like a branching tree of possibilities
where each possible path that you could take through,
the space has a complex number attached to it called
an amplitude.
Okay, and now the rule is, you know, when you make a measurement at the end, you will see a random answer. Okay, but quantum mechanics is all about calculating the probability that you're going to see one potential answer versus another one, right? And the rule for calculating the probability that you'll see some answer is that you have to add up the amplitudes for all of the paths that could have led to that answer.
And then, you know, that's a complex number.
So that, you know, how could that be a probability?
Then you take the squared absolute value of the result.
That gives you a number between zero and one.
Okay.
So, I just summarize quantum mechanics in like 30 seconds.
But now, you know, what this already tells us is that anything I can do with a quantum computer,
I could simulate with a classical computer if I only have exponentially more time.
Okay, and why is that?
Because if I have exponential time, I could just write down this entire branching tree and
just explicitly calculate each of these amplitudes.
Right?
You know, that will be very inefficient, but it will work.
Right?
That's enough to show that quantum computers could not solve the halting problem, or, you know, they could never do anything that is literally uncomputable in Turing's sense.
Okay, but now, as I said, there's even a stronger result,
which says that BQP is contained in PSPACE.
The way that we prove that is that we say,
if all I want is to calculate the probability
of some particular output happening,
which is all I need to simulate a quantum computer,
really, then I don't need to write down the entire quantum state, which is an exponentially
large object.
All I need to do is just calculate what is the amplitude for that final state, and to do
that, I just have to sum up all the amplitudes that lead to that state.
Okay, so that's an exponentially large sum, but I can calculate it just reusing the same memory over and over for each term in the sum. And hence the P in PSPACE.
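Here is a toy sketch of that path-sum argument for a single qubit passing through two Hadamard gates; real amplitudes happen to suffice for this gate, and the circuit is chosen only because the answer is easy to check.

```python
# A hedged sketch of the Feynman path-sum picture: the amplitude for one
# output is the sum, over every path through the intermediate basis states,
# of the product of gate amplitudes along that path, reusing constant extra
# memory per path. The circuit is two Hadamard gates on one qubit, which
# cancel out, so all the amplitude returns to |0>.
from itertools import product
from math import sqrt

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]        # amplitude H[out][in]

def output_amplitude(gates, start, end):
    n_intermediate = len(gates) - 1
    total = 0.0
    # One path = one choice of basis state between each pair of gates.
    for path in product([0, 1], repeat=n_intermediate):
        states = [start, *path, end]
        amp = 1.0
        for gate, (s_in, s_out) in zip(gates, zip(states, states[1:])):
            amp *= gate[s_out][s_in]
        total += amp                     # constant extra memory per path
    return total

for end in (0, 1):
    amp = output_amplitude([H, H], start=0, end=end)
    print(end, round(amp, 6), "probability", round(amp ** 2, 6))
# Prints amplitude 1.0 for |0> and 0.0 for |1>: two Hadamards undo each other.
```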
Yeah.
So out of that whole complexity zoo, it could be BQP, what do you find is the class that captured your heart the most? The most beautiful class, I guess, yeah.
I used as my email address bqpqpoly at gmail.com.
Yes, because BQP/qpoly, amazingly, no one had taken it.
Amazing.
But, you know, this is a class that I was involved in sort of defining, proving the first theorems about, in 2003 or so.
So it was kind of close to my heart.
But this is like if we extended BQP, which is the class of everything we can do efficiently
with a quantum computer, to allow quantum advice, which means imagine that you had some special
initial state that could somehow help you do computation. And maybe such a state would be
exponentially hard to prepare, but maybe somehow these states were formed in the Big Bang or something,
and they've just been sitting around ever since. If you found one, and this state could be, like, ultra-powerful, there are no limits on how powerful it could be, except that this state doesn't know in advance which input you've got, right? It only knows the size of your input. You know, and that's BQP/qpoly. So that's one that I just personally happen to love.
But if you're asking, well, there's a class that I think is way more beautiful and fundamental than a lot of people, even within this field, realize that it is. That class is called SZK, or statistical zero knowledge.
And there's a very, very easy way to define this class, which is to say, suppose that I have two algorithms that each sample
from probability distributions, right? So each one just outputs random samples according
to, you know, possibly different distributions. And now the question I ask is, let's say, distributions
over strings of n bits, so over an exponentially large space.
Now I ask, are these two distributions close or far
as probability distributions?
Any problem that can be reduced to that,
if it can be put into that form, is an SZK problem.
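As a toy version of that defining problem, here is a sketch that compares two made-up samplers over a tiny domain by brute force; for distributions over n-bit strings this brute force blows up exponentially, which is exactly why the general problem is hard.

```python
# A toy version of the defining SZK problem: given two samplers, decide
# whether their output distributions are close or far in statistical
# (total variation) distance. For an 8-element domain we can just tabulate;
# for n-bit strings the domain is exponentially large and this breaks down.
from collections import Counter
import random

def sampler_a():
    return random.getrandbits(3)                         # uniform over 0..7

def sampler_b():
    return random.choice([0, 0, 1, 2, 3, 4, 5, 6, 7])    # slightly biased toward 0

def empirical_distribution(sampler, trials=100_000):
    counts = Counter(sampler() for _ in range(trials))
    return {x: counts[x] / trials for x in range(8)}

def statistical_distance(p, q):
    return 0.5 * sum(abs(p.get(x, 0) - q.get(x, 0)) for x in set(p) | set(q))

p, q = empirical_distribution(sampler_a), empirical_distribution(sampler_b)
print(round(statistical_distance(p, q), 3))   # roughly 0.1 for this toy pair
```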
And the way that this class was originally discovered
was completely different from that.
And it was kind of more complicated.
It was discovered as the class of all of the problems
that have a certain kind of what's called zero-knowledge proof.
The zero-knowledge proofs are one of the central ideas
in cryptography, you know, Shafi Goldwasser and Silvio Micali won the Turing Award
for, you know, inventing them.
And they're at the core of even some crypto currencies
that, you know, people use nowadays.
But there are zero knowledge proofs or ways of proving
to someone that something is true, like, you know,
that there is a solution to this, you
know, optimization problem, or that these two graphs are isomorphic to each other or something,
but without revealing why it's true, without revealing anything about why it's true.
Okay.
SCK is all of the problems for which there is such a proof that doesn't rely on any cryptography.
Okay? And if you wonder, like, how could such a thing possibly exist? Well, like imagine
that I had two graphs and I wanted to convince you that these two graphs are not isomorphic,
meaning, you know, I cannot permute one of them so that it's the same as the other one.
Right? You know, that might be a very hard statement to prove. Like, you might have to do a very exhaustive enumeration of, you know, all the different permutations before you were convinced that it was true.
But what if there were some all-knowing wizard that said to you, look, I'll tell you what, just pick one of the graphs randomly, then randomly permute it, then send it to me, and I will tell you which graph you started with. And I will do that every single time.
Right. Okay, got it. I got it.
And let's say that that wizard did that 100 times
and it was right every time. Right? Now, if the graphs were isomorphic, then, you know, it would have been flipping a coin each time, right? It would have had only a 1 in 2 to the 100 power chance of guessing right each time. But, you know, so if it's right every time, then now you're statistically convinced that these graphs are not isomorphic, even though you've learned nothing new about why they aren't.
So fascinating.
So yeah, so SZK is all of the problems that have protocols like that one,
but it has this beautiful other characterization.
It's shown up again and again in my own work
and a lot of people's work.
And I think that it really is one of the most fundamental classes, it's just that people didn't realize that when it was first discovered.
So we're living in the middle of a pandemic currently.
Yeah.
How has your life been changed?
Or, better to ask, how has your perspective of the world changed with this world-changing event of a pandemic overtaking the entire world?
Yeah.
Well, I mean, all of our lives have changed.
You know, like, I guess, as with no other event since I was born. You would have to go back to World War II for something, I think, of this magnitude in its effect on the way that we live our lives.
As for how it has changed my world view,
I think that the failure of institutions,
like the CDC, like other institutions that we sort of thought were trustworthy,
like a lot of the media, was staggering, was absolutely breathtaking.
It is something that I would not have predicted.
Right?
I think I wrote on my blog that it's fascinating to rewatch the movie Contagion from a decade
ago, right?
That correctly for saw so many aspects of what was going on, an airborne virus originates
in China spreads to much of the world, shuts everything down until a vaccine can be developed.
Everyone has to stay at home.
It gets an enormous number of things right.
But the one thing that they could not imagine
in this movie, everyone from the government is hyper-competent, hyper-dedicated to the public good. Right.
Best of the best.
Yeah, they're the best of the best.
And there are these conspiracy theorists who think this is all fake news.
There's not really a pandemic.
And those are some random people on the internet who the hyper-competent government people
have to oppose. In trying to envision the worst thing that could happen,
there was a failure of imagination.
The movie makers did not imagine that the conspiracy theorists
and the incompetents and the nutcases would have captured our institutions and be the ones actually running things.
So you had a certain, yeah, I love competence in all walks of life. I love, I get so much energy, I'm so excited, when people do an amazing job. And I, like you, or maybe you can clarify, but I had maybe not an intuition, but a hope that government at its best could be ultra-competent. So, first of all, two questions: how do you explain the lack of competence? And the other, maybe on the positive side, how can we build a more competent government?
Well, there's an election in two months.
You have faith in the election?
It's not going to fix everything, but it's like, I feel like there is a ship that is sinking
and you could at least stop the sinking. I think that there are much, much deeper problems. I mean, I think that it is
plausible to me that a lot of the failures with the CDC, with some of the other health agencies,
even, you know, pre-date Trump, you know, pre-date the, you know, right-wing populism that has sort of taken over much of the world now.
And, you know, I've actually been strongly in favor of, you know, rushing vaccines. You know, I thought that we could have done human challenge trials,
which were not done.
We could have had volunteers to actually get vaccines, get exposed to COVID.
So, innovative ways of accelerating what we do. Yeah, I thought that each month that a vaccine is closer is worth like trillions of dollars, and of course lives, you know, at least hundreds of thousands of lives.
Are you surprised that it's taking this long, that we still don't have a plan, that there's still not a feeling like anyone is actually doing anything in terms of alleviating it, like any kind of plan? So there's a bunch of stuff with the vaccine, but you could also do a testing infrastructure where, yeah, everybody's tested nonstop, with contact tracing, all that kind of thing.
I mean, I'm as surprised as almost everyone else.
I mean, this is a historic failure.
It is one of the biggest failures in the 240-year history of the United States.
Right?
And we should be crystal clear about that.
And one thing that I think has been missing, even from the more competent side, is sort
of the World War II mentality, right? The mentality of, let's just,
if we can, by breaking a whole bunch of rules,
get a vaccine and even half the amount of time
as we thought, then let's just do that
because we have to weigh all of the moral qualms
that we have about doing that against the moral qualms of not doing.
And one key little aspect to that that's deeply important to me and when we go in that topic
next is the World War II mentality wasn't just about breaking all the rules to get the job done.
There was a togetherness to it. There's, so I would, if I were president right now,
it seems quite elementary to unite the country
because we're facing a crisis.
It's easy to make virus enemy
and it's very surprising to me that the division
has increased as opposed to decreases.
Yeah, well, that's heartbreaking.
Yeah, well, look, I mean, it's been said by others
that this is the first time in the country's history
that we have a president who does not even pretend
to want to unite the country, right?
Yeah.
I mean, Lincoln, who fought a civil war,
said he wanted to unite the country, right?
And I do worry enormously about
what happens if the results of this election are contested you know and you know will there be
violence as a result of that and will we have a clear path of succession and you know look I mean
you know this is all well we're going to find out the answers to this in two months and if none of
that happens maybe I'll look foolish but I am willing to go on the record and
say, I am terrified about that.
Yeah, I've been reading The Rise and Fall of the Third Reich.
So if I can, this is like one little voice to put out there that I think November will
be a really critical month for people to breathe and put love out there.
Not anger, because anger in that context, no matter who wins, no matter what is said, may destroy our country, may destroy the world, because of the power of the country.
So it's really important to be patient, loving, empathetic.
Like one of the things that it troubles me
is that even people on the left are unable
to have a love and respect for people who voted for Trump.
They can't imagine that there's good people
that could vote for the opposite side.
And that's...
Oh, I know there are, because I know some of them.
Yeah.
I mean, it's still, you know, maybe it baffles me, but, you know, I know such
people.
Let me ask you this.
It's also heartbreaking to me on the topic of cancel culture.
So in the machine learning community, I've seen it a little bit that there's aggressive
attacking of people who are trying to have a nuanced conversation about things. And it's troubling because it feels like nuanced conversation is the only way to talk about
difficult topics.
And when there's a thought police and speech police on any nuanced conversation that everybody
has to do like in animal farm chant that racism is bad and sexism is bad, which is things that everybody believes.
And they can't possibly say anything new on it.
It feels like it goes against any kind of progress from my kind of shallow perspective.
But you've written a little bit about cancel culture.
Do you have thoughts there?
Well, look, I mean, to say that I am opposed to the trend of cancellations or of shouting people
down rather than engaging them, that would be a massive understatement.
And I feel like I have put my money where my mouth is, not as much as some people have,
but I've tried to do something.
I have defended some, you know, some unpopular people and unpopular, you know, ideas on my blog. I've, you know, tried to defend norms of open discourse, of, you know, reasoning with our opponents, even when I've
been shouted down for that on social media, you know, called a racist, called a sexist, all those things.
Which by the way, I should say, I would be perfectly happy
to, if we had time to say, 10,000 times,
my hatred of racism, of sexism, of homophobia.
But what I don't want to do is to cede to some particular political
faction, the right to define exactly what is meant by those terms, to say, well,
then you have to agree with all of these other extremely contentious positions
or else you are a misogynist or else you are a racist, right?
I say, well, no, you know, don't I, or, you know, don't people like me, also get a say in the discussion about, you know, what is racism, about what is going to be the most effective way to combat racism, right? And, you know, this, um, cancellation mentality, I think, is spectacularly ineffective at its own professed goal of, you know, combating racism and sexism.
What's a positive way out?
So I try to, I don't know if you see what I do on Twitter, but on Twitter, and mostly in my whole life,
Mm-hmm.
I really focus on the positive and I try to put love out there in the world.
And still, I get attacked.
And I look at that and I wonder like,
You too?
I didn't know.
Like I haven't actually said anything difficult
and nuanced.
You talk about somebody like Steven Pinker, who, I actually don't know the full range of things that he's attacked for, but he tries to say difficult things, he tries to be thoughtful about difficult topics. And obviously he just gets slaughtered.
Well, I mean, I mean, yes, but it's also amazing how well Steve has withstood it.
I mean, he just survived that attempt to cancel him just
a couple of months ago. Right.
Psychologically, he survives it too, which amazes me, because I don't think I could.
Yeah, I've gotten to know Steve a bit. He is incredibly unperturbed
by this stuff. And I admire that and I envy it. I wish that I could be like that. I mean,
my impulse when I'm getting attacked is I just want to engage every single
like anonymous person on Twitter and Reddit who is saying mean stuff about me. And I want
to just say, look, look, can we just talk this over for an hour and then, you know, you'll
see that I'm not that bad. And you know, sometimes that even works. The problem is then there's
the, you know, the 20,000 other ones.
And then there's the psychological toll. Does that wear on you?
It does.
It does.
But yeah, I mean, in terms of what is the solution, I mean, I wish I knew, right?
And so, you know, in a certain way, these problems are maybe harder than P versus NP, right?
I mean, you know, but I think that part of it has to be, you know, I think that there's a lot of sort of silent support for what I'll call the open-discourse side, the, you know, reasonable enlightenment side. And I think that that support has to become less silent. Right? I think that a lot of people sort of, you know, agree that a lot of these cancellations and attacks are ridiculous, but are just afraid to say so, right?
Or else they'll get shouted down as well, right?
That's just the standard witch hunt dynamic, which, you know, of course, this, you know,
this faction understands and exploits to great advantage.
But if, you know, more people just, you know, said, like, we're not going to stand for this. Guess what, we're against racism too, but what you're doing is ridiculous.
And the hard part is it takes a lot of mental energy.
It takes a lot of time.
Even if you feel like you're not going to be canceled or you're staying on the safe side, it takes a lot of time to phrase things in exactly the right way and
to respond to everything people say.
But I think that the more people speak up, from all political persuasions, from walks of life,
then the easier it is to move forward.
Since we've been talking about love, and last time I talked to you about the meaning of life a little bit, here, it's a weird question to ask a computer scientist, but has love for other human beings, for things, for the world around you, played an important role in your life? You know, it's easy for a world-class computer scientist, you could even call yourself like a physicist, everything, to be lost in the books and lose the connection to other humans. Has love for humans played an important role?
I love my kids. I love my wife. I love my parents.
You know, I am probably not different from most people in loving their families.
And in that being very important in my life,
now I should remind you that I am a theoretical computer
scientist.
If you're looking for deep insight about the nature of love,
you're probably looking in the wrong place to ask me.
But sure, it's been important.
But is there something from a computer science perspective
to be said about love?
Is there, or is that even beyond
into the realm of consciousness?
There was this great cartoon, I think it was one of the classic xkcds, where it shows like a heart. And it's like squaring the heart, taking the Fourier transform of the heart,
integrating the heart, you know,
each thing, and then it says,
my normal approach is useless here.
I'm so glad I asked this question.
I think there's no better way to end this.
I hope we get a chance to talk again.
This is an amazing, cool experiment to do it outside.
And I'm really glad you made it out.
Yeah, well, I appreciate it a lot. It's been a pleasure. And I'm glad you were able to come out to
Austin. Thanks. Thanks for listening to this conversation with Scott Aaronson. And thank you to our sponsors: Eight Sleep, SimpliSafe, ExpressVPN, and BetterHelp. Please check out these sponsors in the description to get a discount and to support this podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman. And now let me leave you with some words from Scott Aaronson that I also gave to you in the
introduction,
which is if you always win, then you're probably doing something wrong.
Thank you for listening and for putting up with the intro and outro in this strange room
in the middle of nowhere, in this very strange chaos of a time we're all in.
And I very much hope to see you next time
in many more ways than one.