Lex Fridman Podcast - #72 – Scott Aaronson: Quantum Computing
Episode Date: February 18, 2020
Scott Aaronson is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching "Ride Home" in your podcast app. Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
05:07 - Role of philosophy in science
29:27 - What is a quantum computer?
41:12 - Quantum decoherence (noise in quantum information)
49:22 - Quantum computer engineering challenges
51:00 - Moore's Law
56:33 - Quantum supremacy
1:12:18 - Using quantum computers to break cryptography
1:17:11 - Practical application of quantum computers
1:22:18 - Quantum machine learning, questionable claims, and cautious optimism
1:30:53 - Meaning of life
Transcript
The following is a conversation with Scott Aaronson, a professor at UT Austin, director of its
Quantum Information Center, and previously a professor at MIT.
His research interests center around the capabilities and limits of quantum computers and computational
complexity theory more generally.
He is an excellent writer and one of my favorite communicators of computer science in the
world.
We only had about an hour and
a half of this conversation so I decided to focus on quantum computing, but I can see
us talking again in the future on this podcast at some point about computational complexity
theory and all the complexity classes that Scott catalogs in his amazing Complexity Zoo
wiki. As a quick aside, based on questions and comments I've received, my goal with these conversations
is to try to be in the background without ego and do three things.
One, let the guests shine and try to discover together the most beautiful insights in their
work and in their mind.
Two, try to play devil's advocate, just enough to provide a creative tension in exploring
ideas in conversation.
Three, to ask very basic questions about terminology, about concepts, about ideas.
Many of the topics we talk about in the podcast I have been studying for years, as a grad student,
as a researcher, and generally as a curious human who loves to read.
But frankly, I see myself in these conversations as the main character of one of my favorite
novels, by Dostoevsky, called The Idiot.
I enjoy playing dumb.
Clearly it comes naturally.
But the basic questions don't come from my ignorance of the subject, but from an instinct
that the fundamentals are simple.
And if we linger on them from almost a naive perspective, we can draw an insightful thread
from computer science, to neuroscience, to physics, to philosophy, and to artificial intelligence.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
give it five stars on Apple Podcasts,
support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now and never any ads in the middle that can
break the flow of the conversation. I hope that works for you and doesn't hurt the listening
experience. Quick summary of the ads, two supporters today.
First, get CashApp and use the code LexPodcast.
Second, listen to the TechMeme Ride Home Podcast for Tech News.
Search Ride Home, two words, in your podcast app.
This show is presented by CashApp, the number one finance app in the App Store.
When you get it, use code Lex Podcast.
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with
as little as $1.
Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC.
Since CashApp does fractional share trading,
let me mention that the order execution algorithm
that works behind the scenes to create the abstraction
of fractional orders is an algorithmic marvel.
So big props to the Cash App engineers
for solving a hard problem that in the end
provides an easy interface that takes a step up
to the next layer of abstraction over the stock market,
making trading more accessible for new investors and diversification much easier.
So again, if you get Cash App from the App Store or Google Play and use the code LexPodcast,
you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance
robotics and STEM education for young people around the world.
This episode is also supported by the TechMeme Ride Home Podcast.
It's a technology podcast I've been listening to for a while and really enjoying.
It goes straight to the point, gives you the tech news you need to know and provides minimal
but essential context.
It's released every day by 5pm eastern, and is only about 15-20 minutes long.
For fun, I like building apps on smartphones, mostly on Android, so I'm always a little
curious about new flagship phones that come out.
I saw that Samsung announced the new Galaxy S20 and of course right away,
Techmeme Ride Home has a new episode that summarizes all that I needed to know
about this new device. It also started to do weekend bonus episodes with
interviews of people like AOL founder Steve Case on investing, and Gary
Marcus on AI, whom I've also interviewed
on this podcast.
You can find the Techmeme Ride Home podcast if you search your podcast app for Ride Home,
two words.
Then subscribe, enjoy, and keep up to date with the latest tech news.
And now, here's my conversation with Scott Aaronson.
I sometimes get criticism from a listener here and there that while having a conversation
with a world-class mathematician, physicist, neurobiologist, aerospace engineer, or theoretical
computer scientist like yourself, I waste time by asking philosophical questions about free
will, consciousness, mortality, love, nature of truth,
superintelligence, whether time travel is possible, whether spacetime is emergent or fundamental,
even the crazy questions like whether aliens exist, what their language might look like, what their math might look like,
whether math is invented or discovered, and of course whether we're in a simulation or not. So I stick with it. I try to dance back and forth from the deep
technical to the philosophical. So I've done that quite a bit. So you're a world-class
computer scientist, and yet you've written about this very point, that philosophy is important
for experts in any technical discipline,
though they somehow seem to avoid this.
So I thought it'd be really interesting
to talk to you about this point.
Why should we computer scientists, mathematicians,
physicists care about philosophy, do you think?
Well, I would reframe the question a little bit.
I mean, philosophy, almost by definition,
is the subject that's concerned with the biggest
questions that you could possibly ask. So the ones you mentioned, are we living in a simulation?
Are we alone in the universe? How should we even think about such questions? Is the future determined?
And what do we even mean by it being determined,
why are we alive at the time we are,
and not at some other time.
And when you contemplate the enormity of those questions,
I think you could ask, well, then why be concerned
with anything else?
Why not spend your whole life on those questions? You know, I think,
in some sense, that is the right way to phrase the question. And, you know, actually,
what we learned, you know, I mean, throughout history, but really starting with the
scientific revolution, with Galileo and so on, is that there is a good reason to focus on narrower questions, more technical,
mathematical or empirical questions, and that is that you can actually make progress
on them.
You can actually often answer them, and sometimes they actually tell you something about
the philosophical questions that sort of maybe motivated your curiosity as a child.
They don't necessarily resolve the philosophical questions,
but sometimes they reframe your whole understanding of them.
For me, philosophy is just the thing that you have in the background
from the very beginning that you want to...
These are the reasons why you went into intellectual life in the first place,
at least the reasons why I did, right? But, you know, math and science are tools that we have
for, you know, actually making progress. And, you know, hopefully even, you know,
changing our understanding of these philosophical questions, sometimes even more than philosophy itself does.
Why do you think computer scientists avoid these questions,
or run away from them a little bit,
at least in the technical scientific discourse?
Well, I'm not sure if they do so
more than any other scientists, though.
I mean,
Alan Turing was famously interested,
and one of his two most famous papers was in a philosophy journal, Mind.
You know, it was the one where he proposed the Turing Test.
He took Wittgenstein's course at Cambridge, you know, argued with him.
I just recently learned that little bit, and it's actually fascinating.
I was trying to look for resources to try
to understand what the sources of disagreement and debate between Wittgenstein and Turing
were. That's interesting that these two minds have somehow met in the arc of history.
Yeah, well, the transcript, you know, of the course, which was in 1939, right, is one
of the more fascinating documents that I've ever read,
because, you know, Wittgenstein is trying to say,
well, all of these formal systems are just
complete irrelevancies, right?
If a formal system is inconsistent, who cares,
you know, why does that matter in real life, right?
And Turing is saying, well, look, if you use an inconsistent formal system to design
a bridge, the bridge may collapse.
So Turing, in some sense, is thinking decades ahead, I think, of where Wittgenstein is, to
where the formal systems are actually going to be used in computers to actually do things
in the world.
And it's interesting that Turing actually dropped the course halfway through. Why? Because
he had to go to Bletchley Park and work on something of more immediate importance.
Fascinating.
He could take a step from philosophy, like the biggest possible step, to actual
engineering with actual real impact.
Yeah, and I would say more generally, right?
A lot of scientists are interested in philosophy,
but they're also busy, right?
And they have a lot on their plate
and there are a lot of very concrete questions
that are already not answered,
but look like they might be answerable.
And so then you could say, well, then why break your brain over these metaphysically unanswerable
questions when there are all of these answerable ones instead?
So I think, for me, I enjoy talking about philosophy.
I even go to philosophy conferences sometimes,
such as the FQXI conferences.
I enjoy interacting with philosophers.
I would not want to be a professional philosopher
because I like being in a field where I feel like,
when I get too confused about the sort of eternal questions,
I can actually make progress on something.
Can you maybe lean on that for just a little longer?
Yeah.
What do you think is the difference?
So the corollary of the criticism
that I mentioned previously,
of why ask philosophical questions
of a mathematician, is that if you want to ask philosophical questions,
then invite a real philosopher on and ask them.
So what's the difference between the way a computer scientist and mathematician ponders
a philosophical question and a philosopher ponders a philosophical question?
Well, I mean, a lot of it just depends on the individual, right?
It's hard to make generalizations about entire fields. But I think if we try to stereotype, we would say that scientists
very often will be less careful in their use of words. Philosophers are really experts in that.
When I talk to them, they will just pounce if I,
you know, use the wrong phrase for something. That's a very nice word. You could say sticklers.
Sticklers, yeah, or, you know, they will sort of interrogate my word choices, let's say,
to a much greater extent than scientists would, right? And scientists, you know, will often,
if you ask them about a philosophical problem,
like the hard problem of consciousness or free will
or whatever, they will try to relate it back
to recent research, research about neurobiology
or the best of all is research
that they personally are involved with.
And of course, they will want to talk about that
and it is what they will think of.
And of course, you could have an argument that maybe
it's all interesting as far as it goes,
but maybe none of it touches the philosophical question.
But science, at least, as I said,
does tell us concrete things.
And even if a deep dive into neurobiology will not answer the hard problem of consciousness,
maybe it can take us about as far as we can get toward expanding our minds about it,
toward thinking about it in a different way. Well, I mean, I think neurobiology can do that, but, you know, with these
profound philosophical questions, I mean, also art and literature do that, right?
They're all different ways of trying to approach these questions, for which
we don't even know really what an answer would look like,
and yet somehow we can't help but keep returning to the questions.
And you have a kind of beautiful,
mathematical way of discussing this, with the idea of Q prime.
Oh, right.
You write that usually the only way to make progress
on the big questions, like the philosophical questions
we're talking about now, is to pick off smaller sub-questions.
Ideally, sub-questions that you can attack
using math and empirical observation, or both. You define the idea of a Q prime. So given an
unanswerable philosophical riddle Q, replace it with a "merely," in quotes,
scientific or mathematical question Q prime, which captures part of what
people have wanted to know when they first asked Q. Yes. Then, with luck, one solves Q prime.
So you describe some examples of such Q prime sub-questions in your long essay titled
Why Philosophers Should Care About Computational Complexity.
You catalog the various Q primes on which you think theoretical computer science
has made progress.
Can you mention a few favorites? Is there any that pops to mind, for the audience?
Well, yes.
So, I mean, I would say some of the most famous examples in history of that sort of replacement
were, you know, I mean, to go back to Alan Turing, right, what he did in his computing
machinery and intelligence paper was exactly, you know,
he explicitly started with the question, can machines think? And then he said,
sorry, I think that question is too meaningless, but here's a different question:
could you program a computer so that you couldn't tell the difference between it
and a human, right? So in the very first two sentences, he in fact transformed this to a Q prime.
It precisely does that.
Or we could look at Gödel, where you had these philosophers arguing for centuries
about the limits of mathematical reasoning, the limits of formal systems, and then by the early 20th century,
logicians, starting with Frege, Russell, and then most spectacularly Gödel, managed to reframe those questions: we have these formal systems, they have these definite rules.
Are there questions that we can phrase within the rules of these systems that are not provable within the rules of the systems, and can we prove
that fact?
Right?
And so that would be another example.
You know, I had this essay called The Ghost in the Quantum
Turing Machine.
That's one of the crazier things I've written.
But I tried to do something or to advocate doing something
similar there for free will, where
instead of talking about, is free will real, where we get hung up on the meaning of what exactly
we mean by freedom,
do we mean compatibilist free will, libertarian free will?
What do these things mean?
I suggested just asking the question, how well, in principle, consistently with the laws
of physics, could a person's behavior be predicted, without, let's say, destroying the person's
brain, taking it apart in the process of trying to predict them?
And actually, asking that question gets you into all sorts of meaty and interesting
issues, you know, issues of what is the computational substrate of the brain, you know, or can you
understand the brain, you know, just at the sort of level of the neurons, at sort
of the abstraction of a neural network, or do you need to go deeper, to the molecular
level, ultimately even to the quantum level.
And of course, that would put limits on predictability
if you did.
So you need to reduce the mind to a computational device,
formalize it so that you can make predictions
about whether you could predict it.
Well, if you were trying to predict a person,
then presumably you would need some model of their brain, right?
And now the question becomes one of how accurate can such a model become?
Can you make a model that will be accurate enough to really seriously threaten people's
sense of free will, you know, not just metaphysically, but like really I have written in this envelope
what you were going to say next.
I'm trying to find the right term here.
So also the level of abstraction has to be right.
So if you're accurate somehow at the quantum level, that may not be convincing to
us at the human level.
Well, right, but the question is what accuracy at the level of the underlying mechanisms do you need
in order to predict the behavior? At the end of the day, the test is just can you foresee what the
person is going to do? In discussions of free will, it seems like both sides want to very quickly dismiss that question as irrelevant.
Well, to me, it's totally relevant, because if someone says, well, a Laplace demon that
knew the complete state of the universe could predict everything you're going to do, therefore
you don't have free will, it doesn't trouble me that much, because, well, I've never met such a demon.
We even have some reasons to think it could not exist as part of our world. It's only an
abstraction, a thought experiment. On the other hand, if someone said, well, I have this brain
scanning machine, you step into it, and then every paper that you will
ever write, it will write; every thought that you will have, even right now about the machine
itself, it will foresee.
Well, if you can actually demonstrate that, then I think that threatens my internal sense
of having free will in a much more visceral way.
But now, you notice that we're asking a much more empirical question.
We're asking, is such a machine possible or isn't it?
We're asking, if it's not possible, then what in the laws of physics or what about the
behavior of the brain prevents it from existing?
So if you could philosophize a little bit within this empirical question, where do you think, by which mechanism, would enter the
possibility that we can't predict the outcome,
so that there would be something that would be akin to free will?
Yeah.
Well, you could say the sort of obvious possibility, which was, you know,
recognized by Eddington and many others almost as soon as quantum
mechanics was discovered
in the 1920s, was that if, let's say, a sodium ion channel in the brain, its behavior is chaotic,
it's governed by these Hodgkin-Huxley equations in neuroscience, right, which are
differential equations that have a stochastic component. Now,
this ultimately governs, let's say,
whether a neuron will fire or not fire.
That's the basic chemical process, or electrical process, by which signals are sent in the brain.
Exactly. Exactly. And so you could ask, well, where does the randomness in the process,
what neuroscientists would treat as randomness,
where does it come from?
Ultimately, it's thermal noise.
Where does thermal noise come from?
But ultimately, there are some quantum mechanical events
at the molecular level that
are getting sort of chaotically amplified by a sort of butterfly effect. And so even if you
knew the complete quantum state of someone's brain, at best, you could predict the probabilities
that they would do one thing or do another thing. I think that part is actually relatively uncontroversial.
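To make that "probabilities only" point concrete, here is a minimal sketch, not the Hodgkin-Huxley equations Scott mentions but a much cruder noisy threshold neuron, assumed purely for illustration; the function name and all the numbers are made up.

```python
# A toy stand-in for the stochastic neuron firing Scott describes: a drive
# plus Gaussian "thermal" noise accumulates toward a threshold. With noise,
# the best you can state in advance is a firing probability, not a certainty.
import random

def noisy_neuron_fires(drive=0.009, noise_std=0.01, threshold=1.0, steps=100):
    """Return True if the accumulated input crosses threshold within `steps`."""
    v = 0.0  # membrane potential, arbitrary units
    for _ in range(steps):
        v += drive + random.gauss(0.0, noise_std)  # deterministic drive + noise
        if v >= threshold:
            return True
    return False

trials = 10_000
p_fire = sum(noisy_neuron_fires() for _ in range(trials)) / trials
print(f"estimated firing probability over {trials} trials: {p_fire:.3f}")
```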
The controversial question is whether any of it matters for the sort of philosophical
questions that we care about, because you could say if all it's doing is just injecting
some randomness into an otherwise completely mechanistic process, well, then who cares?
More concretely, if you could build a machine that, you know, could just calculate
even just the probabilities of all of the possible things that you would do, right?
And, you know, if of all the things that it said you had a 10% chance of doing, you
did exactly a tenth of them, you know, and so on.
Yeah, that somehow also takes away the feeling of free will.
Exactly.
I mean, to me, it seems essentially just as bad
as if the machine deterministically predicted you.
It seems, you know, hardly different from that.
So then, but a more subtle question
is, could you even learn enough about someone's brain
to do that?
Because another central fact about quantum mechanics
is that making a measurement on a quantum state
is an inherently destructive operation.
So if I want to measure the position of a particle,
well, before I measured it, it had a superposition
over many different positions.
As soon as I measure, I localize it.
So now I know the position, but I've also fundamentally changed the state.
And so you could say, well, maybe in trying to build a model of someone's brain
that was accurate enough to actually make, let's say, even well-calibrated,
probabilistic predictions of their future
behavior.
Maybe you would have to make measurements that were just so accurate that you would just
fundamentally alter their brain, okay?
Or maybe not.
Maybe it would suffice to just make some nanorobots that just measured some
sort of much larger-scale, macroscopic behavior: what is this neuron doing,
what is that neuron doing? Maybe that would be enough. But now, what I claim is that we're now
asking a question in which it is possible to envision what progress on it would look like.
Yeah, but just as you said, that question may be
slightly detached from the philosophical question
in the sense if consciousness somehow has a role
to the experience of free will.
Because ultimately when we're talking about free will,
we're also talking about not just the predictability
of our actions, but somehow the experience
of that predictability.
Yeah, well, I mean, a lot of philosophical questions ultimately,
like, feed back to the hard problem of consciousness,
you know, as much as you can try to sort of talk around it
or not, right? And, you know, there is a reason
why people try to talk around it, which is that, you know,
Democritus talked about the hard problem of consciousness,
you know, in 400
BC, in terms that would be totally recognizable to us today, right?
And it's really not clear if there's been progress since, or what progress could possibly
consist of.
Is there a Q prime type of sub-question that could help us get at consciousness,
get at something about consciousness?
Well, I mean, there is the whole question of AI, right?
Can you build a human level or superhuman level AI and can it work in a completely
different substrate from the brain?
I mean, of course, that was Alan Turing's point.
Even if that was done, maybe people would still argue about the hard problem of consciousness.
And yet, my claim is a little different.
My claim is that in a world where there were human level AIs, where we'd been even overtaken
by such AIs, the entire discussion of the hard problem of consciousness would have a
different character.
It would take place in different terms in such a world,
even if we hadn't answered the question.
And my claim about free will would be similar, right?
If this prediction machine that I was talking about
could actually be built,
well now the entire discussion of the free will
is sort of transformed by that.
Even if in some sense the metaphysical question hasn't been answered.
Yeah, exactly.
Transformed fundamentally, because say that machine does tell you that it can
predict perfectly.
And yet there is this deep experience of free will.
And then that changes the question completely.
Yeah.
And it starts actually getting to the question of
the AGI, the Turing questions:
the demonstration of free will,
the demonstration of intelligence,
the demonstration of consciousness,
does that equal consciousness, intelligence, and free will?
But see, Lex, if every time I was contemplating a decision,
this machine had printed out an envelope
where I could open it and see that it knew my decision,
I think that actually would change my subjective experience
of making decisions.
Just the knowledge would
change your subjective experience?
Well, the knowledge that this machine had predicted
everything I would do, I mean, it might drive me
completely insane, right?
But at any rate, it would change my experience to not just discuss such a machine as
a thought experiment, but to actually see it. Yeah. I mean, at that point,
you could say, why not simply call this machine a second instantiation of me and be done with it, right?
I mean, why even privilege the original me
over this perfect duplicate that exists in the machine?
Yeah.
Or there could be a religious experience with it, too.
It's kind of what God, throughout the generations,
is supposed to have, that God kind of represents
that perfect machine, is able to... I guess, actually, I don't
even know what the religious interpretations of free will are. So if God knows perfectly
everything, in the various religions, where does free will fit into that? Do you know
that?
That has been one of the big things that theologians have argued about for thousands of years.
I am not a theologian, so maybe I shouldn't go there.
So there's not a clear answer in a book like, I mean, this is, you know, the Calvinists
debated this, this has been, you know, I mean, different religious movements have taken
different positions on that question.
But that is how they think about it.
Meanwhile, a large part of what
animates theoretical computer science,
you can say, we are asking, what are the ultimate limits
of what you can know or calculate or figure out
by entities that you can actually
build in the physical world, right?
And if I were trying to explain it to a theologian,
maybe I would say, we are studying,
to what extent, gods can be made manifest
in the physical world.
I'm not sure my colleagues would like that.
So let's talk about quantum computers for a while.
Yeah, sure, sure.
As you've said, quantum computing, at least in the 1990s, was a profound story at the
intersection of computer science, physics, engineering, math, and philosophy.
So there's this broad and deep aspect to quantum computing that represents more than just
the quantum computer.
But can we start at the very basics?
What is quantum computing?
Yeah.
So it's a proposal for a new type of computation,
or let's say a new way to harness nature to do computation
that is based on the principles of quantum mechanics.
Now the principles of quantum mechanics
have been in place since 1926.
They haven't changed. What's new is how we want to use them. What does quantum
mechanics say about the world? Physicists, I think, over the generations, convinced people that
that is an unbelievably complicated question, and to just give up on trying to understand it.
I can let you in, not being a physicist myself,
on a secret, which is that it becomes a lot simpler
if you do what we do in quantum information theory
and sort of take the physics out of it.
So the way that we think about quantum mechanics is sort of as a generalization
of the rules of probability themselves.
So you might say there's a, you know,
there was a 30% chance that it was going to snow today
or something.
You would never say that there was a negative 30% chance,
right?
That would be nonsense.
Much less would you say that there was, you know,
an i percent chance, you know,
a square-root-of-minus-one percent chance.
Now, the central discovery that sort of quantum mechanics made is that fundamentally the world
is described by, let's say, the possibilities for what a system could be doing are described
using numbers called amplitudes, which are like probabilities in some ways, but
they are not probabilities.
For one thing, they can be positive or negative.
In fact, they can even be complex numbers.
Okay, and if you've heard of a quantum superposition, this just means some state of affairs where
you assign an amplitude, one of these complex numbers, to every possible
configuration that you could see a system in on measuring it.
So for example, you might say that an electron has some amplitude for being here and some
other amplitude for being there.
Right?
Now, if you look to see where it is, you will localize it. You will force the amplitudes to be converted into probabilities; that happens by taking their
squared absolute values.
Then you can say either the electron will be here or it will be there.
Knowing the amplitudes, you can predict at least the probabilities that you will see each possible
outcome. But while a system is isolated from the whole rest of the universe, the rest of its
environment, the amplitudes can change in time by rules that are different from the normal rules
of probability and that are alien to our everyday experience.
So any time anyone ever tells you anything about the weirdness of the quantum world, you
know, or assuming that they're not lying to you, right?
They are telling you, you know, yet another consequence of nature being described by these
amplitudes.
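As a minimal sketch of the rule Scott just stated, here is how amplitudes become probabilities by taking squared absolute values; the particular amplitudes are arbitrary, chosen only so they normalize to one.

```python
# Amplitudes are complex numbers attached to each possible outcome; measuring
# converts them to probabilities via squared absolute values (the Born rule).
import numpy as np

amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.8j])   # "here" vs. "there"
probabilities = np.abs(amplitudes) ** 2            # |amplitude|^2 per outcome
assert np.isclose(probabilities.sum(), 1.0)        # a valid state is normalized

print(probabilities)                               # [0.36 0.64]
print(np.random.choice(["here", "there"], p=probabilities))  # one simulated measurement
```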
So most famously, what amplitudes can do is that they can interfere with each other.
Okay, so in the famous double slit experiment, what happens is that you shoot a particle
like an electron, let's say, at a screen with two slits in it, and you find that there
are, you know, on a second screen, now there are certain places where that electron will
never end up, you know, after it passes through the first screen.
And yet, if I close off one of the slits, then the electron can appear in that place.
Okay?
So by decreasing the number of paths that the electron could take to get somewhere, you
can increase the chance that it gets there.
Okay.
Now, how is that possible?
Well, it's because, you know, as we would say, now the electron has a superposition state.
Okay, it has some amplitude for reaching this point by going through the first slit.
It has some other amplitude for reaching it by going through the second slit.
But now, if one amplitude is positive and the other one is negative, then, you know,
I have to add them all up, right? I have to add the amplitudes for every path that the electron could have taken to reach
this point. And those amplitudes, if they're pointing in different directions, they can cancel
each other out. That would mean the total amplitude is zero, and the thing never happens at all.
If I close off one of the possibilities, then the amplitude is positive or it's negative,
and now the thing can happen. Okay, so that is sort of the one trick of quantum mechanics.
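A tiny numerical sketch of that one trick; the amplitude values here are illustrative only, but they show how adding the two paths can cancel, and how blocking a slit can raise the probability.

```python
# Two-slit logic in one line of arithmetic: add the amplitudes for every path,
# then square the total. Opposite-sign amplitudes cancel, so closing a slit
# can *increase* the chance the electron arrives at this point.
amp_via_slit_1 = +0.5
amp_via_slit_2 = -0.5

p_both_slits_open = abs(amp_via_slit_1 + amp_via_slit_2) ** 2  # 0.0: destructive interference
p_slit_2_blocked = abs(amp_via_slit_1) ** 2                    # 0.25: now it can arrive

print(p_both_slits_open, p_slit_2_blocked)
```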
And now I can tell you what a quantum computer is. Okay, a quantum computer is a computer
that tries to exploit exactly these phenomena, superposition, amplitudes, and interference,
in order to solve certain problems much faster
than we know how to solve them otherwise.
So the basic building block of a quantum computer
is what we call a quantum bit, or a qubit.
That just means a bit that has some amplitude for being zero
and some other amplitude for being one.
So it's a superposition of zero and one states,
right? But now the key point is that if I've got, let's say, a thousand qubits, the rules of
quantum mechanics are completely unequivocal that I do not
just need amplitudes for each qubit separately, okay? In general, I need an amplitude for every possible setting of all thousand of
those bits. So what that means is 2 to the 1000 power amplitudes. If I had to write
those down, let's say in the memory of a conventional computer, if I had to write down 2 to the
1000 complex numbers, that would not fit within the entire observable universe. Okay. And yet, you know, quantum mechanics is unequivocal that if these qubits
can all interact with each other, then in some sense, I need two to the 1000
parameters, you know, amplitudes, to describe what is going on. Now, I can tell you where all the popular articles about
quantum computing go off the rails. It's that they sort of say what I just said, and then they say, oh, so the way a quantum
computer works is just by trying every possible answer in parallel. That sounds too good to be
true, and unfortunately, it is too good to be true. The problem is, I could make a superposition over every possible
answer to my problem, even if there are two to the 1,000 of them. I can easily do that.
The trouble is, for a computer to be useful, at some point you've got to look at it and
see an output. If I just measure a superposition over every possible answer, then the rules of
quantum mechanics tell me that all I'll see will be a random answer.
If I just wanted a random answer, well, I could have picked one myself with a lot less
trouble.
Right?
So the entire trick with quantum computing, with every algorithm for a quantum computer,
is that you try to choreograph a pattern of interference of amplitudes.
And you try to do it so that for each wrong answer, some of the paths leading to that wrong answer
have positive amplitudes, and others have negative amplitudes. So on the whole, they cancel each
other out. Whereas all the paths leading to the right answer should reinforce each other,
you know, should have amplitudes pointing the same direction.
So the design of algorithms in this space
is the choreography of the interference.
Precisely. That's precisely what it is.
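Two miniature sketches of what was just described, with illustrative numbers: first, why writing down a general thousand-qubit state classically is hopeless; second, the simplest possible piece of interference choreography, where applying a Hadamard gate twice makes the paths to the "wrong" answer cancel exactly.

```python
import numpy as np

# (1) The bookkeeping problem: a general n-qubit state needs 2**n amplitudes.
n = 1000
amplitudes_needed = 2 ** n
print(f"a general {n}-qubit state has 2^{n} amplitudes, "
      f"a number with {len(str(amplitudes_needed))} digits")

# (2) Choreographed interference in miniature: Hadamard twice returns |0>
# with certainty, because the two paths into |1> have opposite amplitudes.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)      # Hadamard gate
state = np.array([1, 0], dtype=complex)   # start in |0>
state = H @ state                         # superposition: amplitudes (1/sqrt(2), 1/sqrt(2))
state = H @ state                         # paths to |1> cancel, paths to |0> reinforce
print(np.abs(state) ** 2)                 # ~[1. 0.]
```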
Can we take a brief step back and you mentioned information?
Yes.
So in which part of this beautiful picture
that you've painted is information contained?
Oh, well, information is at the core of everything that we've been talking about, right?
I mean, the bit is, you know, the basic unit of information since, you know, Claude
Shannon's paper in 1948. You know, of course, people had the concept
even before that; he popularized the name, right?
But a bit is zero or one.
That's right. And what we would say is that the basic unit of quantum information is the qubit,
which is any object that can be maintained and manipulated in a superposition of zero
and one states. Now, you know, sometimes people ask, well, but what is a qubit physically, right? And there are all these different, you know,
proposals that are being pursued in parallel for how you implement
qubits. There is, you know, superconducting quantum computing that was in the
news recently because of Google's quantum supremacy experiment, right?
Where you would have some little coils where a current can
flow through them in two different energy states, one
representing a zero, another representing the one.
And if you cool these coils to just slightly above absolute zero,
like 100th of a degree, then they superconduct.
And then the current can actually be in a superposition
of the two different states.
So that's one kind of qubit.
Another kind would be, you know, just an individual atomic nucleus, right?
It has a spin.
It could be spinning clockwise.
It could be spinning counterclockwise, or it could be in a superposition of the two spin
states.
That is another qubit.
But see, just like in the classical world,
you could be a virtuoso programmer
without having any idea of what a transistor is,
or how the bits are physically represented inside the machine,
even that the machine uses electricity.
You just care about the logic.
It's sort of the same with quantum computing.
Qubits could be realized by many, many different quantum systems,
and yet all of those systems will lead to the same logic,
you know, the logic of qubits,
and how you measure them, how you change them over time.
And so, you know, the subject of, you know,
how qubits behave and what you can do with qubits,
that is quantum information.
So just to linger on that,
so the physical design implementation of a qubit
does not interfere with the next level of abstraction
that you can program over it.
At least that's the idea of it, is that right?
Well, to be honest with you,
today they do interfere with each other.
That's because all the quantum computers
we can build today are very noisy, right?
And so sort of the qubits are very far from perfect.
And so the lower level sort of does affect the higher levels
and we sort of have to think about all of them at once.
Okay, but eventually, where we hope to get is to what are called error corrected quantum
computers.
Where the qubits really do behave like perfect abstract qubits for as long as we want them
to.
And in that future, you know, a future that we can already
sort of prove theorems about or think about today, the logic
of it really does become decoupled from the hardware.
So if noise is currently like the biggest problem for quantum computing, and the dream
is error-corrected quantum computers, can you maybe describe what it means for
there to be noise in the system?
Absolutely.
So, yes, the problem is even a little more specific than noise.
So the fundamental problem if you're trying to actually build a quantum computer of any
appreciable size is something called decoherence.
This was recognized from the very beginning when people first started thinking about this in the 1990s.
Now, what decoherence means is sort of the unwanted interaction between your qubits,
the state of your quantum computer and the external environment.
And why is that such a problem?
I talked before about how when you measure a quantum system.
So let's say if I measure a qubit that's in a superposition of zero and one states to
ask it, you know, are you zero or are you one?
Well, now I force it to make up its mind, right?
And now, probabilistically, it chooses one or the other.
And now, you know, it's no longer a superposition.
There are no longer amplitudes.
There's just some probability that I get a zero, and some that I get a one. Now, the trouble is that it doesn't have to be me who's looking.
In fact, it doesn't have to be any conscious entity. Any kind of interaction with the external
world that leaks out the information about whether this qubit was a zero or a one,
sort of that causes the zeroness or the oneness of the qubit to be recorded in the radiation
in the room, and the molecules of the air, and the wires that are connected to my device.
Any of that, as soon as the information leaks out, it is as if that qubit
has been measured.
It is, you know, the state has now collapsed.
Another way to say it is that it's become entangled
with its environment.
But, you know, from the perspective of someone
who's just looking at this qubit, it is as though
it has lost its quantum state.
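A minimal sketch of that in the density-matrix language, assuming a simple dephasing model; the `leak` knob here is only illustrative. The probabilities stay the same, but the interference terms that make the qubit quantum are wiped out.

```python
# Decoherence in miniature: start with the superposition (|0> + |1>)/sqrt(2)
# and let "which way" information leak out. The diagonal (the probabilities)
# is untouched, but the off-diagonal interference terms are suppressed, so
# the qubit becomes indistinguishable from a classical 50/50 coin.
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())          # density matrix of the qubit

def dephase(rho, leak):
    """Suppress coherences by a factor (1 - leak); leak=1 means fully measured."""
    out = rho.copy()
    out[0, 1] *= 1 - leak
    out[1, 0] *= 1 - leak
    return out

for leak in (0.0, 0.5, 1.0):
    print(f"leak = {leak}")
    print(np.round(dephase(rho, leak).real, 3))
```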
And so what this means is that if I want to do a quantum computation, I have to keep the
qubits sort of fanatically well isolated from their environment.
But then at the same time, they can't be perfectly isolated, because I need to tell them what
to do.
I need to make them interact with each other, for one thing, and not only that, but in a precisely
choreographed way.
And that is such a staggering problem, right?
How do I isolate these qubits from the whole universe, but then also tell them exactly
what to do?
I mean, you know, there were distinguished physicists and computer scientists in the
90s who said, this is fundamentally impossible.
You know, the laws of physics will just never let you control qubits to the degree of
accuracy that you're talking about.
Now what changed the views of most of us was a profound discovery in the mid-to-late 90s,
which was called the theory of quantum error correction and quantum fault tolerance.
And the upshot of that theory is that if I want to build
a reliable quantum computer and scale it up
to an arbitrary number of as many qubits as I want
and doing as much on them as I want,
I do not actually have to get the qubits perfectly isolated
from their environment.
It is enough to get them really, really, really well isolated. And even if every qubit is sort of leaking its state into the environment
at some rate, as long as that rate is low enough, I can sort of encode the information that I care
about in very clever ways across the collective states of multiple qubits,
in such a way that even if a small percentage of my qubits leak,
well, I'm constantly monitoring them to see if that leak happened. I can detect it and I can
correct it. I can recover the information I care about from the remaining qubits.
And so you can build a reliable quantum computer even out
of unreliable parts.
Now, in some sense, that discovery
is what set the engineering agenda for quantum computing
research from the 1990s until the present.
The goal has been to engineer qubits that are not perfectly reliable, but reliable enough that you can then use these error-correcting codes to have them simulate qubits that are even more reliable than they are.
The error correction becomes a net win rather than a net loss, right?
And then once you reach that sort of crossover point, then your simulated qubits
could in turn simulate qubits that are even more reliable,
and so on, until, you know,
effectively you have arbitrarily reliable qubits.
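For intuition only, here is the classical ancestor of this idea, a three-bit repetition code with majority voting. Real quantum error correction (for example the surface code) is subtler, since quantum states can't simply be copied and phase errors must be handled too, but the "redundancy plus constant monitoring" logic is the same.

```python
# The classical intuition only: protect one logical bit by storing it in three
# noisy physical bits and taking a majority vote. The encoded error rate drops
# from ~p to ~3p^2, which is a win whenever p is small enough -- the same
# "below threshold, redundancy helps" logic that quantum fault tolerance formalizes.
import random

def noisy_copy(bit, p):
    return bit ^ (random.random() < p)     # flip the bit with probability p

def decode_majority(bit, p, copies=3):
    received = [noisy_copy(bit, p) for _ in range(copies)]
    return int(sum(received) > copies // 2)

p, trials = 0.05, 100_000
raw_errors = sum(noisy_copy(1, p) != 1 for _ in range(trials))
encoded_errors = sum(decode_majority(1, p) != 1 for _ in range(trials))
print(f"raw error rate:     {raw_errors / trials:.4f}")      # about 0.05
print(f"encoded error rate: {encoded_errors / trials:.4f}")  # about 0.007
```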
So long story short, we are not at that break even point yet.
We are a hell of a lot closer than we were
when people started doing this in the 90s,
like orders of magnitude closer.
But the key ingredient there is the more qubits,
the better, because...
Well, the more qubits, the larger the computation
you can do, right?
I mean, qubits are what constitute
the memory of your quantum computer, right?
But also for, sorry, for the error-correcting mechanism.
Ah, yes.
So, the way I would say it is that
error correction imposes an overhead in the number of qubits.
And that is actually one of the biggest practical problems with building a scalable quantum computer. If you look at the
error-correcting codes, at least the ones that we know about today, and you look at, you know,
what would it take to actually use a quantum computer to,
you know,
let's say, hack your credit card number, which is, you know,
the most famous application people talk about,
right, let's say to factor huge numbers and thereby break
the RSA cryptosystem.
Well, what that would take would be several thousand logical qubits,
but now with the known error-correcting codes,
each of those logical qubits would need to be encoded itself using thousands of physical qubits.
So at that point, you're talking about millions of physical qubits. And in some sense, that is the reason why quantum computers are not breaking cryptography already. It's because of these immense overheads involved.
So that overhead is additive or multiplicative?
Well, it's multiplicative. I mean, it's like you take the number of logical qubits that
you need in your abstract quantum circuit, and you multiply it by a thousand or so.
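The arithmetic behind "millions of physical qubits," using illustrative round numbers consistent with what Scott says (several thousand logical qubits, a roughly thousand-fold encoding overhead); these are not precise engineering figures.

```python
# Illustrative round numbers only (the transcript says "several thousand"
# logical qubits and an overhead of roughly a thousand physical per logical):
logical_qubits = 4_000
physical_per_logical = 1_000
print(f"{logical_qubits * physical_per_logical:,} physical qubits")  # 4,000,000
```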
So, you know, there's a lot of work on trying to invent better error-correcting
codes. Okay, that is the situation right now. In the meantime, we are now in what the physicist John Preskill called the noisy intermediate-scale
quantum, or NISQ, era.
This is the era,
you can think of it as sort of like, you know, we are now entering the very early
vacuum-tube era of quantum computers.
The quantum computer analog of the transistor has not been invented yet.
Right.
That would be like true error correction,
or something else that would achieve the same effect. We are not there yet.
But where we are now, let's say as of a few months ago, as of Google's announcement of quantum
supremacy, we are now finally at the point where, with a non-error-corrected quantum computer, with, you know, these noisy devices, we can do something that is hard for classical computers to simulate.
Okay, so we can eke out some advantage. Now, will we, in this noisy era, be able to do something beyond what a classical computer can do that is also useful to someone? That we still don't know.
People are going to be racing over the next decade to try to do that. By people, I mean
Google, IBM, a bunch of startup companies, and, you know, research labs
and governments.
Yeah, you just mentioned a million things. I'll backtrack for a second.
Yeah, sure. Sure.
So we're in these vacuum-tube days, just entering. Wow. Okay. So
how do we escape the vacuum-tube era? How do we get to where we are now with
the CPU? Is this a fundamental engineering challenge? Are there breakthroughs
needed on the physics side, on the computer science side? Is there a financial issue, where a much larger
investment and excitement is needed?
Those are excellent questions. My guess would be all of the above. I mean,
you could say fundamentally it is an engineering issue, right? The theory has been in place
since the 90s, at least, you know, this is what error correction would look
like. We do not have the hardware that is at that level. But at the same time, you
know, so you could just try to power through, maybe even like if someone spent a trillion
dollars on some quantum computing Manhattan project, right? Then conceivably, they could
just build an error corrected quantum computer as it was envisioned back in the 90s, right?
I think the more plausible thing to happen is that there will be further theoretical
breakthroughs and there will be further insights that will cut down the cost of doing this.
So let's take a brief step back to the philosophical.
I just recently talked to Jim Keller, who's sort of like the famed architect in the
microprocessor world. And he's been told for decades, every year, that Moore's Law
is going to die this year. And he tries to argue that Moore's Law is still alive and well,
and it'll be alive for quite a long time to come.
So how long?
How long does he think?
Well, the main point is it's still alive,
but he thinks there's still a thousand-x improvement
just from shrinking the transistor that's possible.
Whatever; the point is that the exponential growth
we see is actually a huge number of these S-curves,
just constant breakthroughs. So at the philosophical level,
why do you think we, as descendants of apes,
were able to just keep coming up with these new breakthroughs
on the CPU side?
Is this something unique to this particular endeavor
or will it be possible to replicate
in the quantum computer space?
Okay.
All right.
There was a lot there. Let me try to break off a piece of it.
I mean, I think we are in an extremely special period of human history.
Right?
I mean, it is, you could say, obviously special in many ways, right?
There are way more people alive
than there have been.
And the whole future of the planet is in question
in a way that it hasn't been for the rest of human history.
But in particular, we are in the era where we finally
figured out how to build universal machines, the things that we call computers, machines that you
program to simulate the behavior of whatever machine you want.
And once you've crossed this threshold of universality,
you've built, you could say, you've instantiated
Turing machines in the physical world, well then the main questions are
ones of numbers. They are ones of how much memory can you access, how fast does it run,
how many parallel processors, you know, at least until quantum computing, quantum computing is the one
thing that changes what I just said. But, you know, as long as it's classical computing, then it's
all questions of numbers. And, you know, you could say at a theoretical level,
the computers that we have today are the same
as the ones in the 50s.
They're just millions of times, you know,
faster with millions of times more memory.
And, you know, I mean, I think there's been
an immense economic pressure to, you know,
get more and more transistors, you know,
get them smaller and smaller,
add more and more cores.
And, you know, in some sense, like a huge fraction of sort of all
of the technological progress that there is in all of civilization has gotten concentrated
just more narrowly into just those problems, right? And so, you know, it has been
one of the biggest success stories
in the history of technology, right? I mean, I am as amazed by
it as anyone else is, right? But at the same time, you know, we also know, and
I really do mean we know, that it cannot continue indefinitely,
because you will reach fundamental limits
on how small you can possibly make a processor.
And if you want a real proof that would justify my use
of the word, we know that Moore's Law has to end,
I mean, ultimately you will reach the limits
imposed by quantum gravity.
If you tried to build a computer that operated at 10 to the 43 hertz,
that did 10 to the 43 operations per second, that computer would use so much energy that
it would simply collapse to a black hole. Now, in reality, we're going to reach the limits long before that,
but, you know, that is a sufficient proof that there's a limit. Yes, yes.
But it would be interesting to try to understand the mechanism, the economic pressure that you
mentioned. Just like the Cold War was a pressure on getting us, and by us I mean both the Soviet
Union and the United States, getting the two countries to hurry up to
get to space, to the moon, there seems to be that same kind of economic pressure that somehow
created a chain of engineering breakthroughs that resulted in Moore's Law.
It'd be nice to replicate that. Yeah, well, I mean, some people sort of get depressed about the fact that technological
progress may seem to have slowed down in many, many realms outside of computing.
Right?
And there was this whole thing of, you know, we wanted flying cars and we only got Twitter
instead, right?
Yeah.
Peter Thiel, yeah.
Yeah, yeah, right, right.
So, then, jumping to another really interesting topic
that you mentioned: Google announced,
with their work in a paper in Nature,
quantum supremacy.
Yes.
Can you describe, again, back to the basics,
what is perhaps not so basic: what is quantum supremacy?
Absolutely. So quantum supremacy is a term that was coined by, again, John Preskill in 2012.
Not everyone likes the name, but it's sort of stuck. We sort of haven't found a better alternative.
Technically, quantum computational supremacy.
Yeah, yeah, supremacy.
That's right, that's right.
But the basic idea is actually one that goes all the way back to the beginnings of quantum
computing, when Richard Feynman and David Deutsch, people like that, were talking about it
in the early 80s.
And quantum supremacy just refers to sort of the point in history when you can first
use a quantum computer to do some well-defined task much faster than any known algorithm running
on any of the classical computers that are available.
So notice that I did not say a useful task.
It could be something completely artificial,
but it's important that the task be well-defined. So in other words, it is something that has
right and wrong answers that are knowable independently of this device, and we can then run the
device and see if it gets the right answer or not.
Can you clarify a small point? You said much faster than the classical implementation.
What about the case where
there doesn't even exist a classical algorithm
to solve the problem?
So maybe I should clarify: everything
that a quantum computer can do, a classical computer
can also eventually do.
And the reason why we know that is that a classical computer could always, if it had no limits
of time and memory, just store the entire quantum state of the quantum computer,
that is, store a list of all the amplitudes in the state of the quantum computer, and then just do some
linear algebra to just update that state.
So anything that quantum computers can do can also be done by classical computers, albeit
exponentially slower.
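A sketch of that brute-force classical simulation, assuming a simple state-vector representation: store all 2^n amplitudes and apply each gate as linear algebra. It can reproduce anything a quantum computer does, but memory and time blow up exponentially in n.

```python
# Brute-force classical simulation: keep all 2**n amplitudes in memory and
# update them with linear algebra for every gate. Always possible in principle,
# but the cost grows exponentially with the number of qubits n.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 `gate` to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)                         # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract gate into target axis
    psi = np.moveaxis(psi, 0, target)                    # restore qubit ordering
    return psi.reshape(2 ** n)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                           # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)      # H on every qubit
print(np.abs(state) ** 2)                                # uniform over all 8 outcomes
```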
So quantum computers don't go into some magical place outside of Alan Turing's definition
of computation.
Precisely. They do not solve the halting problem. They cannot solve anything that is
uncomputable in Alan Turing's sense. What we think they do change is what is efficiently
computable. And since the 1960s, the word efficiently has been a central word in computer
science, but it's sort of a code word for something technical,
which is basically polynomial scaling, you know, that as you get to larger and larger inputs,
you would like an algorithm that uses an amount of time that scales only like the size of the input raised to some power,
and not exponentially with the size of the input.
Right. So I do hope we get to talk again, because one of the many topics that there's probably
several hours of conversation on is complexity, which we probably won't even get a
chance to touch today.
But you briefly mentioned it.
But let's maybe try to continue.
So you said the definition of quantum supremacy is basically
achieving a place where
the quantum computer is much faster on a formal, well-defined problem. Yes. That is
not necessarily useful. Yeah, yeah, right, right. And I would say that we really want three things, right?
We want, first of all, the quantum computer
to be much faster, just in the literal sense
of, like, number of seconds,
at solving this, you know, well-defined problem.
Secondly, we want it to be sort of, you know,
for a problem where we really believe
that a quantum computer has better scaling behavior, right?
So it's not just an incidental matter of hardware,
but it's that as you went to larger and larger inputs,
the classical scaling would be exponential
and the scaling for the quantum algorithm
would only be polynomial.
And then thirdly, we want the first thing,
the actual observed speed up, to only be explainable in terms of the scaling behavior.
So I want a real problem to get solved,
let's say, by a quantum computer with 50 qubits,
and for no one to be able to explain that in any way
other than, well, you know, this computer involved a quantum state
with two to the 50th power amplitudes and, you know, a classical simulation, at least any
that we know today would require keeping track of two to the 50th numbers. And this is the
reason why it was faster.
So the intuition is that if you demonstrate this on 50 qubits, then once you get to 100 qubits, it'll be even much faster.
Precisely.
Yeah.
And quantum supremacy does not require error correction.
We don't have true scalability yet or true error correction yet, but you could
say quantum supremacy is already enough by itself to refute
the skeptics who said a quantum computer will never outperform a classical computer for anything.
But one, how do you demonstrate quantum supremacy and two, what's up with these news articles?
I'm reading that Google did so. Yeah, all right. Well, those are actually great, great questions, because now you get into, you know,
a lot of the work that I've, you know, I and my students have been doing for the last decade,
which was precisely about how do you demonstrate quantum supremacy using technologies that,
you know, we thought would be available in the near future. And so, one of the main things that we realized in around 2011, and this was me and my student,
Alex Arkhipov, at MIT at the time, and independently some others, including Bremner, Jozsa, and Shepherd. Okay. And the realization that we came to was that if you just want to prove that a quantum
computer is faster, you know, and not do something useful with it, then there are huge advantages
to sort of switching your attention from problems like factoring numbers that have a single
right answer to what we call sampling problems.
So these are problems where the goal is just to output a sample from some
probability distribution. Let's say over strings of 50 bits. Right, so there
are, you know, many, many, many possible valid outputs. You know, your computer
will probably never even produce the same output twice, you know, if it's running as
even assuming it's running perfectly, okay? But the key is that some outputs are supposed to be likelier than other ones.
So, sorry, to clarify: is there a set of outputs that are valid and a set that are not? Or is it more that there's a specific distribution of a particular kind of output?
There's a specific distribution that you're trying to hit, that you're trying to sample from.
Now, there are a lot of questions about this. How do you do that? Now, how do you do it?
It turns out that with a quantum computer,
even with the noisy quantum computers that we have now,
that we have today, what you can do is basically just
apply a randomly chosen sequence of operations.
So, in some sense, that part is almost trivial.
We just sort of get the qubits to interact in some random way,
although it was sort of precisely specified random way, so we can repeat the exact same random
sequence of interactions again and get another sample from that same distribution. And what this
does is it basically, well, it creates a lot of garbage, but very specific garbage. So, we're going to talk about Google's device: there were 53 qubits there.
And so there are two to the 53 power possible outputs.
Now, for some of those outputs, there was a little bit more destructive interference in their amplitudes, so their amplitudes were a little bit smaller, and for others there was a little more constructive interference, you know, the amplitudes were a little bit more aligned with each other, and so those outputs were a little bit likelier, okay? All of the outputs are exponentially unlikely, but some are, let's say, two or three times likelier than others, okay? And so you can define, you know,
the sequence of operations that gives rise
to this probability distribution.
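A toy sketch of the random circuit sampling idea described here (my own illustration at 4 qubits rather than 53, with one arbitrary random unitary standing in for Google's gate layers): fix a precisely specified random circuit, and its ideal output distribution makes some bitstrings a few times likelier than the uniform 1/2^n while all remain exponentially unlikely.

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # fixed seed = precisely specified randomness
n = 4                                 # toy size; the real experiment used 53
dim = 2**n

# One fixed random unitary stands in for the layers of 1- and 2-qubit gates.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))       # phase fix for proper randomness

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                  # start in |0000>
probs = np.abs(u @ state)**2                    # ideal output distribution

samples = rng.choice(dim, size=10, p=probs)     # what the device would output
print("a few samples:", [format(int(s), f"0{n}b") for s in samples])
print("probability of each output relative to uniform 1/2**n:")
print(np.round(probs * dim, 2))                 # some a few times uniform, others far below
```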
Okay, now the next question would be,
well, how do you, you know,
even if you're sampling from it,
how do you verify that?
How do you know?
And so my students and I,
and also the people at Google who are doing the experiment came
up with statistical tests that you can apply to the outputs in order to try to verify,
you know, that at least some hard problem is being solved.
The test that Google ended up using
was something that they called the linear cross entropy
benchmark.
And it's basically, so the drawback of this test
is that it requires you to do a two to the 53 time
calculation with your classical computer.
So it's very expensive to do the test
on a classical computer.
The good news is-
What is that number, two to the 53rd?
It's about nine quadrillion.
Okay. That doesn't help.
Well, you know, in scientific notation that's about 9 times 10 to the 15th, if that helps.
Yeah. So is it impossible to run on a supercomputer?
Yeah, so we will come back to that. It is just barely possible to run, we think, on the largest supercomputer that currently exists on Earth, which is called Summit, at Oak Ridge National Lab.
Great, this is exciting.
That's the short answer.
So ironically, for this type of experiment,
we don't want 100 qubits, because with 100 qubits, even if it works, we don't know how to verify the results, okay?
So we want, you know, a number of qubits that is enough that, you know, the biggest
classical computers on Earth will have to sweat, you know, and will just barely, you know,
be able to keep up with the quantum computer, you know, using much more time, but they will
still be able to do it in order that we can verify.
So is that where the 53 comes from, for the qubit numbers?
Basically, I mean, that's also sort of where they are now in terms of scaling, you know, and soon, you know, that point will be passed. And then, when you get to larger numbers of qubits, these types of sampling experiments will no longer be so interesting, because we won't even be able to verify the results, and we'll have to switch to other types of computation.
So with the sampling thing, the test that Google applied, this linear cross entropy benchmark, was basically: just take the samples that were generated,
which are, you know, a very small subset of all the possible samples that there are.
But for those, you calculate with your classical computer the probabilities that they should
have been output.
And you see, are those probabilities like larger than the mean?
You know, so is the quantum computer biased toward outputting the strings that you want it to be biased toward?
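A small sketch of the linear cross-entropy benchmark as described (again a toy at 4 qubits, with my own stand-in ideal distribution): look up the ideal probability of each observed bitstring and check whether the average sits above the uniform 1/2^n.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
dim = 2**n

# Ideal output distribution of one fixed random circuit; this is the stand-in
# for the expensive 2**53-entry classical calculation in the real experiment.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
ideal_probs = np.abs(np.linalg.qr(z)[0][:, 0])**2

def linear_xeb(samples, probs, dim):
    """F = 2**n * (average ideal probability of the observed samples) - 1."""
    return dim * probs[samples].mean() - 1

faithful = rng.choice(dim, size=100_000, p=ideal_probs)    # "the quantum device"
spoofed = rng.integers(0, dim, size=100_000)               # uniform classical guessing

print("XEB, faithful sampler:", round(linear_xeb(faithful, ideal_probs, dim), 3))  # well above 0
print("XEB, uniform guesser :", round(linear_xeb(spoofed, ideal_probs, dim), 3))   # close to 0
```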
And then finally, we come to a very crucial question, which is supposing that it does that.
Well, how do we know that a classical computer could not have quickly done the same thing?
How do we know that this couldn't have been spoofed by a classical computer?
And so the first answer is we don't know for sure,
because this takes us into questions of complexity theory.
You know, I mean, questions of the magnitude of the P versus NP question, things like that, right?
We don't know how to rule out definitively
that there could be fast classical algorithms
for even simulating quantum mechanics
and for simulating experiments like these, but we can give some evidence against that possibility.
And that was sort of the main thrust of a lot of the work that my colleagues and I did
over the last decade, which is what, in around 2015 or so, led to Google deciding to do this experiment.
So for that kind of evidence, first of all, there's the hard P versus NP problem that you mentioned, and then the kind of evidence that you're looking at, is that something you come to on a sheet of paper, or are these empirical experiments?
It's math for the most part. I mean, it's also, we have a bunch of methods that are known
for simulating quantum circuits or quantum computations with classical computers. And so
we have to try them all out and make sure that they don't work. Make
sure that they have exponential scaling on these problems and not just theoretically,
but with the actual range of parameters that are actually arising in Google's experiment.
So there is an empirical component to it, right? But now, on the theoretical side, you know, what basically, what we know how
to do in theoretical computer science and computational complexity is, you know, we don't
know how to prove that most of the problems we care about are hard, but we know how to
pass the blame to someone else. We know how to say, well, look, you know, I can't prove
that this problem is hard, but if it is easy, then all these other things that you were probably much more confident were hard, those would be easy as well.
Okay, so we can give what are called reductions.
And this has been the basic strategy
in NP completeness, right,
in all of theoretical computer science
and cryptography since the 1970s, really.
And so we were able to give some reduction evidence for the hardness of simulating these sampling experiments,
these sampling-based quantum supremacy experiments.
The reduction evidence is not as satisfactory as it should be.
One of the biggest open problems in this area is to make it better.
But we can do something.
Certainly, we can say that if there is a fast classical algorithm to spoof these experiments,
then it has to be very, very unlike any of the algorithms that we know.
Which is kind of in the same space of reasoning as when people say P does not equal NP.
Yeah, it's in the same spirit. In the same spirit.
Okay, so Andrew Yang, a very intelligent presidential candidate with a lot of interesting ideas in all kinds of technological fields,
tweeted that because of quantum computing no code is uncrackable. Is he wrong or right?
He was premature, let's say. So, well, okay, wrong.
Look, I'm still a fan of Andrew Yang.
I like his ideas.
I like his candidacy.
I think that he may be ahead of his time with the universal basic income and so forth,
and he may also be ahead of his time in that tweet that you referenced. So regarding using quantum computers to break cryptography, so the situation is this.
So the famous discovery of Peter Shor, 26 years ago, that really started quantum computing
as an autonomous field was that if you built a full scalable quantum computer,
then you could use it to efficiently find the prime factors
of huge numbers and calculate discrete logarithms
and solve a few other problems
that are very, very special in character, right?
They're not NP-complete problems,
we're pretty sure they're not, okay?
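For illustration, here is a hedged sketch of the classical wrapper around Shor's approach to factoring, with the quantum period-finding step replaced by brute force (so it only works for tiny numbers, and none of it reflects the actual quantum circuit): once you know the period of a modulo N, the factors fall out with ordinary arithmetic.

```python
from math import gcd

def find_period_classically(a, n):
    """Stand-in for the quantum step: smallest r > 0 with a**r == 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)                      # lucky: a already shares a factor with n
    r = find_period_classically(a, n)         # the step a quantum computer speeds up
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                           # unlucky choice of a; retry with another
    return gcd(pow(a, r // 2, n) - 1, n)      # purely classical post-processing

print(shor_factor(15, 7))    # prints 3, a prime factor of 15
print(shor_factor(21, 2))    # prints 7, a prime factor of 21
```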
But it so happens that
most of the public key cryptography that we currently use to protect the internet is based on
the belief that these problems are hard. What Shor showed is that once you get scalable quantum computers, then that's no longer true. But now, before people panic, there are two important points to understand here.
The first is that quantum supremacy, the milestone that Google just achieved, is very, very
far from the kind of scalable quantum computer that would be needed to actually threaten public
key cryptography.
We touched on this earlier, but Google's device has 53 physical qubits.
To threaten cryptography, you're talking with any of the known error correction methods,
you're talking millions of physical qubits.
Because error correction would be required to threaten the cryptography?
Yes, yes.
Yes, it certainly would, right? And, you know, how great will the overhead be from the error correction? That we don't know yet. But with the known codes, you're talking millions of physical qubits, and of a much higher quality than any that we have now.
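A back-of-the-envelope sketch of where "millions of physical qubits" comes from, using rough, commonly quoted surface-code assumptions that are my own choices rather than figures from the conversation:

```python
# All numbers here are rough assumptions, purely for orientation.
p = 1e-3                 # assumed physical error rate per operation
p_threshold = 1e-2       # assumed error-correction threshold
target = 1e-15           # assumed tolerable logical error rate for a long Shor-scale run
logical_qubits = 4000    # rough ballpark sometimes quoted for RSA-2048-scale factoring

# Surface-code-style scaling (with a small fudge prefactor): logical errors fall off
# like (p/p_threshold)**((d+1)/2) as the code distance d grows, while physical qubits
# per logical qubit grow roughly like 2*d**2.
d = 3
while 0.03 * (p / p_threshold) ** ((d + 1) / 2) > target:
    d += 2               # surface-code distances are usually odd
physical_per_logical = 2 * d**2
print(f"code distance ~{d}, ~{physical_per_logical} physical qubits per logical qubit")
print(f"total ~{logical_qubits * physical_per_logical / 1e6:.1f} million physical qubits")
```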
Okay. So, you know, I don't think that that is coming soon, although people who have secrets that need to stay secret for 20 years are already worried about this, for the good reason that we presume that intelligence agencies are already scooping up encrypted data, in the hope that eventually they'll be able to decode it once quantum computers become available. Okay. So this brings me to
the second point I wanted to make, which is that there are other public key crypto systems
that are known that we don't know how to break even with quantum computers. Okay. And
there's a whole field devoted to this now, which is called post-quantum cryptography.
There is already...
We have some good candidates now.
The best known being what are called lattice-based crypto systems.
There is already some push to try to migrate to these crypto systems. So NIST in the US is holding a competition
to create standards for post quantum cryptography,
which will be the first step in trying to get every web browser
and every router to upgrade and use something like SSL
that would be based on what we think
is quantum secure cryptography.
But, you know, this will, this will be a long process.
But, you know, it is, it is something that people are already starting to do.
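As a purely illustrative toy (insecure, with made-up tiny parameters, and my own simplification rather than any standardized scheme), the learning-with-errors idea behind lattice-based cryptosystems looks roughly like this: the public key hides a secret behind noisy linear equations mod q, and no known quantum algorithm strips out that noise efficiently.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 8, 32                               # tiny toy parameters

s = rng.integers(0, q, size=n)                    # secret key
A = rng.integers(0, q, size=(m, n))               # public random matrix
e = rng.integers(-1, 2, size=m)                   # small noise
b = (A @ s + e) % q                               # public key is (A, b)

def encrypt(bit):
    """Encrypt one bit by adding up a random subset of the noisy equations."""
    subset = rng.integers(0, 2, size=m)           # random 0/1 choice of rows
    c1 = (subset @ A) % q
    c2 = (subset @ b + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    """The leftover noise is small, so round to either 0 or q/2 to read the bit."""
    v = (c2 - c1 @ s) % q
    return int(min(v, q - v) > q // 4)

for bit in (0, 1, 1, 0):
    print(bit, "->", decrypt(*encrypt(bit)))      # round-trips correctly
```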
And so, you know, Shor's algorithm is sort of a dramatic discovery, you know,
it could be a big deal for whatever intelligence agency first gets a scalable quantum computer.
At least certainly if no one else knows that they have it.
But eventually we think that we could migrate the internet to the post-quantum cryptography,
and then we'd be more or less back where we started. So this is not the application of quantum computing that I think is really going to change the world in a sustainable way, right?
By the way, the biggest practical application of quantum computing that we know about, by far, I think, is simply the simulation of quantum mechanics itself, in order to learn about chemical reactions, design maybe new chemical processes, new materials, new drugs, new solar cells, new superconductors, all kinds of things like that.
What's the size of a quantum computer that would be able to simulate the, you know, quantum mechanical systems themselves, that would be impactful for the real world, for the kinds of chemical reactions and that kind of work? What scale are we talking about?
Now you're asking a very, very current question, a very big question. People are going to
be racing over the next decade to try to do useful quantum simulations, even with 100
or 200 qubit quantum computers of the sort that we expect
to be able to build over the next decade.
So that might be the first application of quantum computing that we're able to realize,
or maybe it will prove to be too difficult, and maybe even that will require fault tolerance,
or will require error correction.
So there's an aggressive race to come up with the one case study, kind of like Peter Shor's algorithm, an idea that would just capture the world's imagination, like, look, we can actually do something very useful here.
But I think, you know, within the next decade,
the best shot we have is certainly not, you know,
using Shor's algorithm to break cryptography, just because it requires too much in the way of error correction.
The best shot we have is to do some quantum simulation that tells the material scientists
or chemists or nuclear physicists something that is useful to them and that they didn't already know.
And you might only need one or two successes
in order to change some billion dollar industries.
The way that people make fertilizer right now
is still based on the Haber-Bosch process from a century ago.
And it is some many-body quantum mechanics problem
that no one really understands.
If you could design a better way to make fertilizer,
right, that's billions of dollars right there.
So, so those are sort of the applications
that people are going to be aggressively racing toward
over the next decade.
Now, I don't know if they're gonna realize it or not,
but you know, it is, you know, they certainly,
at least have a shot.
So it's gonna be a very, very interesting next decade.
But just to clarify, what's your intuition: if a breakthrough like that comes, is it possible for that breakthrough to be on 50 to 100 qubits, or is scale a fundamental thing, like 500 or 1,000-plus qubits?
Yeah, so I can tell you what the current studies are saying.
I think probably better to rely on that than on my intuition.
But there was a group at Microsoft that had a study a few years ago that said, even with only about 100 qubits, you could already learn something new about the chemical reaction that makes fertilizer, for example. The trouble is, they're talking about 100 qubits and about a million layers of quantum gates. Okay, so basically they're talking about 100 nearly perfect qubits.
So, logical qubits, as you might say.
Yeah, exactly. 100 logical qubits.
And now, you know, the hard part for the next decade is going to be, well, what can we
do with 100 to 200 noisy qubits?
Yeah.
Is there an error correction breakthrough that might come that wouldn't need thousands or millions of physical qubits?
Yeah.
So people are going to be pushing simultaneously
on a bunch of different directions.
One direction, of course, is just making the qubits better.
And there is tremendous progress there.
I mean, the fidelity, like the accuracy of the qubits, has improved by several orders of magnitude
in the last decade or two.
The second thing is designing better, you know, let's say lower-overhead error-correcting codes. And even short of doing the full recursive error correction, there are these error mitigation strategies that you can use that may, you know, allow you to eke out a useful speedup in the near term.
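One concrete example of an error-mitigation strategy is zero-noise extrapolation, shown here as a toy sketch; this particular trick is my choice of illustration, not necessarily the one being referred to: deliberately run the circuit at a few amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.80            # the noiseless expectation value we wish we could measure

def noisy_expectation(scale, shots=200_000):
    """Toy noise model: the signal decays exponentially with the noise scale."""
    damped = true_value * np.exp(-0.3 * scale)
    return damped + rng.normal(0, 1 / np.sqrt(shots))    # add shot noise

scales = np.array([1.0, 1.5, 2.0, 3.0])       # 1.0 = the hardware's native noise level
measured = np.array([noisy_expectation(s) for s in scales])

# Fit a low-degree polynomial in the noise scale and read off its value at zero noise.
fit = np.polynomial.Polynomial.fit(scales, measured, deg=2)
print("raw value at native noise:", round(float(measured[0]), 3))
print("zero-noise extrapolation :", round(float(fit(0.0)), 3))   # much closer to the true value
print("true (inaccessible) value:", true_value)
```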
And then the third thing is just taking the quantum algorithms for simulating quantum
chemistry or materials and making them more efficient.
And those algorithms are already dramatically more efficient than they were, let's say,
five years ago.
So when I quoted these estimates, like a circuit depth of 1 million, I hope that, because people will care enough, these numbers are going to come down.
So you're one of the world-class researchers in this space. There are a few groups I could mention, Google and IBM working on this, and other research labs. But you also have an amazing blog, and you didn't pay me to say that. You put a lot of effort into communicating the science of this and into exposing some of the BS, sort of the natural, just like in the AI space, the natural charlatanism, if that's the word, in quantum mechanics in general, but quantum computers and so on.
Can you give some notes about people or ideas that people like me, or listeners in general, from outside the field should be cautious of when they're taking in news headlines saying that Google achieved quantum supremacy? So what should we look out for? Where are the charlatans in this space? Where's the BS?
Yeah.
So, a good question. Unfortunately, quantum computing is a little bit like cryptocurrency or deep learning.
Like, there is a core of something that is genuinely revolutionary and exciting.
And because of that core, it attracts this sort of vast penumbra of, you know, people making,
you know, just utterly ridiculous claims.
And so with quantum computing, I would say that the main way that people go astray is by
not focusing on sort of the question of, are you getting a speed up over a classical
computer or not?
And so people, like, dismissed quantum supremacy because it's not useful, right? Or, you know, it's not itself, let's say, obviously useful for anything.
Okay, but, you know, ironically,
these are some of the same people who will go and say,
well, we care about useful applications.
We care about solving traffic routing
and, you know, and financial optimization and all these things.
That sounds really good,
but their entire spiel is sort of counting on
nobody asking the question,
yes, but how well could a classical computer do the same thing?
Yes.
I really mean the entire thing.
They say, well, a quantum computer can do this, a quantum computer can do that.
And they just avoid the question, are you getting a speed up over a classical computer or
not?
And how do you know?
Have you really thought carefully about classical algorithms to solve the same problem?
A lot of the application areas that the companies and investors are most excited about, that
the popular press is most excited about, for quantum computers, have been things like
machine learning, AI optimization. And the problem with that is that since the very beginning, even if you
have a perfect fault-tolerant, scalable quantum computer, we have known of only modest
speedups that you can get for these problems. Okay? So there is a famous quantum algorithm called Grover's algorithm.
Okay?
And what it can do is it can solve many, many of the problems
that arise in AI, machine learning, optimization,
including NP-complete problems.
Okay?
But it can solve them in about the square root
of the number of steps that a classical computer
would need for the same problems.
Okay, now a square root speed up is important, it's impressive, it is not an exponential speed up.
Okay, so it is not the kind of game changer that, let's say, Shor's algorithm for factoring is,
or for that matter that simulation of quantum mechanics is. Okay, it is a more modest speed up.
Now, let's say, roughly, in theory, it could roughly double the size of the optimization problems that you could handle.
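A quick worked comparison (with my own hypothetical step budget) behind "roughly double the size":

```python
import math

budget = 2**40                              # hypothetical number of steps we can afford
n_classical = int(math.log2(budget))        # brute force needs ~2**n steps
n_grover = int(2 * math.log2(budget))       # Grover needs ~2**(n/2) steps
print(f"classical brute force handles ~{n_classical}-bit search problems")
print(f"Grover's algorithm handles    ~{n_grover}-bit search problems")
# Still an exponential cost, just with a halved exponent, unlike Shor's
# exponential-to-polynomial jump.
```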
Because people found that too boring, too unimpressive, they've gone on to invent all of these heuristic algorithms where, you know, because no one really understands them,
you can just project your hopes onto them, right?
That, well, maybe it gets an exponential speed up.
You can't prove that it doesn't, you know,
and the burden is on you to prove
that it doesn't get a speed up, right?
And, you know, so they've done an immense amount
of that kind of thing,
and a really worrying amount of the case for building a quantum computer has come to rest on this stuff that those of us in this field know perfectly well is on extremely shaky foundations.
So the fundamental question is to show that there's a speedup over the classical.
Absolutely. And in this space that you're referring to, which is actually interesting, the area that
a lot of people are excited about is machine learning. So, your sense is, I know that there's a lot of smoke currently,
But do you think there actually eventually might be breakthroughs where you do get exponential
speed ups in the machine learning space?
Absolutely, there might be.
I mean, I think we know of modest speed-ups
that you can get for these problems.
I think whether you can get bigger speed-ups
is one of the biggest questions for quantum computing theory,
for people like me to be thinking about.
Now, we had, actually recently,
a really, a super exciting candidate for an
exponential quantum speed up for a machine learning problem that people really care about.
This is basically the Netflix problem, the problem of recommending products to users,
given some sparse data about their preferences. Kerenidis and Prakash in 2016 had an algorithm for sampling recommendations that was exponentially
faster than any known classical algorithm, right?
And so, you know, a lot of people were excited.
I was excited about it.
I had an 18-year-old undergrad by the name of Ewin Tang, and she was, you know, she was
obviously brilliant.
And she was looking for a project. I gave her as a project, can you prove that this speed up is real? Can
you prove that, you know, any classical algorithm would need to access exponentially more
data, right? And, you know, this, this was a case where if that was true, this was not
like a p versus np type of question, right? This, this might well have been provable.
But she, she worked on it for a year, she couldn't do it.
Eventually she figured out why she couldn't do it
and the reason was that that was false.
There is a classical algorithm
with a similar performance to the quantum algorithm.
So Ewin succeeded in de-quantizing that machine learning algorithm. And then, in the last couple of years, building on Ewin's breakthrough,
a bunch of the other quantum machine learning algorithms
that were proposed have now also been de-quantized.
And so I would say,
an important backward step.
Yes.
Like a forward step for science,
but a step for quantum machine learning
that precedes the big next forward step.
Right.
Right.
Now, some people will say, well, you know, there's a silver lining in this cloud.
They say, well, thinking about quantum computing has led to the discovery of potentially
useful new classical algorithms.
That's true.
Right.
And so, you know, so you get these spin-off applications.
But if you want a quantum speed up,
you really have to think carefully about that.
Ewin's work was a perfect illustration of why. And I think that the challenge to the field is now open: find a better example, find where quantum computers are going to deliver big gains for machine learning.
Not only do I ardently support people thinking about that, I'm trying to think about
it myself and have my students and postdocs think about it, but we should not pretend that
those speedups are already established. The problem comes when so many of the companies
and journalists in the space are pretending that.
Like all good things, like life itself, this conversation must soon come to an end.
Let me ask the most absurdly philosophical last question.
What is the meaning of life? What gives your life fulfillment, purpose, happiness, and, yeah, meaning?
I would say, you know, number one, trying to discover new things about the world and
share them and, you know, communicate and learn what other people have discovered.
Number two, my friends, my family, my kids, my students, the people around me.
Number three, trying, when I can, to, you know, make the world better in some small ways. And, you know, it's depressing that I can't do more, and that the world is, you know, facing crises over the climate and over sort of resurgent authoritarianism and all these other things. But, you know,
trying to stand
against the things that I find horrible when I can.
Let me ask you one more absurd question.
Yeah. What makes you smile?
Well, yeah, I guess your question just did.
I thought I'd try that absurd one on you.
Well, it was a huge honor to talk to you.
We'll probably talk to you for many more hours.
Scott, thank you so much.
Well, thank you.
Thank you.
It was great.
Thank you for listening to this conversation with Scott Aaronson.
And thank you to our presenting sponsor, Cash App.
Download it, use code, Lex Podcast.
You'll get $10.
$10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow.
If you enjoyed this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman.
Now, let me leave you with some words from a funny and insightful blog post Scott wrote over 10 years ago, on the ever-present Malthusianisms in our daily lives.
Quote: again and again, I've undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn't have been a Nash equilibrium.
Thank you for listening. I hope to see you next time.
Thank you.