The Joy of Why - What Is the True Promise of Quantum Computing?
Episode Date: April 3, 2025
Quantum computing promises unprecedented speed, but in practice, it's proven remarkably difficult to find important questions that quantum machines can solve faster than classical ones. One of the most notable demonstrations of this came from Ewin Tang, who rose to prominence in the field as a teenager. When quantum algorithms had in principle cracked the so-called recommendation problem, Tang designed classical algorithms that could match them. So began the approach of "dequantizing," in which computer scientists look at quantum algorithms and try to achieve the same speeds with classical counterparts. To understand the ongoing contest between classical and quantum computing, co-host Janna Levin spoke to Tang on The Joy of Why podcast. The wide-ranging conversation covered what it was like for Tang to challenge the prevailing wisdom at such a young age, the role of failure in scientific progress, and whether quantum computing will ultimately fulfill its grand ambitions.
Transcript
Have you heard about the Night Science podcast where we talk about the creative scientific process?
You know, day science, testing ideas through experiments, but getting those ideas, that's night science.
So on the podcast, we explore this through discussions with many of today's most amazing scientists,
but also philosophers and artists.
And we talk with them about the tricks of the creative scientific trade.
So we are your hosts there. I'm Itai Yanai.
And I'm Martin Lercher.
Join us on the Night Science Podcast.
I'm Janna Levin.
And I'm Steve Strogatz.
And this is The Joy of Why, a podcast from Quanta Magazine exploring some of the biggest
unanswered questions in math and science today.
Hey, Steve.
Hi, Jana.
I'm looking forward to talking to you about my interview with Ewin Tang, a computer scientist
at UC Berkeley.
I realized talking to her that I don't know anything about computer science.
Welcome to the club.
Do you feel like you've got a handle on quantum computing, classical computing?
Not really. I mean, I can say the words. I've heard of Turing machines. I've heard of qubits.
I'm puzzled by it. I can't wait to hear your episode.
Yeah, it's very fascinating because she's really working on understanding if classical algorithms can do everything a quantum algorithm can do.
And there's this kind of promise in quantum computing of this exponential speed up, this
incredible power, not just in the hardware, but even in the algorithms, because the algorithms
have to be written specifically for the machine.
And so there's this kind of classic problem that you and I deal with every day involving
recommendations and how we get recommendations from some of these websites we go to.
And it doesn't sound like it would be that important a problem, but it turns out it's
a very sophisticated problem.
I'm just wondering if I'm getting you.
So is it like the problem that Netflix has where each user has only seen a small subset
of the movies offered by Netflix and
they want to tell you might like this other movie?
Yes, exactly.
If you're doing these recommendation problems, you have to build these large matrices.
And in fact, it was one of the key problems that quantum algorithms had claimed to exponentially
speed up.
Ewin set out to prove that there could be no classical counterpart,
but accidentally, in her frustration and getting blocked,
went down this other path.
And she sort of inadvertently realized
she might have a classical version that was doing just as well.
Yeah.
This is a really surprising idea.
That's not supposed to be possible.
Yeah, it's really surprising.
And I think it's also really surprising that she did this at around 18. What? Now this is somebody who
I believe started college at the University of Texas at Austin at 14,
having skipped three grades. Well, alright, so there's somebody special there. Yeah. Do you remember being 14? Unfortunately, I do.
Mercifully, I don't.
So an exceptional person.
Well, I think we should hear from her.
Here is computer scientist Ewin Tang.
She's a Miller postdoctoral fellow at UC Berkeley.
Ewin, welcome to the show. We're really excited to talk to you.
Thank you for having me.
So, algorithms are behind a lot of things like dating apps and social media.
When I go to watch something, whether it's one of the streaming platforms or YouTube,
I get a list of recommendations.
And I have to admit, I'm unsure how they're picked.
What's happening literally behind the scenes
with these recommendations?
So recommendation systems are, of course,
proprietary algorithms.
We don't exactly know what's the secret sauce
behind Netflix's recommendation algorithm.
We do have some sense about what it might
be doing behind the scenes.
For example, there is a notable Netflix challenge
where Netflix released some of their data
and then challenged a bunch of academic teams
to try to produce the best recommendations
with their data set.
A lot of these techniques are, I think, more or less pretty
standard in the industry.
So some things that some companies still use
is like an item-based recommendation.
This is more of an item-by-item basis where it says,
if you like this item, then you might like these other things.
So this is very similar to what you would see on Amazon,
where on the page for a particular item,
you see a list of other items that are related.
But there's also other things that you could do.
The most standard theoretical abstractions of this question are around low-rank matrices.
If I imagine every user's preference as a vector, and I imagine the space of all these
vectors, then this is going to be more or less low rank in the sense that I can explain
somebody's preference with only a few pieces of information.
Like if you take this Netflix data and if you plot it along certain
dimensions, you can explain a lot about what somebody's preferences are by whether they like
rom-coms or whether they like action movies. This is one axis in which you can describe
somebody's preference. And it turns out you don't need that many axes in order to really pin down
what somebody likes. Let me try to pick this apart for people who don't know vectors and ranks and matrices.
So let's imagine I have a spreadsheet and across the top are a list of movies.
And I have a bunch of rows that are users.
The simplest kind of matrix is I'm just plotting whether or not each of those users watches
these movies, for instance. And then I can get more complicated and say, oh, they like all these rom-coms,
or they like to watch limited series, or they like French movies.
Is that right? Is that what you mean by a matrix?
The matrix I'm envisioning is that you have all your users as rows,
and then I have all of my videos as columns in my spreadsheet.
And Netflix doesn't actually do this,
but you could imagine asking every user
to rank their opinion about every particular video.
And then they give some sort of score.
Maybe it's 1 through 10.
Maybe it's 0 or 1, whether they liked it or not.
And this is the matrix that people have observed empirically
is low rank in the sense that you can say my preferences
are like a combination of these 10 people's preferences or something. And that does a
very good job of explaining what my preferences are. And that actually gives you a lot of
structure with which to run algorithms. And so you see that a lot of the algorithms that
people come up with for producing good recommendations are based around this low-rank structure.
Of course, you don't actually have their preference matrix in full, but you can still use that
structure to get something from the incomplete data that you have.
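[Editor's note: the low-rank structure Tang describes can be sketched with a truncated singular value decomposition. This is an illustrative toy, not Netflix's actual system; the ratings matrix and rank below are made up for the example.]

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are movies (1 = liked, 0 = not).
# Real systems only observe a few entries per user; a full matrix is used here
# purely to illustrate the low-rank idea.
ratings = np.array([
    [1, 1, 0, 0],   # a user who likes the first two movies (say, rom-coms)
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # a user who likes the last two (say, action movies)
    [0, 0, 1, 1],
    [1, 1, 1, 1],   # a user who likes everything
], dtype=float)

# A truncated SVD keeps only the top-k "taste" directions.
k = 2
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
approx = U[:, :k] * s[:k] @ Vt[:k, :]

# With k = 2, the approximation reproduces the matrix almost exactly:
# two axes (rom-com vs. action) explain every user's preferences,
# which is exactly what "low rank" means here.
print("max error:", np.abs(approx - ratings).max())
```

With only two retained directions the reconstruction error is essentially zero, because the five users' preferences really are combinations of just two underlying tastes.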
So, this very abstract subject has now become very practical in some ways.
And lots of people have heard the terminology algorithm.
So what even is an algorithm?
I think people throw it around and aren't really sure.
So what do people really mean when they're discussing algorithms now?
There's an informal notion of the word algorithm, which I throw around all the time.
But more concretely speaking, typically you think of an algorithm as something that performs a task.
So there's some particular kind of input, and you want to compute some function of that input.
For example, I'm given a list of numbers
and the goal is to output a sorted list of numbers.
And then the thing that goes from the input to the output
is called an algorithm.
You could imagine constructing an algorithm
in a variety of ways.
For example, you could imagine writing it in code,
but that's the basic idea.
A little bit of a digression out of sheer curiosity,
but do you think the human mind works algorithmically?
We're taking inputs, we're processing, and then we have an output?
This is definitely above my pay grade.
But I think people think that the universe is a quantum computer, right?
People say this, and so it's not that far of a stretch to imagine what you're
doing is doing some kind of computation.
So this issue of having an algorithm that's trying to determine preferences
is important enough that it was a big part of computer science called the recommendation problem.
How did you get introduced to the recommendation problem?
You're a very young researcher. You're interested in higher math.
And you become interested in computer science. So how did it come across your radar? Great, so I was interested in quantum computing. I
think I was a junior. I took an intro course in quantum computing and quantum
information taught by Scott Aaronson and after the course I asked him whether he
could supervise my senior thesis. He said yes and then he gave me a few
problems that were on his mind at the time.
And one of these was this recommendation problem.
So this was an instance where people were able to find some potential practical use
for a quantum computer through this recommendation problem.
And they are able to justify this sort of switch and try to argue that this thing could actually produce good recommendations and do it much, much faster than other classical algorithms can.
So let me see if I understand. You start out, you're interested in quantum computing. And around that time, there were kind of some breakthroughs.
One of the most important bragging points for quantum computing was that they could really speed up this recommendation algorithm. There was some real excitement about that. Finally, quantum computing had
proven it could do something that it had long claimed it might theoretically be able to
do, which is speed up a computation or an algorithm.
Right. I think quantum computing arguably has some kind of problem with finding very
wide-ranging applications. In the community, we have some sense that, okay, quantum computers, they could be used to perform various kinds of
simulations of physical systems. And we also know that it can factor numbers and break modern
public key cryptography. But you might hope to have a much broader impact. It's kind of striking
that quantum computers can only
solve what seems like maybe a fairly limited set of problems
much faster than classical algorithms.
So recommendation systems is this attempt
of trying to break into like a larger sphere
of different kinds of algorithms,
things related to manipulating data and things
which are present everywhere in tech, in industry,
not just in a more niche domain.
And so is there a simple way to understand what's meant by the difference between a classical
algorithm and a quantum algorithm?
The difference between quantum and classical algorithms, I feel like it's a nuanced question.
Welcome to the world of quantum.
Yeah.
I think one of the simplest ways to understand what's going on is that quantum
computers work with these things called superpositions. But you can think about these as more advanced
versions of just probabilities. For example, like if I have a list of numbers, and I want
to understand, okay, what's the average of this list of numbers? If the list of numbers
is really, really big,
then it might take a long time for me to sum all these up and divide it by, you know, the number of numbers. But something I could do to make it faster is I could randomly choose a bunch of these
numbers and then do the average of these set of numbers. There's this whole field of statistics,
which basically says that if I choose enough numbers, then it'll more or less be close to the
full average of the entire data set.
So this is sort of what you're able to do
with classical probabilities.
You're able to take these big, big data sets
and somehow operate on them, only touching
a subset of the data.
Now quantum algorithms are able to do this,
but they have some additional powers, like interference
and so on.
There are situations in which I would really
like it if two probabilities canceled out. And on a classical computer you really can't do this, and so this is a genuine
bottleneck that you experience. On a quantum computer you don't have this issue. They're doing
these things using superposition in order to manipulate data, perform recommendations, and so
on. Even though it's not obvious, it turns out that the quantum algorithm for this recommendation
problem wasn't really heavily using these sorts of quantum features of the superposition.
And the classical algorithm is somehow able to do this by replacing the superpositions
with probabilities.
That's more or less how I would explain it.
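[Editor's note: the classical sampling idea Tang invokes, estimating the average of a huge list by touching only a random subset of it, can be sketched in a few lines. The dataset and sample size here are invented for illustration.]

```python
import random

# A large "dataset" whose exact average we pretend is too expensive to compute.
data = [i % 100 for i in range(1_000_000)]  # true average is 49.5

# Instead of summing all one million entries, average a random sample.
random.seed(0)
sample = random.choices(data, k=10_000)
estimate = sum(sample) / len(sample)

# Standard concentration results (the law of large numbers, Chernoff bounds)
# guarantee the sample mean lands close to the true mean with high probability.
print(estimate)
```

This is the classical-probability toolkit Tang mentions: operate on a big dataset while only touching a subset of it. Quantum superpositions add extra powers, like interference, on top of this.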
And so this suggestion from Scott Aaronson that you take on a range of problems and you
hone in on the recommendation problem, was it to specifically jump in and solve the recommendation problem,
or was it to see if the quantum computing claim was irrefutable?
I was hoping we could prove that no classical computer could do just as well as the quantum one
for the recommendation systems problem. This would really show that there is a genuine exponential speed up between quantum and classical here.
Scott sent me at the time this article called Read the Fine Print,
where he lays out the challenges of actually proving this sort of exponential speed up.
The hope was that this Kerenidis-Prakash algorithm, this algorithm for recommendation systems,
resolved all of the complications that Scott Aaronson had brought up. And as a result, it
could be used to then finally find this sort of separation between quantum and classical
that we were hoping to find in machine learning tasks.
So you were hoping to prove that you couldn't do classically what these researchers had
done with the quantum algorithm.
Yeah, yeah. I was trying to do it. I was getting very stuck.
Do you go to your advisor and say, look, I'm stuck, I can't do it, are you feeling defeated?
Or how does this turn out to be something very exciting for you? How do you go from,
you know, the perils of failure, right, to realizing, oh, maybe I actually have something in my inability to
prove that I can't do a classical algorithm.
This is something I was spending a decent amount of time on in senior year of undergrad.
It was a pretty difficult process, I guess.
At the time, Scott was on sabbatical.
We talked a few times.
I was getting most of my advice from people in his lab.
I was just very much hitting a brick wall
in terms of trying to prove this lower bound.
I feel like these experiences were kind of new to me
at the time with respect to like-
Failure?
Yeah, no, no, no.
Well, I think failure in research
is definitely a different beast
because it's not necessarily
even about you spend a lot of time and you fail.
It's more like if you don't have any ideas, then you can't even spend that much time thinking
about the problem.
And so I kept having to like push myself to actually think about this because I just literally
had no footholds to attempt anything.
So what eventually ended up happening is that I started just reading
the literature on related problems in the classical world. And I had seen one paper
that seemed maybe a little bit relevant in the sense that it was claiming to do something
that was also much, much faster than your normal classical algorithm. It seemed sort
of related, but had these weird kinds of assumptions, and it was a bit tough.
And so I put it down, and I stopped thinking about it.
And then much later, I decided to turn again to saying, OK,
I can't actually prove this lower bound.
So let me just try to break up this problem
into different pieces and decide, OK, which part do I
believe is the hard part of this algorithm?
Which is the part that the classical algorithm can't do?
And try to isolate what that is.
And so at that point, I break up the problem into two parts.
The first one is the actual linear algebra part,
like a matrix approximation problem.
And the second one is about sampling a recommendation.
And when I break this up into two parts,
I then realize that the paper that I've seen
before actually solves the first one outright.
Or at least, like, you could squint at it and maybe believe that it could.
And so you're still thinking you're going to find it now in this more clearly isolated
piece.
Yeah, I was happy because the second part was very simple.
And so I was like, okay, this is like a concrete problem now.
And I'm just trying to show that this simple self-contained task
is hard.
And then I think about it some more,
and then I find an algorithm for that part.
And then at that point, I start getting excited.
I start being able to think, OK, maybe I can actually
put these two pieces together.
If both of these parts are parts that I thought were hard
but are actually easy, then maybe you
could just make the whole thing classically easy.
And then after that there is like I guess months of doing math and trying to work out
the details.
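[Editor's note: Tang's two-part decomposition can be caricatured in a few lines of code: first a low-rank approximation of the preference matrix, then sampling a recommendation from a user's row of that approximation. This is a simplified sketch, not the actual dequantized algorithm, which uses clever sampling data structures so it never has to read the whole matrix; the matrix sizes and rank here are invented.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1: the linear algebra part, a low-rank matrix approximation.
# (Done exactly via SVD here; the real algorithm approximates this
# from samples of the matrix, which is where the speedup comes from.)
ratings = rng.random((50, 20))          # 50 users, 20 movies
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 3
approx = U[:, :k] * s[:k] @ Vt[:k, :]   # rank-3 approximation

# Part 2: sampling a recommendation. Pick a movie for one user with
# probability proportional to its squared score in the approximated row,
# mirroring how measuring a quantum state samples from squared amplitudes.
user_row = approx[0]
weights = user_row**2 / np.sum(user_row**2)
recommendation = rng.choice(len(user_row), p=weights)
print("recommended movie index:", recommendation)
```

The surprise in Tang's result was that both parts, which were thought to be hard classically, turned out to be easy once superpositions were replaced with ordinary probability distributions.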
So is there a moment where your goals shift to, oh, I'm not proving this can't be done,
I'm actually proving it can be done in the classical algorithm.
What was that moment like for you when you realized, oh, I think I actually did the opposite
of what I set out to do,
and maybe it's even better than what I set out to do?
Honestly, I sort of come at this
without understanding the context that well.
At the time, I had sort of thought
this is like a reasonable candidate
for a quantum algorithm that could be fast.
But whether it's actually hard for classical computers or not was not something that was on my mind as being as
big of a surprise as it turned out to be. But there was definitely a point during
the research process where I started feeling like it was much more productive
just in the sense of I kept on having ideas when I was trying to build an
algorithm and I kept on having no ideas when I was trying to think how to prove hardness.
So at the time I'm thinking,
well, this is silly because I've been tasked to
prove hardness and I'm out here doing something completely different.
I was just thinking how am I going to salvage the senior thesis?
But eventually I do realize like,
okay, maybe this is actually something that I could put together and get an actual result.
And at that point, I decided, okay, I'm actually going to understand this paper that I have
been skimming, then actually try to figure out the details and start writing things up.
So you go to your advisor, you go to Aronson, and what do you say? Is he following all along
or do you kind of surprise him with this change in direction?
I think I surprised him. There's maybe one or two emails where I'm like, I think I might have an algorithm, but I'm not sure yet.
And then I sent it to Scott and was like, I think this is how you have an algorithm. Maybe he didn't believe me at first. I'm not really sure.
So there was kind of a moment of disbelief.
It's a surprising claim, yeah.
So we sent it to Iordanis Kerenidis and Anupam Prakash,
the authors of the recommendation system problem.
And then I later on presented it at the Simons Institute
also here at Berkeley.
So let's talk about that presentation.
You mentioned earlier that the architects
of the quantum algorithm that had made kind
of a big splash were also going to be there at this workshop where you were meant to present
this result, that you had matched the quantum algorithm's speedup classically.
That was not what anyone anticipated.
Yeah, it was maybe summer of 2018, I think, that I went to UC Berkeley and they were there and some other
people were there who were interested in quantum machine learning kinds of problems.
So you're an 18-year-old senior in college.
Do they even know this at the time?
I don't know.
I'm not quite sure.
Was that nerve-wracking for you or is this just sort of, this is just what's done?
I definitely felt like this is a little strange.
I mean, presenting an entire paper, an entire set of proofs on the board, I think is always
a nerve-wracking task.
At the best of times.
Yeah, yeah, even in the best of times.
And I was thankful just because I didn't know that many people.
So later on, I realized these are all like really big names.
At the time I didn't
really realize that. So you're at Berkeley presenting this result and then how long does
this go on for? I think it was like most of the day. I think it was like maybe one and a half hours
and then lunch and then another one and a half hours and then talking after. We really did work
through most of the details of this proof.
And what was the reaction?
Honestly, okay, I don't remember that much.
What I do remember is basically a rough agreement that it seemed correct.
Thinking back about the way I presented the proof, it was kind of a mess.
So the fact that people were able to parse it and sort of understand that it was probably correct is appreciated.
But the main thing was they started suggesting follow-ups like, oh, you could use these
techniques to solve X problem or Y problem. That led me down a big rabbit hole trying to
solve other kinds of problems using similar techniques.
Very encouraging actually to hear the response, you know, because sometimes colleagues are
dismissive or discouraging.
It sounds like just the opposite here.
Yeah.
In fact, there were two of the key people who originally had said they had cracked the
problem, the recommendation problem with a quantum algorithm.
So the generosity that they're sitting there and helping her pick apart where to go further
to prove that they didn't really have a leg up with the quantum approach.
Doesn't that make you feel good to be a scientist where we do this kind of thing with each other?
I think it also speaks to this kind of rabbit holes in scientific research.
You end up following something which wasn't your intended target often, don't you think?
It's a great strategy when you think something is true, try to think about proving the opposite thing.
And sometimes that's what's true.
Yeah, exactly. And it really sounds like that's what happened to her,
and that she was just open to the discovery, open to going down that path.
There's this industry now of dequantizing, of looking at these quantum algorithms
and figuring out, hey, can I do this classically with just a little clever maneuver?
So in the same way that she went down this kind of unexpected rabbit hole, right now
she's working in a very interesting way modeling natural systems, which kind of surprised me.
More on this after the break. Hey, welcome back to The Joy of Why.
We're here with Ewin Tang, who in her undergraduate thesis showed that the recommendation problem
in computer science could be solved equally well by a classical algorithm as by a quantum
algorithm.
And that was really just the beginning.
You're able to reveal this in front of the world's experts.
And now there's all these applications,
and there's this whole area called
dequantizing algorithms.
Can you explain that a little bit?
What is dequantizing of algorithms?
The process of designing quantum algorithms
is always kind of a push and pull
with the usual algorithms community.
Because generally what happens is
that you want to find some problem that you
can solve much faster with a quantum computer
than you can a classical computer.
And in order to do this, often you
have to sort of change the problem
that you want to solve a little bit.
So like, I want to solve linear regression or something.
And well, I can't solve it outright.
And so I sort of change the problem a little bit.
I change the input a little bit.
I change what I want the output to be a little bit.
And then I can say, OK, when I change this thing, then I can get this really fast algorithm.
But when you do that, it leaves open room that you could have a classical algorithm that actually
uses this slightly different input
and is able to get this slightly different output just as fast
as your quantum algorithm, right?
Because you're changing the problem,
you're introducing a problem that perhaps no classical
algorithms person has ever studied.
And so the question is, when you change this problem,
do you make it too easy for a classical computer
to solve and ruin your claimed exponential speedup?
So there's the quantum algorithm side, which is, okay, we found this new great algorithm.
And then there's the dequantizing side, which is, we have a classical algorithm that can
do just as well.
So are these communities duking it out?
Often it's the quantum algorithms people who are doing both the proving and the dequantizing.
And so in that sense maybe not, but there's been a couple of high profile cases where
there's been a team of classical algorithms people to argue about certain claims. Do you
think that quantum computing is going to fulfill its promise? I mean, I know that's a huge
question because I don't think anybody knows yet, but given what you've been working on,
do you think quantum computing will fulfill its promise? I think quantum computing definitely has potential.
They make these really big claims that I don't necessarily believe will come to pass.
But overall, I think the reasoning for why you could expect quantum computers to work is pretty
sound. And I think the only reason that it will not come to pass is if there are like things that are maybe sociological in nature and outside of the scope of my expertise.
So, like a real scientist, you're not taking sides.
That's right.
I actually want to talk about what comes next, which are future directions. You've been working more recently on physical systems, and I wonder how all the work that has led to this point influences
or allows you to make progress on thinking about more natural systems.
Can you tell us about this?
I started working on physical systems kind of as a consequence of the de-quantizing work.
You know, this de-quantizing stuff made me feel a little bit like potentially there could
be other things that are more exciting or like have potential for near-term impact. And so I started thinking about these applications
related to trying to simulate physics and so on. We have some belief that quantum computers can
simulate quantum physics faster than classical computers. And it is true for certain kinds of
contexts, at least if you believe quantum computers work at all, then certain kinds of simulation problems can be done on these quantum computers. But there's
actually a weird kind of gap between what quantum computers can do and if you ask somebody who's
interested in solving problems related to these systems, what they actually care about, the
practitioners, I guess. For example, the thing that we know how to do as quantum computing,
quantum algorithms people, is we know how to simulate dynamics of systems,
so how a system evolves in time.
Whereas typically, if you talk to somebody
who cares about superconducting materials or chemical
reactions, they care about ground states.
They care about static properties of systems.
These are the states that you get if you leave a system for a long period of time. These things are sort of different
from the dynamical properties. And we actually don't have nearly as good of an understanding
of these static properties as we do these dynamical properties, these dynamical algorithms.
And so my work has shifted to thinking about various tasks around manipulating these
systems and trying to understand what we can do with them. And to begin with, we don't actually
know that much about quantum information theory, the behavior of these big quantum systems.
And so designing algorithms and trying to find good applications goes hand in hand with just
trying to understand the objects in the first place
So let me see if I understand you're trying to use classical algorithms now
To model quantum physical systems, or are you also using quantum algorithms to understand quantum systems?
It's a little bit of both. So
the stuff I've been working on recently has been in the realm of learning and simulating quantum systems. And here the setup is that I have some system on a quantum
computer that I don't know, some unknown thing, and I want to extract information from it.
I want to understand the underlying mechanics of the system. Or conversely, I have some
mechanics of a system and I want to simulate on a quantum computer what it's doing.
So can you give me an example of the kind of system?
What would the system be?
So I've been thinking a lot about Gibbs states or systems at thermal equilibrium.
So does this mean a room full of quantum particles that have come to equilibrium?
Yeah, yeah.
So you're modeling them and what kinds of dynamical properties can you extract?
So general simulation tasks for these dynamical properties, you can, for example, estimate
the energy, you can estimate correlation functions, like what's the correlation between two different
particles that are far away or something.
So these are things that you can do if I just care about what happens when I evolve my system
with time.
But if I consider my system at equilibrium, then the question becomes a little bit more
difficult and you have to solve a simulation task and actually be able to say, okay, I'm
going to prepare my system, prepare my quantum particles.
And then once I have that, then I can just do the measurement that I want to do.
Somehow this is an easier part than actually preparing the system in the first place.
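[Editor's note: for readers curious what a Gibbs state is concretely, here is a tiny classical sketch for a two-spin system: the thermal state is rho = exp(-beta H) / Z, and static properties like energy and correlations are expectation values in that state. The Hamiltonian below is a made-up example; quantum algorithms aim at systems far too large to diagonalize like this.]

```python
import numpy as np

# Pauli matrices and a toy two-spin Hamiltonian H = Z(x)Z + 0.5 (X(x)I + I(x)X).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

# The Gibbs state at inverse temperature beta: rho = exp(-beta H) / Z,
# built from the eigendecomposition of H.
beta = 1.0
evals, evecs = np.linalg.eigh(H)
boltzmann = np.exp(-beta * evals)
rho = (evecs * (boltzmann / boltzmann.sum())) @ evecs.T

# Static properties are expectation values in rho: for example, the energy
# and the ZZ correlation between the two spins.
energy = np.trace(rho @ H)
zz_corr = np.trace(rho @ np.kron(Z, Z))
print("energy:", energy, "ZZ correlation:", zz_corr)
```

The measurement at the end is the easy part, as Tang says; the hard part for large systems is preparing (or classically representing) rho at all, since its dimension grows exponentially with the number of particles.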
When people hear scientists talk about I'm preparing a system, it's all very abstract.
It's hard to say what the physical system is. Maybe these are quantum particles spinning or
why are we doing this? To people on the outside it could sound like we set up a little game for ourselves.
Here's the game, right? The game is to take a spin system and extract properties.
game for ourselves. Here's the game, right? The game is to take a spin system and extract properties. But I think that we're all actually motivated by trying to answer some bigger
question. Why are we doing this? It's more than just spending our time on a cool game,
right? So what are those bigger questions that are at stake when you're looking at these
quantum systems?
That's a great question. I'm currently on the job market right now,
so I'm having to answer questions like this
pretty regularly.
I guess I could take it from two different angles.
One of them is the more practical angle,
which is that there are certain kinds of problems
that when you abstract details away,
they are about the problem that I stated.
For example, something that I think the Microsoft group is hoping to do with quantum computers
is understand nitrogenases.
I don't know what those are.
New to me.
So nitrogenases are what allows you to make fertilizer.
And when people talk about adding nitrogen to the soil, what they mean is they're taking
some kind of nitrogen gas
and they're doing some chemical reaction
and the nitrogen ends up in the soil.
And actually, we do this when we're trying to extract
nitrogen from food and things like this.
And the sort of behavior that underlies
these sorts of reactions is not very well understood.
For example, I think I saw a talk on this recently,
there are certain kinds of transition metals
that appear in the complex,
and they're somewhat far away from where the action is happening in the chemical reaction,
but somehow they're very important in the behavior of these molecules and these reactions.
And this is in the service of finding better, more efficient ways of producing things like fertilizer.
And producing fertilizer is actually really energy intensive. So this scientific task
can be reduced down to: can we build a quantum computer that
can actually simulate this chemical reaction slowly,
sort of break it down into pieces
that we can understand?
And if we can understand this better,
then maybe we can then engineer better chemical reactions,
engineer better ways to perform this procedure.
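That "break it down into pieces" idea is, in spirit, what Trotterized simulation does: approximate evolution under a full Hamiltonian by alternating short evolutions under its simpler parts. A toy classical sketch of the idea (this single-qubit example is purely illustrative, not anything from the episode):

```python
import numpy as np

def u(H, t):
    """Exact propagator exp(-i H t) for a Hermitian matrix H,
    computed via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

# Two non-commuting pieces of a toy one-qubit Hamiltonian: H = X + Z.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X + Z

t, n = 1.0, 100
exact = u(H, t)

# Trotter: alternate evolution under X and Z in n small steps of t/n.
step = u(X, t / n) @ u(Z, t / n)
trotter = np.linalg.matrix_power(step, n)

# The approximation error shrinks as the number of steps n grows.
print(np.linalg.norm(trotter - exact))
```

Each small step is simple to carry out; the cost of the approximation is an error controlled by how badly the pieces fail to commute, which is why "slowly, piece by piece" is the right intuition.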
Ultimately, there are these real-world applications, consequences for people, not just what YouTube
video they're going to find themselves losing their day to.
So are you often thinking about this interplay, or for you is it really very much
blue skies?
I do think about it.
I mean, it feels like a common experience to me that you start off doing something because
maybe the math is cool, and then you have to ask yourself, is this actually going to
make some impact on some other areas?
I'm finding this especially interesting to do in quantum computing because I have some kind of understanding
of what matters in computer science. Then I look at what the physicists or
the chemists are interested in, and I have to try to piece together: is what
I'm doing actually helpful for the potential applications that I'm
envisioning in my head? What are the different kinds of things that people
care about? What actually leads to downstream impact? In what sense is that
actually important for understanding the things two layers of abstraction
down, or many layers of abstraction down?
It's a process of finding models and models and models from the real world, which is messy.
Yeah.
I mean, that's kind of the whole paradigm of life, right?
You start because it's fun and then you wonder what the actual value is.
And I wonder, with all of the conversations that have gotten really heated lately around
things like AI, how much of that is something that's relevant for you?
And do you ever wonder if some of the work you do will feed into that, whether it's taken
by you in that direction or by others in that direction?
Definitely I'm cognizant of this new era that we seem to be entering with regards to AI.
If you ask me whether quantum computing has a role to play in this whole landscape, I
would say I'm not sure.
There are a lot of aspects of current AI that really, really use the sort of weird
nature of the computational objects that we've
built up, you know. A lot of the architecture for these artificial intelligence systems
is based around training these linear algebra algorithms, which are then run on GPUs, these
graphics processing units. But I have no clue about where the technology of quantum computing will fit into this whole new thing.
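For a concrete picture of those "linear algebra algorithms": the forward pass of a single neural-network layer is one matrix multiplication plus a nonlinearity, which is exactly the workload GPUs accelerate. A minimal NumPy sketch (the sizes here are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(128)        # input activations
W = rng.standard_normal((64, 128))  # learned weight matrix
b = rng.standard_normal(64)         # learned bias

# One layer's forward pass: ReLU(Wx + b).
# A deep network is many of these matrix multiplications stacked,
# and training repeats them billions of times -- hence the GPUs.
h = np.maximum(0.0, W @ x + b)
print(h.shape)  # (64,)
```

The open question in the conversation is whether a quantum device could ever beat a GPU at this kind of workload, or whether quantum advantage, if it comes, will run through entirely different algorithms.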
Will it be better than a GPU in any regard for an algorithm?
Maybe the right algorithms are not
based on these sorts of current techniques
that are being used for classical computers.
I mean, these are all possible.
I don't know if I would bet money on it.
So I want to ask you a question that we sometimes ask here at The Joy of Why, and that is what
about your research brings you joy?
I like the collaboration aspect of it.
You're sort of able to explore these uncharted territories of research and you're also able to do it while spending time with people that are really smart and fun. And the sort of business of academia is one
that is nice enough to support this kind of enterprise and provides the whole
thing some meaning.
That's such an important point, particularly with what's going on in the
world now and the hostilities towards not just science but literally universities.
This seemingly abstract, blue-skies work has these important
consequences for the world that we're all living in.
Right, yeah. It's like exploring the space of possibilities
for the sake of trying to find new ideas.
That's beautiful. Thank you so much, Ewin.
Really fun to talk to you.
Now, I would have thought the answer to
"Will quantum computing make a big difference in AI?"
is, of course it will.
It's such an enormously powerful new way of computing,
if it ever comes to pass.
How could it not have an enormous impact?
But she's more open-minded, and I think that's reflected
in the results that you described.
Yeah, and she described being so open-minded herself without really having kind of a bet
on it.
I mean, just being completely open to how it's going to play out.
And she described that as true even of the people who are in quantum computing, that
a lot of them were playing both sides, looking at dequantizing as well as quantum algorithms,
just in the spirit of openness.
It's very uplifting, actually, this discussion you had with her.
Absolutely.
And I will say that element of blue-skies research is so misunderstood. It's
crucial for the trickle-down practical consequences. It's actually imperative that we have dreamers
who are just thinking about these very difficult abstract problems.
Yeah.
I mean, because there's this concept of mission-driven research where you know
what you're trying to do and then you throw money at it and you throw people and resources
and sometimes those things pan out and sometimes they don't.
But at least you know what you're shooting for.
Whereas pure curiosity-driven research, blue sky research, dreamy research, often has the
biggest payoff of all when it works, but a lot of
times it's a dud because you don't know what you're doing.
You're just dreaming.
But, I mean, why should taxpayer money support research that people generally find incomprehensible?
Yeah.
It's a fair question.
And there was a visionary after World War II named Vannevar Bush who had this idea that
if you supported the research enterprise in the United States in a really big way with
the generosity of the taxpayers, that good things would come from it, really good things,
but unpredictable good things.
And so for a long time, that was the philosophy in this country.
And I mean, we've got a lot of big payoffs from it, all kinds of cures for diseases. We've
got semiconductors making the chips in our cell phones and computers. I mean, we could
go on and on. All these things came from wild ideas that nobody anticipated. And it's because
genuine discovery is not predictable. And I don't know, you gotta trust us. We do deliver,
we the scientists. But if you don't let us do our thing, we're not gonna do our thing.
Right, yeah. Interesting. Steve, thanks for hanging out with me again. I really appreciate
it.
It's been a pleasure.
We'll see you at the next episode.
Can't wait.
Bye.
Bye-bye.
Thanks for listening.
If you're enjoying The Joy of Why and you're not already subscribed, hit the subscribe
or follow button where you're listening.
You can also leave a review for the show.
It helps people find this podcast.
Find articles, newsletters, videos, and more at quantamagazine.org.
The Joy of Why is a podcast from Quanta Magazine, an editorially independent publication supported
by the Simons Foundation.
Funding decisions by the Simons Foundation have no influence on the selection of topics,
guests, or other editorial decisions in this podcast or in Quanta Magazine.
The Joy of Why is produced by PRX Productions.
The production team is Caitlin Folds, Livia Brock, Genevieve Sponsler, and Merit Jacob.
The executive producer of PRX Productions is Jocelyn Gonzalez. Edwin Ochoa is our project manager.
From Quanta Magazine, Simon France and Samir Patel provide editorial guidance with support from Matt Carlstrom,
Samuel Velasco, Simone Barr, and Michael Cagniogolo.
Samir Patel is Quanta's editor-in-chief.
Our theme music is from APM Music.
The episode art is by Peter Greenwood, and our logo is by Jackie King and Christina Armitage.
Special thanks to the Columbia Journalism School and the Cornell Broadcast Studios.
I'm your host, Janna Levin.
If you have any questions or comments for us, please email us at quanta at simonsfoundation.org.
Thanks for listening.