@HPC Podcast Archives - OrionX.net - @HPCpodcast-108: Olivier Ezratty on State of Quantum Computing – In Depth
Episode Date: April 28, 2026
Olivier Ezratty, one of the most respected independent voices in quantum technologies, is the special guest of the @HPCpodcast in a lively, candid, and wide-ranging discussion of the state of the quantum industry.
Transcript
It looks like the hardware progress is going to be the critical path.
It's a three-way race, even though Europe is very fragmented compared to the US and China.
You have a surprising lead of non-U.S. countries.
Because I organize the book in five different zones.
So one is what I call prologue.
It's a joke because the prologue is more than 300 pages.
Because useful doesn't mean deployed.
Useful doesn't mean scalable.
In the end, what costs more is the electronics and not the cryogenics.
They have a huge ecosystem that is totally unknown in Europe and in the US.
From OrionX.net, this is the @HPCpodcast. Join Shaheen Khan and Doug Black as they discuss
supercomputing, AI, quantum technologies, and the applications, markets, and policies that shape them.
Thank you for being with us.
Hi, everyone. Welcome to the @HPCpodcast. I'm Doug Black, and with me is Shaheen Khan.
Today, we're very happy to have with us a noted authority on quantum computing, quantum technologies.
Olivier Ezratty.
Now, it's not easy to quickly sum up Olivier's career and current activities because he is doing and has done so many things.
But suffice it to say that Olivier is based in France and internationally active.
He is an author, professor, and advisor on quantum technologies working with businesses
and public services. He is also very well known for his open source book, Understanding Quantum Technologies,
which is in its eighth edition, most recently published in 2025. So Olivier, welcome.
Thank you, Doug. It's a delight to have you, Olivier. I've been following your work for a number of
years now. So really excited to have this opportunity to have a more in-depth discussion. Thank you for
making the time. Yeah. Thank you for hosting me. And it was a pleasure to meet you in
Santa Clara last time. Yes. At the Q2B. That's right. Okay. So, Olivier, if you would,
please give us your general view on the overall state of quantum computing and other quantum
technologies presently. Well, I've been tracking this domain for the last eight years,
so I can at least make a comparison between the beginnings of my journey there and where we are
now. Of course, there was a lot of progress. I remember that. I remember
Ten years ago, we had the first IBM system put on the cloud, with 5 qubits, I think.
Then we've seen a lot of progress across all modalities.
What I observed, which was striking, was that in this kind of horse race, the order is changing every two years.
That's so funny.
So initially, it was about mostly superconducting qubits because of IBM and Google.
Then we heard a lot about trapped ions.
Then recently, at least since '23,
we hear a lot about cold atoms.
And maybe someday silicon qubits
will jump in the race to a better position than today.
We don't know yet.
And probably I've got some hope
that also photonic qubits may become a very challenging technology
compared to the four others I just mentioned.
There was a lot of progress also,
both in the science and technology,
in the software realm. Many, many things happening in the algorithm domain, mostly in the NISQ
era at the beginning. So there's been a lot of work being done on variational algorithms, which I
don't like very much personally, because I know about all their limitations. They don't scale very well,
the number of circuit shots is huge, particularly for VQE for doing chemical simulations, and there are
a lot of scaling problems. So I believe a lot in FTQC, so creating logical
qubits and fault tolerance, which is in the making right now.
So it's still going on.
So what I'm expecting for the future is all the progress that vendors and academics
are trying to implement in the FTQC domain.
Fault tolerant quantum computing, yes.
Yeah, exactly.
Yeah.
Now, can you give us, again, from a high-level perspective,
are you generally encouraged, impressed by progress that you're seeing
or are things not moving ahead, as you would wish?
Well, I'm kind of in between.
I'm kind of wishy-washy.
I don't believe the claim that everything is exponential
and we've seen so much progress in the last 12 months.
Every year you hear that.
If you look objectively at, let's say, the qubit count
or even the qubit fidelities,
the progress is not that stellar.
It does progress, and it depends on the qubit modality.
But what I remember is when I started eight years ago,
I mean, many of the promises from some vendors like IonQ, Rigetti and others,
were that they would soon have more than 100 qubits.
And right now, they still don't have that, eight years later.
So these are two vendors.
Maybe it's an exception.
But we could even take Google.
Google, they did their famous Sycamore experiment in October 2019, when it was published.
And it was 53 qubits.
You had to wait five years to get to double that.
So there's no Moore's Law, or at least Moore's
Law is slow in that space.
And they had some increase in the qubit fidelities, but what I see, at least at the hardware level,
is how painful it is for all these scientists and vendors to increase both the quality
and the quantity of the qubits.
It's one against the other.
So I've seen amazing results with trapped ions recently, with three nines and four nines
of qubit fidelities, gate fidelities.
So it's the fidelity, so it's 99.9% or 99.99%,
which is very good, but all this has been obtained only with a very small number of physical qubits,
sometimes only two. So I'm expecting, I hope, and it's just hope, because the science is different
than hope, but I hope that this is going to happen on a larger number of physical qubits, and then
we can scale that to a larger number of physical qubits to implement FTQC. But it's, I think it's
painful. It's very hard. And particularly when you see all these papers being published on the resource
estimates for breaking RSA, breaking ECC, and Bitcoin, you see a kind of dual phenomenon.
One is thanks to progress in error correction and some tricks, you're able to decrease
the number of physical qubits that are required to break those keys.
But if you look at the hardware progress meanwhile, it's been much slower.
So it looks like the hardware progress is going to be the critical path rather than improving error
correction because error correction improves a lot.
So even the smallest resource estimates that we have right now, for breaking RSA or even doing
useful chemical simulation, are still in the tens of thousands of physical qubits.
And moving up from 50 or 100 or so physical qubits to that level is really, I mean, it's uncharted
territory, even for physics.
I mean, experimental physics has never tried to control such a large number
of entangled objects.
So it's like we are doing technology development
and trying to find the limits of the Heisenberg cut simultaneously
which is a scientific quest, I would say,
and a technology quest altogether.
So it's hard to talk about quantum computing
without talking about modalities.
Of course, yeah.
And one observation is that regardless of modality,
some things are easier and some things are harder.
And the combination of qubit count
and speed and coherence and connectivity and depth and what else did I have, just the specific
functions that something is good at and not good at, and then scaling all of that, the physical
to logical ratio, that there's a laundry list of qualifications that need to happen at the same
time, and nothing can do it all. If you do this, you can't do that. If you do that, you can't do
this. But I think that's also what makes advances harder to assess,
because somebody solves a problem for their modality that was already solved in a different modality
that has a different problem. When you look at all of that, that's the usual question,
does one modality emerge as more promising?
Well, at some point, people have beliefs there. So of course, if you talk to somebody in the trapped
ion world, they say they're going to win and the others will lose. Same for photons, same for
superconducting qubits, so everybody has complaints about the others. So,
When you look objectively, they all have some pain, some technology and scientific pain to scale.
And it looks like when you look at the detail, particularly on resource estimates,
that there's a kind of equalizing in the end.
So let me give you an example between the slow and the fast qubits.
So we know that solid-state qubits are fast, because they have fast gates and fast operations.
And the slow qubits, like cold atoms and trapped ions, for various reasons, they are slow.
But when you account for all the costs, the cost of error correction,
the benefit from using so-called non-local error-correcting codes,
which have a lower encoding overhead than the one you have with silicon or superconducting
qubits, when you account for the so-called constant-time-overhead codes,
which scale much better in computing time compared to the non-constant-time-overhead codes
that you have with so-called surface codes, with what Google is trying to do,
when you embed everything and you look at the literature,
in the end, for large-circuit execution,
it ends up being the same kind of computing time or same range.
So there's a kind of equalizing.
I can give you another very counterintuitive example,
which is more about one of my favorite topics,
which is energetics, which is one of the things I do,
because I co-founded the Quantum Energy Initiative.
So I can give you one or two counterintuitive examples.
One is cryogenics.
So when observers look at the cost of a quantum computer, they fear that cryogenics is a big pain point.
And of course, when you look at an IBM or Google or Alice & Bob or an IQM system,
the cryogenics looks impressive because it's what's visible.
But most of the time you don't care about all the racks of electronics
which are next door to the cryogenics.
And what people don't know yet is that when you scale those systems to
the FTQC realm with thousands of physical qubits,
in the end, what costs more is the electronics, not the cryogenics.
There's a reason for that, at the fundamental level.
At the fundamental level, when you look at the scales,
you need more electricity to power the electronics
that creates the qubit drive signals than even to cool the stuff
that's in the cryostat.
So it's counterintuitive.
And the other way around, let's take photonic qubits.
If you take photonic qubits,
and PsiQuantum, as an example,
PsiQuantum, they say,
oh, we don't need a cryostat
that cools our qubits at 10 millikelvin.
Okay, they use only 4 Kelvin.
But in the end,
for running their system,
whether it's going to be 100 or 1,000
or a couple of thousand logical qubits,
they will need more energy
and more cryogenics than IBM and Google altogether.
And the systems they are installing in Brisbane in Australia,
and in Chicago in the US,
they both will need more than 100 megawatts,
which is way beyond what an IBM system is supposed to consume.
IBM has published their numbers.
It's less than 10 megawatts.
So it's counterintuitive.
It's hotter, but you need more cooling power.
There's a reason why.
The reason why is that they have to cool everything.
They have to cool the photonic source.
They have to cool the circuit where the actual computing happens,
and they've got to cool the photon detectors.
So they need to cool a large set of sources and
circuits. They don't cool the routers, but they cool a lot of stuff.
So they cool a big mass of systems.
And in the end, you need the cooling power of CERN, the LHC at CERN, or
ITER, the big nuclear fusion system.
So it's so counterintuitive.
So hotter means surprisingly more expensive because of all the aspects that you have
to embed in that computation.
So one big revelation I got from one of your talks was indeed what you're just pointing to.
And my takeaway was that it was a question of scale, that you kind of do a comparison for one
qubit and then for five, but then if you make it a million, different parts of the system
scale differently, as is usually the case.
And in fact, the energy part may not be the part that is constant or grows really slowly.
It may actually grow even faster.
Yeah, it depends on the technology, of course.
It depends also on your assumptions.
Because if you look at the vendors' roadmap,
for the few ones which are advertising the energy consumption of their systems,
there are only a few, not many of them.
They are basing this on various assumptions.
Sometimes they're optimistic, sometimes they're not.
And in a lot of situations,
they are already betting the farm on some technology progress
that has not happened yet.
Improvements in cryogenics, for example, improvements in electronics power consumption,
improvements in cable multiplexing, for example, signal multiplexing and stuff like that.
But there are still some optimizations which can be done on top of their plans.
And there can be also, I hope, a lot of good surprises.
For example, I was hitting hard on PsiQuantum, but I know that they're trying to improve
their circuits to reduce the so-called photon losses.
because all the efficiency of the system depends on the losses.
And they've been doing amazing progress recently in the last three years
with the work they do at GlobalFoundries, I think in the US, in Malta, New York.
And they've been able to reduce the losses to a very good level,
but they still need to even further improve the loss level in their circuits.
But it's going in the good direction.
But when you make an objective comparison,
the systems are surprisingly very different.
The scale of energy that I showcased at Santa Clara is
amazing. You have a scale of 1 to 1,000. It was a pretty wide scale. That's right.
In power difference, it's a very wide scale. The only case where you could find that in classical
technology would be a kind of uneven comparison between an Nvidia GPU or a set of Nvidia
clusters and the so-called neuromorphic chips or synaptic chips for AI. But they don't do the same
stuff. They don't do the training. They are very good for embedded systems. They are not deployed yet
at scale, but are supposed also to have a 1,000x gain in energy consumption compared to a
classical system running on, let's say, an Nvidia GPU.
So this is the only case I know there's such a big gap in a given technology domain.
And my belief, that's what I explained in Santa Clara, is at some point this gap is so big
that if those systems deliver the same value, computing-wise, I mean, in computing time and
computing value and the size of circuit you can
run, I think in the end, the energy, on top of the
CAPEX, I mean, the cost of the system, is going to be a determining
competitive figure of merit, which will sort out the good from the bad.
Yeah. So two or three things come from this conversation, I'll come to,
but just focusing on the energy part. As you said, my point was that
different parts of systems scale differently depending on the modality.
Exactly. And that is part of the challenge because you do need scale and
nobody has quite done it.
But you also had done some studies of traditional supercomputer energy use
at various supercomputers national labs,
and then compare that with what quantum computers would need to do.
Yeah, what the chart I've been doing shows is that the spread of power consumption
between future FTQC systems kind of covers the existing spread of HPC systems.
So I can give you numbers.
If you take the top 10 HPCs in the world, supercomputers,
the power range is between 10 megawatts and 38 megawatts.
Most of the systems are in the US, a couple of ones are in Europe,
one or two in China and Japan.
And this is only 10 systems in the world.
And then if you take the systems which need a power from 1 megawatt to 10 megawatts,
it's about 100 in total maximum.
Yes.
It's not a lot.
So you have 100 systems consuming more than 1 megawatt in the world.
And now if you look at the
spread of power consumption of future FTQC systems, so quantum systems, it spreads between
less than one megawatt, up to a couple hundred megawatts. So the spread is larger,
but if you look at the way it spreads, it's kind of overlapping the spread of the current
HPC system, but it's going beyond both ways. It can be lower, where you could have a potential
quantum energy advantage with some technology if they work. And the other way around, it can be worse
than the largest systems.
And so that's a very surprising observation.
Then there is another one which is a kind of gut feeling,
which is complicated, is can we guess who's going to work best?
What I mean work is not from the cost perspective,
even from the power perspective,
but actually work at quantum scale.
So is there a physics or physical law
which will favor trapped ions,
cold atoms, silicon qubits, superconducting qubits, or photonic qubits, or not?
And there's another very old law that I'm kind of wondering about.
It's Darwin.
Darwin said, only the fittest survive.
So would that law work for quantum computing?
It's a question.
I don't have the answer.
I would prefer that the fittest survive, meaning the fittest is the one which consumes
the least power and drives the creation of a sizable market with a lot of democratization of
the usage of quantum computing, if it makes sense, if the algorithms are there, because that's another
question.
So if the fittest survive, it's going to be a much larger, much broader, probably a more diverse
market.
And when I mean diverse, it's not like AI.
It's diverse, meaning the kinds of applications you will run on those systems.
Because otherwise, if the systems are too
expensive, they can be only used by governments and the public research centers. And they're going
to be used on the cloud, okay, but they're going to be rented to only a few users. It's going to
cost a lot. So it's going to be very elitist in the way it's being used. So if you want the technology
to become mainstream, even a B2B technology, you need to have a reasonable price. It must be
affordable at some point. So that's why I like Darwin in that case. Right. But nature gets to decide
what it means by fit.
Exactly.
No, it's not fit.
It's the fittest.
It's a comparison.
Well, but it defines the metric and then go.
Maybe we're comparing different dinosaurs,
but still we want the smaller ones.
So in this case, for nature,
we're really talking about the market, I assume.
Exactly.
Yep.
Yeah.
Now, the market is famous for not picking the fittest,
or defining it in a way that people don't like,
with the Betamax and VHS being the proverbial example.
So hopefully the market will do the right thing.
In any organization, particularly in the industry.
Yes.
I mean, people, they have P&L.
They have got to look at the cost.
And usually you don't buy the most expensive stuff
unless you're buying luxury products like a big watch or a nice car.
But if you are in a business, you look at the return on investment.
You make an economic reasoning and you try to have, I mean,
the features.
to fit the need.
And if there's not a functional difference
between two systems,
you buy the cheapest one.
And I mean, if there's no difference
in what it delivers.
Yeah.
It's a cost-effectiveness.
So any organization,
particularly right now,
I've been doing that a long time ago
when I was working at Microsoft
more than 20 years ago.
I learned that IT
is still an expense
in most organizations.
So quantum computing
is going to be part of R&D and IT, depending on the organization.
So if you are in business operations, it's going to be an IT stuff.
If you are in research, it's going to be more investment in R&D.
But in both cases, the budgets are constrained.
And so people will look at what's the most affordable solution in the end.
Olivier, now tell us about your open source book,
Understanding Quantum Technologies, which is readily findable on Amazon.
It's free to download as well.
And free to download as a PDF.
Yeah, yeah, yeah.
It's on my website.
It's on arXiv as well, even though on arXiv it's not the latest version, because it's a pain to update that on arXiv.
If you want the paper, you can buy it on Amazon indeed.
Yeah.
So along with maybe describe the book and its purpose, but also why did you publish it under a Creative Commons license as open source?
I've been doing that for 20 years now.
I wrote a lot of books before I was in quantum.
I started to write a book on entrepreneurship because I went in the startup ecosystem in France.
It was in French.
Then I wrote a book on Consumer Electronics.
It was actually a book that's very famous only in the French-speaking countries.
It was a book which was a big report on the CES from Las Vegas, where I was going to Vegas every year in January.
But it was a companion of everything happening in the consumer electronic stuff.
So I was looking at digital television, photography, semiconductors, everything, from tech to marketing.
And it was very appreciated.
And then 10 years ago, I started in AI.
I did the same in AI.
The big change which happened with quantum technologies starting eight years ago when I started the first edition of the book is first there was a lot of science in that book.
Because quantum technologies is about science.
It's highly complicated compared to the other topics I mentioned.
And then I made a decision in 21 to switch from French to English,
which was a turning point in my case,
because it made a book available to, of course, a wider audience.
In the beginning, it was hard because moving from French to English,
all the French complained.
All the French people, they complained,
oh, I don't read English very well.
I say, too bad for you.
If you want to be a good specialist in quantum, you need to learn English,
if it's not the case yet.
And so then the audience grew across multiple countries, all time zones included.
But then the book grew to unprecedented size, because my largest book before doing quantum
was in AI, and the largest edition, the last one, was 750 pages.
And now I'm more than double that in quantum, because it's more than 1,500 pages.
So it's huge.
But I mean, you can download this into your own LLM or notebook
or whatever. You can download the book in five parts.
You can have a five-part printed edition.
Because I organized the book in five different zones.
So one is what I call prologue.
It's a joke because the prologue is more than 300 pages.
It's the basics of quantum physics history, all the hall of fame,
and the basics of quantum physics and mathematics and linear algebra and what's the qubit and blah, blah.
Then there's energetics inside that and all the transversal issues like error correction.
The second book is on the hardware, quantum computing hardware.
The third book is only on software, but it covers algorithms first,
then software tools, then applications.
So 300 pages.
And then the fourth one is on telecoms and sensing,
which is the under-covered topic in quantum technologies.
And the last one is more geopolitics and social issues.
So it's spread across a lot of topics, which is always,
I've been always doing that in all the books I worked on in the past,
working always in a 360-degree fashion on a given topic.
And I share it because that's been my, I would say my intellectual and business model for 20 years now,
for all the topics I've been covering.
And it's been successful because, like I explained in the book, people who have time,
don't have the money to pay anything.
So students, for example.
And so it creates a huge audience of people who are students.
And people who don't have the time, but they have the money,
they don't have time to read the book, and they ask me to do the short version,
which I get paid for. So it's an asymmetric model which works very well. And also there's another
reason which is not business wise, which is I believe in the dissemination of knowledge and science.
We are in a new world where the knowledge should be available, at least common knowledge.
And for all the topics I've been covering in my journey in the digital world,
it was always about helping people discover a new world.
So publishing an open content book in a new domain
is a way to make it accessible to a broad audience.
The other reason I found with experience
was that the more you publish, the more you distribute,
the more people want to help you,
the more opportunity you have to meet interesting people,
both in the academic and the industry domains.
And in the case of quantum, it's been the pinnacle of my personal journey in tech.
I've never met so many interesting people than in quantum compared to my other experience.
I met several Nobel laureates so far in physics.
I met so many good scientists everywhere in the world.
I met all the people from many, many, many companies, not all, but many, many companies throughout the world.
And so I enjoyed that experience to meet interesting people, learn from them.
Of course, I learned from other sources and reading papers.
But learning from the people, having a good network of people, thanks to being a publisher, is a very good helper.
You mentioned that an aspect of your book is techno-politics or the geopolitics of quantum,
and I'm curious if you have views on, we hear a lot about the AI race, which typically focuses on the U.S. versus China, would you say the stakes are equally high in the quantum race between the U.S. and China? And would their models, their R&D models, which in China are seen as very centralized versus...
You forgot something in your question. You know what?
Go ahead. I mean Europe.
Okay.
We are in between.
We don't see just the planes flying over us.
We are a region.
It's a three-way race.
It's a three-way race, even though Europe is very fragmented compared to the US and China.
But still, Europe has a place to take in that world, which, of course, may be dominated by the US at some point.
But there are some jewels in Europe that we shouldn't forget.
But whatever.
Yeah, there's a race.
I would not rate the quantum race as being as important as the AI race.
I think the AI race is much more important, more impactful on the lives of everybody.
It's going to change a lot of different jobs.
All the knowledge working activities are going to be entirely changed due to AI.
The race is a bit different.
There are two different aspects for the race in quantum.
One is everything that relates to cybersecurity.
Because one of the motivations for governments to fund the quantum race, particularly in the US, was the Shor algorithm.
It was a defining moment.
The idea to create a quantum computer came in the early 80s.
So it's a mix of Paul Benioff, Yuri Manin, Richard Feynman, between '79 and '81.
Then David Deutsch kind of laid out the basic theory of quantum computing in '85.
But it took nine years for governments to say, uh-huh, this is
interesting. That's because of Peter Shor. Because his algorithm drove a mix of fear and hope,
with people saying, oh, we want to have this weapon. So every government said, we want to be the first
to get to have this weapon. So, I mean, the federal government in the US, whether
it was a Democrat or a Republican, they were willing to get their hands on that technology. So there
was a lot of funding. And surprisingly, it drove a lot of excitement and a lot of interest. If you look,
for example, at the first trapped ion
quantum computers, they were created just after
that. I know the folks who
did that, like Rainer Blatt in Innsbruck,
with Peter Zoller and Ignacio Cirac, they created the first
trapped ion system in '95-'96,
just after Shor. And
Shor was a motivation to do that because there was
this kind of fear and hope altogether.
So that drove
a lot of interest. Same for superconducting qubits.
I mean, the first one was created
in 1999 by Yasunobu Nakamura in Japan,
and then in France in 2002, and then in the US at Yale.
And so the race came from that.
The second reason I see, in Paris,
that's why in Europe we are interested in this,
is it's a new playing ground.
I mean, we don't know yet what's going to be the winning technology.
There may be a couple winning technologies, not just one.
there's still hope that somebody somewhere can find a way,
mostly with a collaborative work,
but can find a way to be a company that counts in this new market.
So there's hope that a new ground creates new hope
for getting a slice of this new pie.
That's the second reason.
Then there is a third one, which is kind of wishy-washy,
which is every government is
over-selling and over-amplifying their investment in that domain.
And some governments are honest, I would say, particularly the US.
The US is kind of honest in the publicity they do at the federal level
on the government investment and what they actually do.
It's kind of synchronized.
But in some countries, there's a big gap.
In some countries, you have numbers which are floating around
and the real numbers are much lower.
I won't name names, but it happens in several countries.
And the most mysterious one is China,
because you see numbers popping here and there.
And nobody actually knows what's the real number.
Even China doesn't know or even people from China
know that the public numbers which don't necessarily come from China,
by the way, are not the right ones.
And so there's kind of a fuzzy understanding
of the situation in China,
but which, by the way,
is true for all their economics.
I mean, many numbers are wrong and fake in the economics of this kind of country.
So you never know really what's the good number, but not just in quantum.
There's another example which I hate, which came from last year.
Last year, around May, Japan announced a $7 billion
plan. That plan was embedding a huge industry investment,
in semiconductor manufacturing, particularly for memory and flash memory.
And there was some quantum stuff inside that plan.
So what happens is everybody believes that Japan has a $7 billion quantum plan,
which is not true.
So the trick of the trade in some countries is to merge quantum
in a broader technology quest, typically with AI or semiconductors.
And then you have a big plan which includes quantum,
and everybody is getting confused about what's a real quantum plan.
It's just a bigger number that covers a lot of different things, but includes.
Yeah, yeah.
So you got a lot of wrong numbers.
So this weekend, I did an experiment.
I dumped all the arXiv metadata on my laptop.
And I did some analysis using Python and stuff like that.
And I found out that I could make a comparison between the number of papers and
the number of authors publishing papers on arXiv between
AI and quantum.
And of course, there's a gap.
The gap is about 1 to 20.
Oh, smaller than I thought, actually.
It's not that big, given the importance,
the current importance of AI.
And there's always a trick, because the way you manage your keywords
when you interrogate the database may bias the result, but whatever.
So it looks like quantum is not that small compared to AI,
but still it's 5% of the total.
So it's a much smaller ecosystem in academics.
So if you count the number of authors
and I've seen some other sources
trying to extract the same number,
I think there are about 20,000-ish authors
of quantum papers right now.
That's really interesting analysis.
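As a rough illustration of the kind of arXiv metadata comparison Olivier describes, here is a minimal sketch. It assumes the public arXiv metadata snapshot in JSON-lines form (for example the arxiv-metadata-oai-snapshot.json dump) and its "categories" and "authors_parsed" fields; the category sets are placeholder choices, not his exact query.

```python
# Minimal sketch: compare paper and author counts for quantum vs AI on arXiv.
# Assumes the JSON-lines arXiv metadata snapshot; field names are assumptions.
import json

quantum_cats = {"quant-ph"}
ai_cats = {"cs.AI", "cs.LG", "stat.ML"}

papers = {"quantum": 0, "ai": 0}
authors = {"quantum": set(), "ai": set()}

with open("arxiv-metadata-oai-snapshot.json") as f:
    for line in f:
        rec = json.loads(line)
        cats = set(rec.get("categories", "").split())
        names = {" ".join(a).strip() for a in rec.get("authors_parsed", [])}
        if cats & quantum_cats:
            papers["quantum"] += 1
            authors["quantum"] |= names
        if cats & ai_cats:
            papers["ai"] += 1
            authors["ai"] |= names

print(papers, {field: len(s) for field, s in authors.items()})
```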
One thing that I observe is that,
unlike AI, that almost crept up on people,
it was like not there
and then it's suddenly there
and it works better than you thought it does
and it's using GPUs and, oh, I thought those were just for graphics.
Quantum computing has been such a clean split
that everybody has seen the movie before
and everybody has quantum physicists, all the countries, I mean.
So therefore, there's an opportunity to participate.
The barriers to entry have not risen yet.
So I see quantum computing efforts everywhere in the world,
whereas it is much harder to see semiconductor manufacturing at the high end
everywhere in the world.
Yeah, there's something, I don't know if so much different compared to classical computing,
but at least there are two things.
One is, of course, what we call enabling technologies.
There's a lot of value in the electronics, in the photonics and the cryogenics.
And if you look at the cost of a quantum computer, that costs a lot.
And so there's value in being a leader in that space.
And so, for example, if you look at the leaders in some of the quantum
technologies, you have a surprising lead of non-U.S. countries. Who's the number one leader in the world
in cryogenics? You know? Bluefors. A Finnish company. Finland. I don't know if the Americans know
where Finland is. So, I mean, it's a five-million-inhabitant country. Five million people.
It's an important country, yes. This is a small country. Worldwide leader in cryogenics,
and they have IQM on top of that. Who's one of the worldwide leaders in control electronics for
superconducting qubits? It's an Israeli company, Quantum
Machines. Who's the worldwide leader in lasers? It's a German company, Toptica.
Who's the worldwide leader in so-called error syndrome detection in error correction? It's a
smaller business, but still it's an important one. Riverlane, a UK company.
So who's the worldwide leader in superconducting cables? It's a Japanese company called
Coax Co. So when you look at all these different aspects,
You have not a big company.
It's not like a Google or IBM,
but it's a small company, a small business, we would say,
or Mittelstand in Germany, you would say,
but that company is not in the US necessarily.
And it can be even in a very small country.
Yeah, this is a great segue into,
I was going to ask you,
when we look at geopolitics of quantum computing
and technology in general,
the common conversation is really about how
Europe is falling behind. It's not investing as much. It's like, you know, you have to herd the cats and
it's too fragmented. But in fact, as you mentioned, you look at all the examples that you mentioned
plus Carl Zeiss, plus ASML, plus, you know, Ame, plus all, you know, this, that and the other,
there are several real strongholds. How do you assess the relative strengths of technology in a
geographic sense? Well, in the case of quantum or in general?
Let's start with quantum.
Yeah, in the case of quantum, the examples I gave show that there's some leadership
that can come from Europe, but sometimes it started, in the case of ASML, it started 40 years ago,
so it's a long story.
It takes time to create a leader.
And in the case of Toptica, it was created about 25 years ago, I think.
Yes.
So many of the examples, even rare isotopes, for example, we produce that in Europe as well.
So we don't depend on the US.
So that's another example with some raw materials.
So there are many situations where you can create leadership.
The big one of the old question we have is,
is it possible in quantum computing per se?
So with a quantum vendor.
Will IQM, Pasqal, Alice & Bob, Quandela and planqc or eleQtron or AQT,
will they thrive against the leaders from the US?
That's a way more difficult question.
My belief so far is that, of course, if you look at the funding,
the funding goes into the US in priority,
but that's where you get the money.
You have all these packs happening right now.
And the other observation I make,
based on the comparisons we used before,
the ASML, Topeka and others,
is if in some country there's a good ecosystem
that helps a company to strive with a good differentiation
at the technology and the scientific level,
it may help compete successfully
with China and the US. It may. It's not obvious, but it may help. If you don't have a technology
superiority, you lose. Because the financial power, the marketing power from this large geography
that is the US makes it much more difficult for a fragmented region like Europe. So you need to
have very good technology and scientific differentiation. And when I look at the details of the
strategies of all the companies, there's some hope for a couple of companies, maybe not 12 or 20,
But there's some hope from at least, let's say, five to ten companies in Europe, which could strive, but they would need funding.
Of course, there's a big project in Europe to create large-scale funding for these kinds of companies in deep tech.
It's, of course, the same obvious question about the funding.
What I believe strongly, and I've been advocating that with the government at the French level and the EU level, is that there's no unique market in Europe.
You've got to make a distinction between the market, the demand, and the offer.
It's very complicated to sell in countries where you have different language, different culture.
Maybe the same currency, the euro is the same currency, but it's hard to sell, particularly in the consumer space,
the same way in the US, in Germany, in France, in Spain, Italy.
The culture is different.
But what we can do is a bit like Airbus and similar companies:
build multinational companies which gather the cultures and the strengths from different countries
and create European companies.
I think that's easier than to sell products throughout Europe in a consistent way.
So at least from this perspective, there's some hope in Europe that we consolidate some of the
offering.
There are many ways to do that.
And we have to be faster, of course, because otherwise all the American companies will
acquire European companies.
So there's some hope that some stuff can be done.
it's about strategy.
So we have to be quiet about this.
But it's about building a strategy
and being able to gather the forces.
And you look, for example, one way to look at this
is to look at the supply chain.
Look at the supply chain,
what's upstream and downstream
when you create a quantum computer.
So upstream, you have all the hardware components,
like the ones I mentioned.
And downstream, you have the applications,
mostly software and software tools
and applications.
So when you build an ecosystem and you create a new company,
you have to build your ecosystem upward and downward or upstream downstream.
And in most cases, there are many companies which miss either one or both of them
when they build their ecosystem.
So that's what we have to learn from the past in AI and all the digital revolutions from
the last 40 years.
We've got to learn from that and build good ecosystems.
And it's the same for China, by the way.
China will have the same challenge.
If you look at China objectively right now,
they may have some big amounts invested in the public academic space.
We have Jian-Wei Pan, who's very famous, driving stuff in Hefei.
But if you look in details,
the strengths and weaknesses of China are way different from those from the US, for example.
So they were the best in the world to experiment at scale quantum communications.
Because in Europe and in the US,
there was a government kind of skepticism about the interest of using the so-called QKD
to distribute cryptography keys.
And particularly in the US, there was a belief that the NIST-driven PQC was the thing to do
and not quantum communication.
Now it's changing in the US and Europe because everybody knows that you need quantum
communication to connect quantum objects like quantum computers and quantum sensors.
It's changing.
So that's what the strength of China.
If you look at quantum computing, they fare not badly in photonics, even though not at a commercial level.
It's mostly experiments like boson sampling experiments, which are useless at this point in time in computing.
And they have a lot of things happening in superconducting qubits.
But so far, they've not done better than IBM and Google and even European companies.
So they are kind of imitating what the US are doing.
They've got a one-to-two-year lag, which is not a lot for China.
they can cope with that.
They even are trying to develop their own cryogenic fridges.
So they want to be independent.
So they develop a lot of custom electronics,
a lot of cryogenics on their own.
They want to be self-sufficient.
But so far, they're not as good as the Western world.
But never say never.
So it can change.
And if at some point they want to export,
they just sold a cold atom computer to Pakistan.
I wouldn't, well, it's a strange export,
but whatever.
They try to export some of the stuff they do in non-aligned countries.
So we have to watch carefully what they do.
And the third one in China, which is not very known and not widely discussed,
is what they do in sensing.
They have a huge ecosystem that is totally unknown in Europe and in the US,
which is an ecosystem of quantum sensing for the army.
So it's military.
There are many companies who don't have a Western name.
They just have the name of the city,
limited something and a complicated name, and maybe there are four names, four different names sometimes.
It's always very hard to Google them, and you can find them only on a search engine in China,
like the Baidu search engine. And if you want to know what they do, it's kind of secret, because they work for
the military. So that's something we don't know well yet. And sometimes they are also overselling what
they do. They say that they can detect nuclear submarines, they can detect stealth aircraft.
So it's hard to check that.
And the scientists are very skeptical about that, at least in Europe.
Yeah, I mean, advancement of science has historically benefited from free flow of information around the world.
Yeah.
And just kind of move forward.
And when you restrict that, when you add friction to that, it's a much harder challenge for all the players.
Yeah, yeah.
I've seen that coming.
Some scientists are really not happy with the situation.
They see export controls coming from the NATO countries, mostly.
And there's also a limitation in the exchanges, scientific exchanges we can have with China.
And we still have a lot of very good scientists in China.
I remember I saw something from Austin Fowler.
I don't know if you know this guy.
Austin Fowler, he was a surface code scientist working at Google.
He left Google last year, and he's on his own now.
And he was interviewed recently in a podcast that I watch.
Interestingly, he said something that is striking: it's so difficult to create a viable large-scale
quantum computer that it should be a worldwide journey, attracting all the talents in the world.
And so all the fragmentation, all the stuff happening in all the countries that want to be a leader
in the world and kind of ring-fence what they do against the others, this is not good for science,
because the science is very complicated. And I have been observing that on my own.
What I've been observing is that there are two things happening
which limit the speed at which we make progress.
One is too many people work in silos.
There's not enough interdisciplinarity across the disciplines
that I'm discovering when looking at the energetics of the systems
because when you look at the energetics,
you have to look at everything.
You need to look at the software, the compiler,
the error correction, control, the cryogenics,
the qubit themselves, the quantum physics,
quantum thermodynamics.
You need to look at all the stack.
But you need to have people working together.
You need to create a kind of holistic model which embeds all the figures of merit.
So it's complicated. Unless you are in a large company like with IBM or Google or a large
startup, it's very hard to do in the academic world. So you need more collaboration between those
things. And the other thing we can observe, and that's what Austin was saying, and I observed
that on my own. There's a lot of duplication. So government funding everywhere funds the same stuff.
you've got the same teams doing the same stuff
and repeating the same experiments
and you lose
a lot of, I would say, efficiency.
But you can't control that,
unless you were the Chinese government or the USSR
government with a big hierarchical
control of everything. It's impossible.
But there's probably some
efficiencies, improvements to put in place
in part of the ecosystem
to make sure that there's more collaboration
to reach that. The problem
we have right now, compared to the
Manhattan Project or even the early computers, ENIAC and stuff like that.
What's happening is we went into a commercial domain kind of too quickly, but for good reasons.
There are good and bad reasons.
The good reason is you can't create a complex product in an academic setting.
There are very few places in the world where you can do that.
Even NASA, if you take NASA, the Apollo program was, I mean, it was Rockwell and Boeing,
if I remember well, and Lockheed Martin.
They were doing the Saturn 5 and everything.
NASA was an integrator.
It was not an industrial company.
So you still needed an industry to win the race to the moon.
And it's still a case today, by the way.
So if you look at the way the race was organized,
we wanted to create many startups everywhere in the world
because the governments had a kind of over-expectation
of the economic value brought about by quantum computers.
So they gave money to the startups instead of giving money to the academics.
The academic world is not well organized to create a product.
And so in the end, it creates a kind of strange loop,
where you create large teams in the commercial world,
so they have to bullshit what they do,
they've got to get an investor,
they've got to oversell what they do,
and they've got to be more closed than what you do in the academic world.
So what would be good?
It would be to be able to keep the openness,
the international openness of the academic research
and make sure that the academic world connects well with the industry world.
But it's a very competitive world,
so we have to accept that situation.
But there are probably some efficiencies
and collaboration.
Maybe that will come from some adverse effect,
which is called consolidation.
So market consolidation.
That's the big buzzword with IonQ buying Oxford Ionics
and Skywater or whoever.
So the consolidation may be a driving force
for making optimizations,
but the other thing we know from large organizations
is the larger the organization is,
the less efficient it is usually.
So it's complicated.
Olivier, very curious.
We've been hearing some reports from some vendors,
that they have customers that are actually doing useful work with quantum.
Do you have views in this part of the field?
It depends what you rate as a customer and what you rate as useful.
How real is this stuff is the question?
Yeah, yeah.
So I can tell you what I feel is real and what's not real.
What's real is, in the last two years, we've seen, I would say, case studies
being deployed or tested on D-Wave systems, on IBM systems,
on Quantinuum systems, to name the few leaders, maybe Pasqal and QuEra to some extent,
maybe Pascal and Quera to some extent,
where there was a kind of beginning of a quantum advantage
to solve very specific problems.
And the specific problems we know that were solved
in a near-quantum-advantage regime were mostly quantum physics problems.
Yes.
So it was about simulating
mainly material science stuff.
So ferromagnetism, spin models,
usually very theoretical models
which are running at zero Kelvin,
which is very cool.
And so it was, in most of the cases
I know about, helping
fundamental physics research do simulations
in a pure academic setting
without industry relevance.
That's where we are right now,
even though there are some people
from the classical computing world
using tensor networks, who are kind of arguing about what can be done classically to do the same kind of computation,
but still we are in a situation where the first use case are in that space.
And if you look at the theory, it makes a lot of sense.
Because if you take the idea from Richard Feynman, which was about using a quantum system to simulate another quantum system,
because I would say the semantics are not that far, it makes a lot of sense that this is what works now.
The closer you are to a spin model, particularly, not a fermionic model, the spin model,
the closer you are, the happier you're going to be, because you're going to have a low overhead.
And if you're in the case of D-Wave or even the cold atoms,
when you want to optimize a combinatorial optimization problem and solve this kind of problem,
you have a technique called embedding, which is converting the structure of your problem
into the problem that the computer can solve.
And the difference in semantics creates an overhead, which is much larger
than when you solve a spin model.
And so as a result, the overhead is larger,
and the quantum advantage is not that obvious.
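As a toy illustration of the embedding idea mentioned here, the sketch below rewrites a small combinatorial problem, max-cut on a triangle, as a QUBO and solves it by brute force. It only shows the reformulation step; it is not D-Wave's hardware embedding onto its qubit graph, nor its hybrid solver.

```python
# Toy illustration: max-cut on a triangle reformulated as a QUBO,
# then minimized by brute force over all 2^n bitstrings.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]  # triangle graph
n = 3

# Max-cut as a QUBO: maximize the cut = minimize x^T Q x with x in {0,1}^n.
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1   # linear terms (on the diagonal, since x_i^2 = x_i)
    Q[j, j] -= 1
    Q[i, j] += 2   # quadratic coupling term

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("one max-cut assignment:", best)
```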
And so when I hear D-Wave saying,
I've got a solution that works,
that's deployed in some retail company in Canada
or whoever in some other country,
I'm very skeptical.
The other reason why I'm skeptical in that case,
in that particular case, is that D-Wave is using a so-called hybrid solver.
This hybrid solver is a black box.
It's not open science.
The way they use this hybrid solver
to rely on the D-Wave annealer on one hand
and the classical solver on the other hand
is not publicly documented.
And all the anecdotal evidence
I have about this is not very good for D-Wave.
So I would like the company to be more open
on what's the load that is on their annealer
and what's the load that is on relying on the classical system.
In the case of IBM and the gate-based systems,
it's more open, it's easier to audit.
It's easier to benchmark.
So I trust more what they do.
But IBM and Quera and others,
they have not claimed quantum advantage on the optimization problem so far.
It's mostly about physics simulation.
So that's where we are today.
So when a company says, I have a useful case,
well, you have to take that with a lot of grains of salt, usually.
Because useful doesn't mean deployed.
Useful doesn't mean scalable.
And people are always gaming with language and semantics here.
So if you take the exact words that are used by companies, they are all semi-lies, I would say.
Because an industry use case is, in almost all cases, a use case, not a case study.
And I use the semantics that a case study is something from the past that you have experimented,
that you can audit, where you can learn lessons from what you have deployed,
on the cost, on the optimization, and everything.
And the use case is for the future.
So a use case is something you plan to do when the system is available.
So right now we have more use case than case studies.
When you have somebody who says, I have a deployed industry case study,
usually it's a toy model that was tested on four physical qubits,
which could be done on your laptop, by the way, or even a Raspberry Pi,
because four qubits is really a picnic for any classical computing system.
And of course, it's not deployed.
Because I don't know any company who would like to deploy a solution on a $10 million
machine, which could be deployed on a laptop,
or on one server in the cloud.
So these are toy models in most cases.
So the story is going to change when you exit the toy model era.
And it's the same for VQE, for chemical simulation.
It's the same for QAOA.
It's the same for all the stuff I see in the academic papers
where it's being tested on fewer than 20 qubits.
And there's a reason why.
Why is it always tested on between 4 and 20 qubits?
In many cases, it's because variational algorithms don't scale very well.
Yeah.
Because when you reach a large number of physical qubits, you need to have a lot of circuit
shots.
You have the so-called barren plateau, which prevents your optimization
loop with the classical optimizer from converging.
So it makes it very difficult.
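A minimal sketch of the variational loop being discussed may help: a classical optimizer repeatedly asks a device for an energy that is only known through a finite number of circuit shots. The single-qubit model and shot count below are illustrative assumptions, not any vendor's stack; the point is that every optimizer iteration costs shots, and noisy estimates slow convergence.

```python
# Illustrative VQE-style loop: a noisy, shot-based energy estimate fed to a
# gradient-free classical optimizer. The true expectation here is <Z> = cos(theta).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
SHOTS = 1000  # circuit shots per energy evaluation

def estimated_energy(theta):
    p0 = (1 + np.cos(theta[0])) / 2    # ideal probability of measuring |0>
    counts0 = rng.binomial(SHOTS, p0)  # finite-shot sampling noise
    return 2 * counts0 / SHOTS - 1     # shot-based estimate of <Z>

result = minimize(estimated_energy, x0=[0.3], method="COBYLA")
print(result.x, result.fun)  # should drift toward theta = pi, <Z> = -1
```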
So compared to the classic variational algorithms, actually, the most interesting algorithms that
were deployed recently or tested recently on IBM systems were non-variational algorithms.
There are non-variational algorithms which work better, but they require
some preparation, classical preparation.
And there's some interesting stuff happening there.
So it's a complicated world, I would say.
Yeah.
So we can say, deployed at scale, no way.
Right.
So we can say it's more or less the same as before,
that the market is for those who are experimenting with the technology
and are working to be prepared for when it is ready for scale.
Exactly.
And this learning curve is very useful.
Because we know that in any industry, it takes time to learn
a new paradigm. And so since quantum computing is a new paradigm, and since it's highly complicated,
and one of the reasons it's complicated is the complexity is not where you think it is.
Usually people think that the complexity is about developing a quantum circuit, because it's new.
But when you look at the actual case studies, particularly in chemical simulations, which are the
big, big chunk of the potential market, when you look at the work that you have to do, and you know
that because you know about HPC in your case, you know that the heavy workload is in the
classical side of your system. Because you've got to prepare your ground state, you've got to
prepare your circuit, you've got to do a lot of pre-computation. And this classical pre-computation
is using DFT or FCI or Hartree-Fock simulation and stuff like that. It's classical stuff.
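As a hedged sketch of the classical pre-computation step described here, the snippet below runs a Hartree-Fock reference calculation for a toy H2 molecule with PySCF; the package and molecule are illustrative assumptions, but they show where the heavy classical workload sits before any quantum circuit runs.

```python
# Classical pre-computation sketch: Hartree-Fock reference for a toy H2 molecule.
# In a real workflow, the resulting orbitals/integrals would seed the quantum
# ansatz (e.g. a UCCSD circuit in VQE); everything here is purely classical.
from pyscf import gto, scf

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")  # H2 at ~0.74 Angstrom
mf = scf.RHF(mol).run()  # restricted Hartree-Fock
print("HF energy (Hartree):", mf.e_tot)
```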
And so if you want to, in a company, even a financial service company, if you want to be a good
developer for a quantum solution, or even if you're a service company, you need to master
both quantum circuits, which are totally new, but you need to master the rest.
Right.
And the rest is a heavy workload.
So you need to have teams, and I put that in my book, I think, a while ago, which is you've
got three skills to assemble.
One is quantum computing programming.
One is classical computing programming using the most modern tensor networks or whatever machine learning
tools there are. And the third one is know your industry. Discipline. Yeah.
So financial mathematics, chemical mathematics and stuff like that. So you need three skills
in an organization or in an ecosystem to assemble a solution. So the learning curve is long.
And it explains why it's good to spend time now. And there's another reason which I discovered
recently, which is large customers who usually create a small team in quantum computing. They
hire very good people there.
So it's a way to attract talent.
And there's competition for talent.
So if you build a small team of highly talented people in quantum computing and on top
of that on classical computing, you may be able to expand your knowledge about mathematics,
your knowledge about modern computing.
And even if quantum computers take some time to be working in production, you're going to
upskill your teams, whatever you do.
Yeah.
So there's always a benefit, an intellectual benefit, which may have a return on investment
because it's going to help your company to be more efficient in mathematics,
in solving complicated problems, whatever the technology that's being used.
And that really is the case for public money towards quantum computing.
That's really a very nice fallback position, that even if nothing happens,
just gathering that many highly qualified people is bound to do something good for society.
So it's a good spend of money in that sense.
Whatever the domain, by the way,
it's financial services, chemical simulation, life science,
logistics, whoever.
I mean, even government, defense, stuff like that.
So everywhere you can have some benefit from looking at this edge of science.
And it's so funny because sometimes people ask me,
why are you in this domain,
given it's a very small market,
it's not going as fast as people are thinking?
I say, I've never seen so interesting a domain.
Because you look at everything.
You look at science.
You look at technology.
You look at electronics.
You look at case studies in multiple domains.
You are, I mean, I've never been so intellectually challenged as in this quantum world.
And so it's a reason why there are interesting people there.
And that's why if you like science, if you like intellectual challenges, you stay there.
Yeah, yeah.
And you move around.
You move around in between those different domains within the quantum space.
I was curious about your book.
One topic it looks into is alternatives to quantum and classical supercomputing.
What are some...
It's unconventional computing, I call it.
Unconventional.
Unconventional computing.
There's a chapter on that.
It's the last part of the hardware part of the book.
And you found that interesting?
Well, I'm just curious what you deem the leading candidates for...
Oh, in that space?
Oh, it's very hard to say.
I see a surge in photonic computing companies right now.
There are many, many companies being created because photonics is fast.
It's very energy efficient.
That's what I used to say to people who fear Shor's algorithm,
quantum computers breaking RSA everywhere.
I say, maybe the fear is not well-placed.
I say to them, maybe you should look at unconventional computing.
Even though the companies are not very well-funded compared to AI and quantum space,
those companies are sometimes very interesting.
Under-assessed, I would say; there are not enough people looking at what they do.
The theory is maybe not clear enough.
But in some cases, I think that some of these companies may be able to deliver power improvements
compared to classical systems, probably not as far as quantum computers, but maybe in a shorter time.
So I believe it makes sense to look at them.
And the other reason I look at this is in some cases it can help quantum computing.
Let me give you two examples.
One is the so-called coherent Ising machine in photonics.
There's one quantum company which is looking at this.
It's the combination of HPE and Qolab, the company created by Alan Ho
and John Martinis.
And when they published their kind of blueprint late 24,
they explicitly explained that they believed
that quantum computing would be good for quantum chemistry,
but not for optimization.
And the guys from HPE, with Masoud Mohseni,
and, I've got the name, Ray Beausoleil, I think,
who is a longtime researcher at HPE,
they decided that solving optimization
problems would rely on coherent Ising machines or unconventional computing, which is not quantum computing.
So it's a combination.
This is one example.
The other one is another unconventional computing technology that I'm carefully looking at.
It's superconducting computing, which is not superconducting qubits.
So it's computing that works at very low temperature, using, let's say, the equivalent
of transistors based on Josephson junctions, like the ones used in superconducting qubits.
But it's using classical logic.
And this classical logic is operating at very low temperature, and it may be very power efficient.
And also, it may be very fast.
You could theoretically, and it was experimented a while ago, go up to 700 gigahertz.
It's a lot compared to a 3 gig or 1 gig from Nvidia GPUs right now.
So it could be very efficient.
So that could be a way, but it's been tried for 40 years now.
There was an IARPA/NSA program a while ago which didn't bring any fruitful results,
but still it's an area of investigation. And why is it important? It's important because
if at some point this works, this will enable the creation of control electronics operating
at very low temperature to control qubits, like superconducting qubits. There's one company which I know
very well called SEEQC. They're based in New York State, in Elmsford, north of New York City,
not far from White Plains and Yorktown Heights, where IBM is located.
And I went there two years ago. And so they are developing control electronics for qubits,
and it came from this big realm where companies were trying to do classical computing
running at very low temperature. So these are
two examples of a kind of cross-pollination of technology between different fields.
But all of these, Olivier, are, well, not all of them, but really, the genre is a little bit
of a science project out there that may or may not happen. And in some ways, it's sort of indicative
of the world we live in, which pushes us more and more upstream to unbaked, unproven
technology because it might work. Yeah. And it seems to me that it would be
much better to plug that into the existing academically driven process and let it be proven
out, like you said earlier about quantum computing too.
There are companies who are trying.
So let's let the company try to do that.
And then it's up to them to find the funding and sometimes it's difficult for them.
But recently, I think SEEQC got a very good funding round.
I forgot the level.
I think they got a funding round valuing them at
one billion dollars, which was not bad.
Maybe that's the new model.
Maybe that's the new model of funding is that you start with a concept,
maybe from academia,
and then it gets funded.
Now it has to make money.
And if it doesn't make money,
it goes back to academia.
Exactly, exactly.
But of course,
well,
one thing is you shouldn't put all your eggs in the same basket,
but still you need to focus at some point.
So the strategy is about making choices.
And so,
Yeah.
It's a trade-off between making choices and hedging your bets.
So that's...
And it's fundamentally complex.
It's fundamentally complex.
And it's about gut feelings.
It's about knowing the science that's behind all these things.
Yeah.
That's what I try to do.
It's funny because as I've been exploring that space for eight years now,
I've been having the feeling that the more you know,
the more you may become skeptical, because you know all the difficulties
that people face to build those systems.
But the more you know, the more you understand what people are trying to do.
And it makes you confident to some extent as well,
because you know that people are working on the hard tasks.
So I'm always kind of having mixed feelings about that.
I know how hard it is.
So when I see the roadmaps which are kind of crazy, I say,
maybe it's too optimistic.
But I still have some trust in the people trying many things and knowing what's hard,
and there's some ingenuity being deployed and some
workarounds being invented to fix various sets of problems.
So it's what makes the situation interesting.
By the way, you've got a kind of joke about that, which I try to use.
The joke is there was a French guy from BCG, Jean-François Bobier.
I know him very well.
He was, by the way, at Santa Clara.
And he wrote something back in '21 saying, it's not a matter of if,
it's only a matter of when we have a useful quantum computer.
And so I said, well, you are violating the Heisenberg
uncertainty principle with this.
So I said, you must accept that the if is uncertain
to find a way to get the when.
Exactly.
Because otherwise, you've got some kind of complementary variables
where if you are not uncertain on the if,
it means that the when is totally uncertain from now to infinity.
Exactly.
And so if you want to have a better precision,
it's squeezing. It's called squeezing in quantum physics.
So if you have a better precision on when it's going to happen,
you need to accept that the if is still an if.
It's just a joke by analogy.
But it works very well in practice because if you understand that there are questions,
you look at the questions, you look at the challenges,
and then you look at how people are going to address the challenge.
Sometimes you see challenges which have no response yet.
Sometimes you see challenges where there are multiple options
that have to be tested and it takes time.
That's why the when is uncertain,
because you have to test maybe 10 options
with five topics in your system.
One example is interconnect.
So how do you interconnect QPUs?
So how do you interconnect quantum computers
with quantum interconnect?
It's very complicated because you put losses
all along the path, and the efficiencies are not that good.
It may be costly.
You don't know.
You have a cost on compiling your code
because you've got circuit partitioning
all over the place.
So that's complicated.
So some people are saying it's difficult, up to impossible; others are trying.
So let's let people try to do that and see what happens.
Yeah, yeah.
I liked your observation that really in an ideal world, this is such a complex, huge problem.
The world really should come together to solve quantum computing.
But in the world we're in?
We take it as it is, like we say.
Not very likely to.
Well, if you're a Star Trek fan, we are a little bit far away from Starfleet
command. Of course, yeah, of course. Yeah. The planetary federation. Yeah. It's more Star Wars right now.
Yeah.
Nice. Nice. Very good. Well, that's all the questions. Well, I have many more, but, you know,
it's been well over an hour, and it's as you like. You have a couple more?
No, if you have a couple more, I can respond to them, and then you... Well, maybe I will
ask one final one. And that is really fidelity versus error correction.
I've seen that discussion most vividly when Jack Krupansky posted that we need a certain minimum level of fidelity before error correction can really be used.
As an analogy, we look at our existing classical computing where there is error correcting code, but it's not relied upon as heavily.
It's very different.
It's very different.
But maybe we need a few more nines before error correction can actually be practical.
What is your perspective on that?
We don't need many nines.
We need three nines to have functional error correction.
We don't need four nines, five nines.
No, no.
The idea was that if you want to scale, the overhead of error correction, as something that is relied upon so heavily,
makes it untenable.
That's at least how I understood it.
But if you have a lot more nines, then error correction is there, but it's not invoked so often.
Yeah, but when I wrote my paper on error correction recently, it's a 45-page paper
that's on my blog,
I found out something interesting.
It's called the law of diminishing returns.
It says that, and you can see that very easily in the curve
that was published in 2012
about the efficiency of surface codes.
It's a curve like this.
And so the curve shows that you have the so-called threshold.
The threshold, which is dependent on the architecture,
is the fidelity,
and I will explain what fidelity is, by the way,
but the fidelity at which you start to have an error correction code
which creates a logical qubit that's better than the physical qubit.
But what we know is, when you are right on the threshold,
the size of your logical qubit is infinite.
So you need thousands of physical qubits.
It doesn't make any sense.
So it's an asymptotic curve.
So what you need in practice to create a functional logical qubit
is a tenfold increase in fidelity,
or rather, by the way, a tenfold decrease in error rate,
compared to the threshold.
So let's say the threshold,
the surface code threshold, is 99%.
You will need 99.9%
fidelity on your physical qubits
so that your surface code or your color code or equivalent
works.
But then if you go on increasing the fidelity,
the gain is going to be not that big.
The gain is significant when you move from 99% to 99.9%.
But when you go to four nines or five nines, the gain is not that good.
And we know that it's so difficult
to move to four nines and five nines
for most of the technologies.
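To make the diminishing-returns point concrete, here is a minimal illustrative sketch in Python. It uses the standard textbook approximation for surface-code logical error rates, roughly p_logical ≈ A (p/p_th)^((d+1)/2); the prefactor A = 0.1, the assumed threshold of 1%, and the target logical error rate of 1e-12 are placeholder values for illustration only, since the real numbers depend on the architecture and the decoder.

```python
# Illustrative only: rough surface-code scaling, not tied to any specific hardware.
# Approximation: p_logical ~ A * (p / p_th)^((d + 1) / 2), with assumed A = 0.1, p_th = 1%.

def logical_error_rate(p_physical: float, distance: int,
                       p_threshold: float = 1e-2, prefactor: float = 0.1) -> float:
    """Approximate logical error rate for a distance-d surface code."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

def distance_needed(p_physical: float, p_target: float = 1e-12) -> int:
    """Smallest odd code distance reaching the target logical error rate."""
    d = 3
    while logical_error_rate(p_physical, d) > p_target:
        d += 2
    return d

for nines, p in [("two nines", 1e-2), ("three nines", 1e-3), ("four nines", 1e-4)]:
    if p >= 1e-2:
        print(f"{nines}: at or below the threshold, the code distance diverges")
        continue
    d = distance_needed(p)
    print(f"{nines}: distance ~{d}, about {2 * d * d - 1} physical qubits per logical qubit")
```

With these assumed constants, two nines sits at the threshold (no finite code works), three nines already gives a workable overhead, and going from three to four nines shrinks the overhead further but by a much smaller factor, which is the diminishing return being described.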
And by the way, the control electronics is limited at six nines.
So whatever you do, the electronics will not be better than six nines, as far as I know.
I've seen some papers from the Dutch on that a while ago.
It's a fundamental limitation on the control electronics.
And it's the same for lasers, by the way.
So you can't go beyond six nines physically.
But if you reach three nines, three nines, it's enough to get functional error correction.
But then the problem is, how do you get three nines at scale?
Because right now, I mean, the only company and technology reaching three nines is small-scale trapped ions.
Quantinuum is having three nines, but it doesn't work very well in practice, as far as I know.
But IonQ, with Oxford Ionics, they reach four nines for two-qubit gates, but only with two qubits.
So I'm expecting that they release, someday this year as they promised, a 256-qubit chip from OI,
so Oxford Ionics in the UK.
And I hope they're going to deliver at least three nines based on that.
We'll see.
I don't know.
I don't know if it's going to work.
But three nines is going to be enough to scale.
But then you need to have this at scale.
And so one thing that is not well understood is, what are the three nines about?
The three nines are about all the operation fidelities.
So you need three nines for the two toughest ones.
The tough gate is the two-qubit gate, so a CNOT or an iSWAP,
depending on the physical implementation.
The second one, which is harder, is a readout.
All the papers, mostly the recent papers published by Iceberg Quantum, Google, and others
on breaking ECC or RSA,
they were all based on three-nines fidelities, and the three nines include readout, theoretically.
As far as I know, theoretically.
So some error correction codes can be relaxed about what they need
on the two-qubit gate in some situations for error syndrome measurements,
and sometimes also on the measurement.
And you have to look at the details.
What is the actual error rate that you can tolerate
for all the figures of merit?
But if you are just looking at the basic science and basic theory,
you need three nines in the two-qubit gate and the measurement.
And it's hard to get that at scale.
Particularly the two-qubit gate is hard to get at scale
because of crosstalk
and a lot of side effects and correlated errors and stuff like that,
which at scale would create some mess, depending on the qubit type.
But sometimes the good error that people like to have,
particularly in cold atoms, is a loss of atom,
which creates a so-called erasure error.
It's weird because looking from far away, you say,
oh, it's bad to lose the atom.
But actually, it's easy to detect.
So an error that's easy to detect is not as bad as an error
that's hard to detect.
You can solve it, unlike a Pauli error,
a small error in a physical qubit that stays in place.
So a leakage error or an erasure error,
where you lose the state or you lose even the object itself,
a loss of the quantum object,
this loss error is easier to treat.
That's why the guys from QCI
were acquired by D-Wave.
Because D-Wave acquired this company,
which was created by Rob Schoelkopf from Yale.
And this company is using a
superconducting qubit and a technology
called dual rail.
And the dual rail
may make it easy
to detect an error as
an erasure error.
So,
when the error is a big one,
but easy to detect,
it's better than a small error
that's harder to detect.
And another reason why
it's interesting,
it's because of the cost
of error syndrome decoding.
Because error syndrome decoding
is very costly.
Because it's using
a graph-based matching mechanism,
that's classical,
that scales very badly
with the size of your surface code.
And it's very expensive.
So it's interesting to know that
and to be able then to compare those different platforms.
Well, that was the motivation for more nines:
for errors to not be present to begin with,
so that when they do happen,
it is more rare.
What Jack is saying,
what Jack is probably thinking about,
is to make NISQ systems viable.
So let's say,
let's say you have four nines.
If you have four nines, in theory, without error mitigation,
which, by the way, doesn't scale well either,
you can scale up to 10,000 gates.
But if you look at the circuits that people are designing,
whether it's for doing a chemical simulation,
optimizations, portfolio optimization, or even Shor breaking,
you need billions, billions of gates, minimum.
It's 10 to the power 9 to 10 to the power 12.
It's way above the 10,000.
So you need scalable error correction in the end.
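As a rough back-of-the-envelope check of those gate counts, here is a small Python sketch. It assumes a circuit effectively fails once the per-gate error times the number of gates approaches one; the numbers are illustrative and not taken from any specific benchmark or paper.

```python
# Back-of-the-envelope: how deep a circuit can run before errors pile up,
# assuming roughly (number of gates) x (error per gate) ~ 1 as the failure point.
# Values are illustrative, not from any specific benchmark.

for label, error_per_gate in [("three nines (1e-3)", 1e-3),
                              ("four nines (1e-4)", 1e-4),
                              ("five nines (1e-5)", 1e-5)]:
    max_gates = 1 / error_per_gate
    print(f"{label}: roughly {max_gates:,.0f} gates without error correction")

# Useful chemistry, optimization, or Shor-type circuits are estimated at
# about 1e9 to 1e12 gates, far beyond anything reachable without error correction.
needed = 1e9
print(f"Shortfall at four nines: about {needed / 1e4:,.0f}x too few gates")
```

The point of the arithmetic is simply that even five nines of raw fidelity buys on the order of 100,000 gates, still several orders of magnitude short of the billions required, which is why error correction is needed regardless.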
Because of the circuit size, the circuit size demands error correction.
No, I'm not saying let's eliminate error correction.
I'm saying that if error correction has non-trivial overhead,
in terms of bandwidth and processing,
it is therefore best if you do not have errors.
Now, occasionally you're going to have errors,
and then you can do the overhead, but not so much.
Okay, if it was easy to get five nines,
it would have been done. It's hard. I mean, it's so hard. I can give you an example. Some of the
good academics in this world are in the good companies. They do benchmarking, of course,
and they try to understand where the noise is coming from. So they try to detect the source of
the noise. And recently, there were a couple of very interesting papers from IQM and others,
explaining in detail, for trapped ions, for superconducting qubits and others, where the noise
was coming from. So you budget the errors. And then, when you have a good budget of
the source of errors, you can say, oh, I'm going to focus on these two things,
and maybe I'm going to improve that.
So you can focus on the quality of electronics, the quality of the signal.
You can focus on the isolation.
That's why you use tunable couplers in superconducting qubits.
You can focus on other aspects of the design of the qubit itself when it's not an atom.
In the case of an atom, you have to focus on the photonics that controls the atom.
You can focus on the types of transitions, energy transitions,
because the energy transitions are not the same if it's a hyperfine transition
or if it's an optical transition.
Because one way you're going to use a microwave drive, which can be local,
and the other way, you're going to use a photonic drive, which uses lasers and laser beams
and laser-shaping devices, which are complicated.
So there are many ways.
Another example of the things people are trying to do is they are trying to replace the so-called
SLMs, the spatial light modulators, and the AODs, the acousto-optic deflectors that are
used in cold atom systems.
They want to replace that with nanophotonic circuits, which are using metasurfaces, which are shaping the light in a much more
efficient way than what we do with SLMs and AODs. So that's an enabling tech. And it's
being experimented with in many places, in Europe and in the US. And there are so many things to address
to reduce the noise. That's a lot of technology development. Yeah. And experiments and characterization.
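As a sketch of the error-budgeting idea described above, here is a minimal Python example; the error sources and their values are entirely hypothetical and only illustrate how a budget points at the one or two contributions worth attacking first.

```python
# Hypothetical error budget for a single two-qubit gate, purely illustrative.
# Real budgets come from hardware characterization; these numbers are made up.
error_budget = {
    "control electronics noise": 4e-4,
    "decoherence (T1/T2) during the gate": 3e-4,
    "crosstalk from neighboring qubits": 2e-4,
    "leakage out of the computational subspace": 1e-4,
}

total = sum(error_budget.values())
print(f"Total two-qubit gate error: {total:.1e} (fidelity ~{1 - total:.4f})")
for source, contribution in sorted(error_budget.items(), key=lambda kv: -kv[1]):
    print(f"  {source}: {contribution:.0e} ({contribution / total:.0%} of budget)")
# Sorting by contribution shows which one or two sources to attack first.
```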
Yeah. Well, excellent. And thank you. We definitely are way over now.
Thank you, Olivier.
Really, a great tour of the quantum world.
And it's been a pleasure, really.
Thank you, guys.
Yeah, really, a total delight.
Thank you, Olivier.
Thank you for being generous with your time
and really looking forward to seeing you in person again
and having conversations like this again.
Now, I'm back to my papers.
Thanks so much.
See you next time.
Bye-bye, guys.
Brilliant.
All right.
O'clock.
Bye-bye.
Bye-bye, indeed.
That's it for this episode of the @HPCpodcast. If you like the show, please rate and review it on Apple Podcasts or
wherever you listen. Every episode is posted on OrionX.net. Contact us with any questions or proposed topics of
discussion. The @HPCpodcast is a production of OrionX. Thank you for listening.
