HPC Podcast Archives - OrionX.net - HPC News Bytes – 20250623
Episode Date: June 23, 2025 - Fault Tolerant Quantum Computer in 2029? - Quantum computing roadmaps, performance benchmarks, industry metrics, M&A - RIKEN and Fujitsu team up again for FugakuNEXT, Japan's next-gen flagship supercomputer
Audio: https://orionx.net/wp-content/uploads/2025/06/HPCNB_20250623.mp3
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone.
Welcome to HPC News Bytes.
I'm Doug Black of InsideHPC, and with me is Shaheen Khan of OrionX.net.
We're going to start off with quite a bit of quantum news.
We're seeing more news and greater claims by the quantum computing industry touting
their roadmaps, projected or achieved performance metrics, and mergers and acquisitions.
In recent weeks, Quantinuum, IBM, IonQ, Rigetti, D-Wave, IQM, Microsoft, and others have made
big announcements.
IBM launched an updated roadmap,
putting stakes in the ground. The company said it has a quote, clear, rigorous, comprehensive
framework for realizing a large-scale fault-tolerant quantum computer by 2029. That's a pretty bold
statement, Shaheen. I suggest we put reminders in our calendars for four years from now so we can hold IBM accountable for what they said.
IBM and RIKEN published a paper on the results of a quantum chemistry application run on a hybrid system combining
Fugaku and an IBM quantum system. IonQ announced a series of acquisitions to beef up its offerings and roadmap.
The quantum industry is crowded,
so a consolidation wave is really not surprising. IonQ and Kipu Quantum announced the successful
solution of, quote, the most complex known protein folding problem using Kipu's algorithmic framework
and IonQ's quantum hardware. Quantinuum announced a record for quantum volume
and said it has achieved its five-year goal
of 10x annual QV increases.
They also set a new record by creating a 32-qubit
entangled state with high fidelity.
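For reference, and as a textbook illustration rather than Quantinuum's published result: entanglement records like this are typically reported for GHZ states, where an n-qubit GHZ state is an equal superposition of all qubits being 0 and all qubits being 1, and the reported fidelity measures the overlap of the prepared state with that ideal:

\[
|\mathrm{GHZ}_{32}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes 32} + |1\rangle^{\otimes 32}\right),
\qquad
F = \langle \mathrm{GHZ}_{32}|\,\rho_{\mathrm{measured}}\,|\mathrm{GHZ}_{32}\rangle
\]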
We talked about D-Wave's Advantage2 system recently; the company claimed quantum advantage on a problem that looked pretty real. IBM, Microsoft, Google, QuEra, and others have announced breakthroughs in quantum error correction. So Shaheen, what do you make of all this?
Well, my short answer continues to be the same as before. The industry is making great
progress, but it's far from a, quote, transistor moment, or from quantum advantage for an important application.
So it has a lot more progress to make.
As for IBM, I think I'll take you up on that suggestion, Doug.
They have a pretty good record of executing on their roadmap
and the plans were delivered with confidence,
but delivering a real, programmable, fault-tolerant quantum computer
with 200 logical qubits, capable of 100 million quantum gates, that can provide quantum advantage for a real-looking application, and doing all of that by 2029, that is pretty ambitious, even as we cheer them on and hope they hit it out of the ballpark.
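As a rough back-of-the-envelope illustration of why this target is ambitious (our arithmetic, not IBM's stated spec): if logical gate failures are roughly independent, the chance that the whole computation fails is approximately the gate count times the logical error rate per gate, so 100 million gates demand logical error rates near one in a billion:

\[
P_{\mathrm{fail}} \approx N_{\mathrm{gates}} \times p_{L}
\quad\Rightarrow\quad
p_{L} \lesssim \frac{0.1}{10^{8}} = 10^{-9}
\quad \text{for } P_{\mathrm{fail}} \lesssim 10\%
\]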
IBM has been talking about logical qubits and quantum gates, and if it's really fault
tolerant, then coherence
times are also covered. But those numbers don't say enough about circuit size and
circuit depth, which imply what applications you could run. So we should
cover the metrics that are typical in the industry. Those include random circuit sampling. RCS is an artificial test that is especially hard for classical systems, but it verifies that the quantum computer can do what is expected of it: run random circuits. It's what was used to claim quantum advantage. Randomized benchmarking aims for a rigorous, quantitative assessment of gate fidelity by checking the quality of quantum operations.
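As a sketch of how these two benchmarks are typically scored (standard formulations from the literature, not tied to any one vendor's announcement): RCS is usually graded with the linear cross-entropy benchmark, and randomized benchmarking fits an exponential decay of fidelity over random Clifford sequences of length m:

\[
F_{\mathrm{XEB}} = 2^{n}\,\big\langle p_{\mathrm{ideal}}(x_i) \big\rangle_i - 1,
\qquad
F_{\mathrm{RB}}(m) = A\,p^{m} + B,
\qquad
r = \frac{(d-1)(1-p)}{d},\; d = 2^{n}
\]

Here p_ideal(x_i) is the ideal probability of each sampled bitstring x_i: an F_XEB near 1 means the device reproduces the ideal output distribution, near 0 means it is dominated by noise; r is the resulting average error per Clifford gate.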
Quantum volume, which was introduced by IBM and adopted by the industry, especially by trapped-ion vendors like Quantinuum, is a measure of overall computational performance based on the number of qubits, connectivity among those qubits, error rates, and coherence times.
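Paraphrasing IBM's published protocol as a sketch: quantum volume is two raised to the size of the largest "square" random circuit (equal width and depth) the machine can run while producing heavy outputs, the more likely bitstrings, more than two-thirds of the time:

\[
QV = 2^{n}, \qquad
n = \max\{\, k : \Pr[\text{heavy output on } k \times k \text{ random circuits}] > \tfrac{2}{3} \,\}
\]

On this scale, Quantinuum's five years of 10x annual increases compound to a 100,000x overall gain in QV, which corresponds to adding about log2(10) ≈ 3.3 qubits of usable square-circuit size per year.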
Algorithmic qubits, a metric used especially by IonQ, quantifies the usable size of a quantum computer; it's an analog of quantum volume. Circuit layer operations per second, CLOPS, also introduced by IBM, measures how many layers of gates a system can execute per second, indicating processing speed. Most of these are good approaches to characterize a system, and a necessary step before you go up the ladder to run small kernels and, from there, more substantial code.
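Roughly, following IBM's published benchmarking approach (paraphrased; the exact parameter counts are IBM's defaults, stated here as an assumption): CLOPS is measured by running M parameterized quantum-volume circuit templates, each updated K times with S shots per run at depth D = log2(QV) layers, and dividing the total layer count by the elapsed wall-clock time:

\[
\mathrm{CLOPS} = \frac{M \times K \times S \times D}{\text{elapsed time}}
\]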
Another good way to track these is the DiVincenzo criteria. These were proposed by physicist David DiVincenzo in 2000 and are a set of seven conditions that make a system eligible to be a quantum computer. They include scalability, initialization, long coherence times, universal gates, qubit measurement, and the interconversion and transmission of qubits. Achieving all of them simultaneously is the big challenge, especially scalability. These criteria can also provide a framework for evaluating hardware performance.
Fujitsu said it has won the contract to design Japan's next-gen supercomputer to succeed
the Fugaku system, which entered the TOP500 list in 2020 and remained the world's
number one supercomputer until the Frontier exascale system displaced it in 2022.
Fugaku is still on the list at number seven.
According to an article in The Register, the successor, provisionally called FugakuNEXT,
is expected to be another Arm-based system using a CPU derived from the upcoming Fujitsu
MONAKA data center silicon.
Shaheen, several times I've heard you express admiration
for the Fugaku system.
What is it about the architecture you like so much?
Well, right, I do.
And what I like about it is its ability
to run really difficult problems very fast,
not just the easy problems.
So according to the news,
Japan's steering committee for supercomputing has chosen RIKEN to lead the overall project and assemble the coalition of players that will develop Japan's next-generation flagship supercomputer. And just like last time, RIKEN and Fujitsu will cooperate to build it. We covered the chip that will be used in this system in a recent episode. It's focused on performance
and energy efficiency for a wide range
of applications, including AI. For the CPU itself, they projected very nice performance targets
for LLMs, for example, and other AI applications. But they also expect to have the ability to use
AMD GPUs for seriously high-density computational needs. So yes, I am pretty excited to see this and look
forward to learning more. All right, that's it for this episode. Thank you all for being with us.
HPC News Bytes is a production of OrionX in association with Inside HPC. Shaheen Khan and
Doug Black host the show. Every episode is featured on insidehpc.com and posted on orionx.net.
Thank you for listening.