@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20251027
Episode Date: October 27, 2025 - Google Claims Verifiable Quantum Advantage - Quantum Computing Applications and Current Status - US Govt's Rumored Interest in Equity Stakes in Quantum-Computing Firms - NextSilicon's New Chip ... - A Bit of History on Reconfigurable Computing, Data Flow Architecture, Systolic Arrays [audio mp3="https://orionx.net/wp-content/uploads/2025/10/HPCNB_20251027.mp3"][/audio] The post HPC News Bytes – 20251027 appeared first on OrionX.net.
Transcript
Welcome to HPC Newsbytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone. Welcome to HPC Newsbytes. I'm Doug Black of InsideHPC, and with me is Shaheen Khan
of OrionX.net. Google has joined D-Wave in saying they are the first company to demonstrate,
quote, verifiable quantum advantage, and this is with
their Willow quantum chip. Google used its most recent 105-physical-qubit superconducting chip
to simulate a scientific effect and said the answer can be verified, meaning it is repeatable
on their system and could be checked via other means. The company issued a blog post saying their
achievement was made possible by the precision and speed engineered into their quantum hardware
system, and that they, quote, set out to demonstrate its power in a complex practical application
built to reveal hidden information about the inner dynamics of quantum systems, such as molecules.
In so doing, they, quote, successfully executed the highly complex quantum echoes algorithm,
which relies on reversing the flow of quantum data in quantum computers, which in turn places
strong demands on Willow's performance as the system scales.
Shaheen, I bow to your superior knowledge of quantum.
What do you make of Google's announcement?
Well, big picture, it's the same picture.
There's steady progress, and each milestone,
even small ones are quite impressive in their own right,
but only slightly closer to the destination,
and a lot remains to be done.
So to quickly recap, so far, nobody has come close
to beating a cluster of GPUs with quantum computers,
except for useless tasks like testing
that a random set of instructions on a quantum computer works,
or very narrow quantum science apps,
which other researchers have quickly argued
could perhaps have been done faster with existing systems.
This is what happened to D-Wave some time ago
and is now happening to Google.
Quantum physics or chemistry is the foundation
of so-called ab initio materials
research. Ab initio is Latin for "from the beginning," which here means computing electronic structure and
such. These applications are already analyzing quantum interactions, so they should be a natural
for quantum computing, and they are. Because a lot of such analysis is about finding the
equilibrium among various forces, and that's another way of saying the optimal arrangements
among such forces, quantum computing could also be used for optimization. And many problems
in life can be formulated as a kind of optimization.
And then we have Shor's algorithm, named after Professor Peter Shor,
the Caltech grad, MIT PhD, Berkeley postdoc,
Bell Labs scientist, and MIT professor of applied mathematics since 2003,
who came up with it.
Shor's algorithm can factor numbers into a product of prime numbers,
which is something that can be used to decrypt data,
but only if a big enough quantum computer were available.
It is the one big reason quantum computing is pursued and feared.
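The number-theoretic core of Shor's algorithm can be sketched classically. Only the period-finding step is what a quantum computer accelerates; here it is done by brute force, which is exponentially slow at scale, so this is an illustrative toy, not the quantum algorithm itself (function names are ours):

```python
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force.
    This is the step a quantum computer would do exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a=2):
    """Try to split n using the period of a modulo n."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, n)          # a nontrivial factor of n

print(shor_factor(15))  # prints 3, since 15 = 3 * 5
```

The 7 qubits of IBM's 2001 demonstration sufficed to factor exactly this number, 15; decrypting RSA-size keys would take millions of high-quality qubits, which is why the threat remains prospective.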
So the main expected applications of quantum computing are quantum science, optimization, prime factorization,
and also generation of high-quality random numbers, which in fact is quite useful, though speed is not quite the issue there.
Sticking with quantum, the Trump administration has moved toward taking stakes in U.S. quantum
computing firms, adding to Intel and a rare earth minerals company as businesses it has taken
equity positions in. This is according to a story in the Wall
Street Journal, which cited anonymous sources saying the companies include IonQ, Rigetti, and D-Wave,
and that Quantum Computing Inc. and Atom Computing are considering similar deals. But the administration
denied the story, with the result that quantum company share prices, which jumped significantly
on the news, have receded a bit. If the story turns out to be true, then these quantum
companies would join Intel and rare earths company MP Materials, in which the White House
has directed the federal government to take stakes.
Yeah, it was really interesting that the news moved the stocks of publicly traded quantum
computing companies, because not all of them are publicly traded; many are private. But those stocks jumped by more
than 7 percent, and so did those of large companies like IBM, which work on quantum computers, and do so very
well, but not exclusively. We have to cover rare earths at some point, and the
interesting supply chain complexity that has emerged there, starting with them not being so rare,
but being very earthy, as in very messy to process and produce. Yes, indeed. NextSilicon is an
intriguing startup that last week released more information about its, quote, data flow architecture
and performance benchmarks, along with a significant customer testimonial. This is an Israeli company
that has never been in stealth mode. CEO Elad Raz has loudly proclaimed their
system is designed to be superior to conventional CPUs and GPUs for high-precision computing.
In fact, in their announcement, they said Maverick 2, their new processor that is built on the
intelligent compute architecture, delivers up to 10x performance over leading GPUs, and does
so at 60% less power in algorithmically complex workloads.
They also said their hardware adapts to provide peak performance regardless of software, branching,
or parallelism, which means users can bring their own unmodified code to run out-of-the-box with no
specific optimizations necessary. A senior scientist at Sandia National Labs said they worked with
NextSilicon for three years evaluating the technology and said, quote, the out-of-the-box
HPCG performance results are impressive, showing real promise for advancing our computational
capabilities without the overhead of extensive code modifications.
Maverick 2's data flow approach demonstrates strong potential in real-world scenarios, unquote.
It's also interesting that NextSilicon said unabashedly that they've targeted the HPC market
more than AI, which raised quite a few eyebrows.
Shaheen, you've told me you have experience with data flow architectures.
What are your impressions of NextSilicon?
And why do you think they're, at least initially, focusing
more on HPC than on the much larger AI market?
Well, I love it that they're focused on 64-bit HPC computing.
That's a gap in the market that I believe is a good idea to address.
But their announcement material did not rule out AI at all.
In fact, they pointed out that their architecture is suitable for all manner of applications.
There are a few things going on that create a space for new innovations in chip architecture.
First, the ever-growing electricity requirement of GPUs.
Second is the fact that after Moore's law was exhausted, the way to get performance is through
architectural innovation. Third is the decoupling of chip design from manufacturing,
which is driven by companies like TSMC, and allows a lot of other vendors to actually
design chips because they don't have to manufacture them. Fourth is the time-consuming and
specialized work that is required to optimize an application or even a kernel to a new
architecture, which is what the quote from Sandia National Lab is really referencing.
And then fifth and finally, it's the changing nature of modern algorithms, especially in AI,
where new advances can make previous work obsolete in a week.
So you add all that up, and that explains why we continue to have so many new chip vendors.
The trend is not stopping, and new chip vendors get formed all the time.
These new players, furthermore, can afford to dust off old strategies and find new value in them,
since some of the obstacles of the old times have been removed over time, and use cases have
changed, etc., etc. So the concepts behind reconfigurable computing, data flow or data-driven
architectures, and programmable logic date back to the 1960s. Actual systems emerged in the 70s
and 80s, mostly in research departments, implementing some of the concepts but not all. The Manchester
data flow machine at the University of Manchester and MIT's tagged token data flow architecture
are two notable examples from the 70s. Data flow systems go well with streaming applications
and so they showed up also in graphics software and hardware. The Evans and Sutherland
PS 300 graphics system, which I have actually used personally, was a very advanced
parallel vector data flow system with specialized processors for specific stages of
the graphics pipeline, and that was for real-time 3D wireframe graphics.
Some of the good ideas in data flow systems have since been incorporated into mainstream systems.
So today you have field programmable gate arrays, FPGAs, coarse-grained reconfigurable arrays,
CGRAs, systolic arrays, instruction windows and speculative execution, etc.
And that's not to mention vector and matrix processors.
GPUs themselves have similarities with systolic arrays and data flow systems and use some of their
principles. FPGAs have been used to optimize straight-through processing in financial services,
and also used within AI processors. So back to the news, we have to wait until more detail
is published about exactly how they are implementing the technology, but they clearly have
solved a lot of the problems. If I understand correctly, they allocate a big part of the chip
to arithmetic logic units, ALUs,
which can be logically reconfigured
so that the result of one can flow into another.
And then they have a compiler that decomposes the code
into a so-called directed graph,
like the old style flow charts,
and then maps it to these ALUs,
and it does that in real time more or less,
so you can redo it as the application runs
and new patterns are detected.
What I just said solves a bunch of the problems
that caused those early systems to fade.
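As a rough illustration of that compiler idea, and only as an assumption-laden toy rather than NextSilicon's actual design, a computation can be decomposed into a directed graph of ALU-like nodes whose results flow into one another, with each node firing as soon as its operands are ready:

```python
import operator

# Operations a simple ALU node might support.
OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

# Each node: (op, input_a, input_b), where an input names another node
# or is a literal value. This plays the role of the compiler's directed graph
# for the expression x*x + 3*x - 5.
graph = {
    "t1": ("*", "x", "x"),     # t1 = x * x
    "t2": ("*", 3, "x"),       # t2 = 3 * x
    "t3": ("+", "t1", "t2"),   # t3 = t1 + t2
    "out": ("-", "t3", 5),     # out = t3 - 5
}

def run(graph, inputs):
    """Fire each node once its operands are available (data-driven order)."""
    values = dict(inputs)
    def fetch(ref):
        return values[ref] if isinstance(ref, str) else ref
    pending = dict(graph)
    while pending:
        for name, (op, a, b) in list(pending.items()):
            if all(not isinstance(r, str) or r in values for r in (a, b)):
                values[name] = OPS[op](fetch(a), fetch(b))
                del pending[name]
    return values["out"]

print(run(graph, {"x": 4}))  # 4*4 + 3*4 - 5 = 23
```

In hardware the mapping would be spatial rather than a software loop: each graph node is bound to a physical ALU and edges become wires, and remapping the graph as new hot paths are detected is what the reconfigurability buys.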
And benchmarks that have been released so far, no surprise, are promising.
So it's a new architecture, by a great team, and just that bodes very well.
All right, that's it for this episode.
Thank you all for being with us.
HPC Newsbytes is a production of OrionX in association with InsideHPC.
Shaheen Khan and Doug Black host the show.
Every episode is featured on InsideHPC.com and posted on OrionX.net.
Thank you for listening.
