@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20251006
Episode Date: October 6, 2025
Topics: Provably unconditional quantum (information) supremacy; big clouds balance own vs merchant GPUs; big unexpected players in HPC/AI infratech.
Audio: https://orionx.net/wp-content/uploads/2025.../10/HPCNB_20251006.mp3
Transcript
Welcome to HPC Newsbytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone, welcome to HPC Newsbytes. I'm Doug Black of Inside HPC, and with me is Shaheen
Khan of OrionX.net.
Shaheen, as we all know, a holy grail in quantum computing is the notion of quantum supremacy,
which is a system performing a task faster than any classical supercomputer ever could.
We've had several claims over the past five or six years,
but they've never been broadly recognized because the workloads haven't related to anything useful in the real world.
They've been theoretical quantum exercises.
But a new claim has been issued via a paper published on the arXiv preprint site
by researchers at the University of Texas at Austin,
and they are boldly calling it, quote, unconditional separation, unquote, between quantum and classical
computers. This, by the way, has been nicely covered in an article in Popular Mechanics.
The researchers say they devised a task for which the most space-efficient classical algorithm
requires between 62 and 382 bits of memory in a classical system, but only 12 qubits in a quantum
computer. Quote, this form of quantum advantage, which we call quantum information supremacy,
represents a new benchmark in quantum computing, one that does not rely on unproven conjectures.
But what is it exactly that the researchers have done?
There are a dozen or two research powerhouses around the world in quantum computing
in academic, industrial, and government labs. And UT Austin is not only very much on that
list, it's got a well-earned reputation for being a very careful voice when it comes to claims
in the industry. As you said, there have been claims of quantum
advantage, but they've had a few problems. The practical problem, as you mentioned, is the tasks
have been useless altogether or too narrow to be useful. The second problem has been more theoretical.
What is the source of quantum advantage, and to what extent can it be generalized? For example,
what problems can quantum computing actually solve? In this vein, a very important paper in 2022
by Takashi Yamakawa and Mark Zhandry of NTT and Princeton University
expanded the set of problems that can, in theory, achieve quantum advantage.
Or another example is how can we be sure that some new classical algorithm
isn't going to wipe out the quantum advantage?
In several cases, that is what happened.
A quantum computer claimed a performance advantage,
but other researchers showed that if you simply used a different algorithm on the classical
computer, there would be no advantage. And in fact, the classical computer could be faster.
The UT Austin paper addresses this last point. They made no claim of an actual application-level
advantage, but they showed that it is possible for a quantum computer to have an unconditional
advantage over a classical computer in a way that no algorithm on the classical computer could
ever change. So this was proven mathematically and then demonstrated on a 12-qubit trapped-ion
quantum computer. The path they took was to focus on space requirements, determining the lower
bound of how many bits a classical algorithm could possibly need, and then demonstrating that a
quantum computer could do it with fewer qubits. They constructed a problem that provably needs no fewer
than 62 bits, and then solved it with 12 qubits. Think of this demo as a sanity check. The idea is
that as the problem size grows, the gap between required bits and required qubits
will grow exponentially, thus proving that there is, in fact, a set of problems that are
truly quantum native, so to speak. To quote from the article, unlike prior demonstrations
of quantum advantage that rely on unproven complexity assumptions, our result is
provable and permanent. No future development in classical algorithms can close this gap, unquote.
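To put the quoted figures side by side, here is a rough back-of-the-envelope view using only the numbers mentioned in this discussion; it is illustrative, not the paper's actual construction or proof.
\[
\underbrace{m_{\mathrm{classical}} \ \ge\ 62\ \text{bits}}_{\text{proven lower bound (up to 382 for harder instances)}}
\quad\text{vs.}\quad
\underbrace{m_{\mathrm{quantum}} \ =\ 12\ \text{qubits}}_{\text{used in the trapped-ion demo}}
\qquad\Longrightarrow\qquad
\frac{62}{12} \ \approx\ 5.2\times\ \text{at this problem size.}
\]
For intuition, a 12-qubit register spans a $2^{12} = 4096$-dimensional state space, and the claim is that this bits-versus-qubits gap widens as the problem size grows.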
Now, it is a common belief in the industry that quantum computing will eventually beat
classical computers for some problems, but this fortifies the theoretical foundation of that belief.
Quantum computers could have speed, accuracy, or energy advantages; this paper is directly
about a space, or memory, advantage.
And for that, scale, and therefore fidelity, as well as connectivity and error correction, continue
to be big challenges.
Earlier last week, we were treated to two stories about
Microsoft's chip strategy that seemed to contradict each other directly. One from Bloomberg reported
that the company signed up for $33 billion in compute capacity deals with new AI cloud firms,
part of it being a $19 billion deal with Nebius last month that provides Microsoft access to more
than 100,000 NVIDIA GB300 chips. Microsoft also is in agreements with CoreWeave, Lambda, and
Nscale for additional AI compute capacity. Right. And then on the same day, CNBC reported that Microsoft's
CTO had said the company absolutely plans to use mainly Microsoft AI chips in the future. So he corroborated
the view that the main issue in the industry is demand outstripping supply, and that the priority is
therefore building capacity. He said the company will consider any option that helps it meet demand. The public cloud
providers are big enough to build their own GPU optimized infrastructure with their own GPUs
and to do it on their own terms. So the expectation is they will do all of the above. Use merchant
technologies to bring in customers who want that and also build their own technology and position it
as a more attractive offering for what they see as a growing number of customers who would use it.
But when capacity is limited and the market is in a frenzy, customers are willing to get whatever
they can get, and vendors will sell whatever they can build. So it's not like a traditional
competitive market. And when you've got a large existing market that's expected to grow at 10 to 12
percent CAGR as far as the eye can see, which is the case for the data center market,
you're going to get a lot of companies, some of them surprising entrants, making moves
in that industry. And that's what we're seeing. Schneider Electric, which last October announced
the acquisition of liquid cooling company Motivair, and completed that deal in
February, announced a liquid cooling product portfolio for hyperscale, colocation, and high-density
data centers. Motivair by Schneider Electric is how they labeled it, and needless to say, it is targeting
GPU-intensive AI factories. So we have a series of new companies entering the HPC/AI market.
All of them have some direct experience in managing buildings or environmental requirements or
power and cooling. Will they all succeed in the data center business? Or
will this be a replay of, say, Exxon trying to glom on to the PC business in the early 80s with
a line of office systems? Well, as the AI business becomes a bigger part of economies and
stops looking like a fad, the big guys are both needed and interested. So they're coming.
Schneider Electric is the French industrial technology company with 160,000 employees,
operates in over 100 countries, and had about $45 billion in annual revenue in
2024. Carrier, which was founded by the inventor of the modern air conditioner, has also entered
the business. It has about 48,000 employees, $22.5 billion in revenues in 2024, and operates
in over 160 countries. Carrier announced in February what they described as a comprehensive
solution for data center liquid cooling, and they called it QuantumLeap. We should also remember
that at SC24 in Atlanta last year,
we saw other unexpected companies on the floor,
like Castrol, Shell, and Valvoline.
So the market is changing, with new and unexpected entrants.
All right, that's it for this episode.
Thank you all for being with us.
HPC Newsbytes is a production of OrionX
in association with InsideHPC.
Shaheen Khan and Doug Black host the show.
Every episode is featured on InsideHPC.com
and posted on OrionX.net.
Thank you for listening.