@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20251229
Episode Date: December 29, 2025 - Nvidia Groq $20B deal - Quantum-HPC hybrid systems and applications - Quantum control, scale, error correction - NCAR Closure [audio mp3="https://orionx.net/wp-content/uploads/2025/12/HPCNB_2025122...9.mp3"][/audio] The post HPC News Bytes – 20251229 appeared first on OrionX.net.
Transcript
Welcome to HPC Newsbytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone.
Welcome to HPC Newsbytes.
I'm Doug Black of InsideHPC, and with me is Shaheen Khan of OrionX.net.
The big news for us to talk about, of course, is that Groq and NVIDIA on Wednesday announced
a, quote, non-exclusive licensing agreement, with the deal estimated at about $20 billion.
Top line, the deal is generally viewed as a move to bolster NVIDIA's real-time inference
strength, something Shaheen will elaborate on in a moment. In addition, Groq's founder,
Jonathan Ross, president Sunny Madra, and other senior managers at Groq will join NVIDIA.
And here's a very interesting part of this whole story: this is not an
acquisition. In fact, Groq will continue on as an independent company, along with GroqCloud.
As for the structure of the licensing deal, Reuters reported it is similar to other deals
by large tech firms possibly intended to avoid antitrust regulatory interference, something
NVIDIA may be gun-shy about after its attempted acquisition of ARM several years ago.
The news is getting a lot of attention, and despite the big dollar value, I don't see it as
that significant. So let's unpack it along several dimensions. The deal structure is reminiscent
of what NVIDIA did with Enfabrica a few months ago: get the intellect and the intellectual capital
and leave the rest behind. They also licensed the IP, as you mentioned, rather than acquiring it,
which makes it easier to pass regulatory requirements. And leaving Groq's cloud business behind
avoids friction with NVIDIA's cloud customers while creating a new neocloud customer.
The deal also led to a lot of discussions on the merits of GPUs versus ASICs, training versus
inference, and how the news of Google TPUs being considered by the likes of Meta may have compelled
NVIDIA to move in the TPU direction. After all, Groq's co-founders had been at Google.
The amount of money involved in the Enfabrica deal was $900 million, already quite a lot of money,
and this is $20 billion, presumably a third of NVIDIA's cash reserves, but quite affordable
for NVIDIA nevertheless, and the stakes are high. The number of people moving over is relatively
small: Groq is reported to have around 500 to 600 employees, and not all of them are moving
to NVIDIA. The deal reminds me of
Facebook acquiring WhatsApp for $19 billion. That was many years ago, so even bigger dollars in
relative terms. There have been bigger deals in tech, for example, Microsoft Activision Blizzard
for $68.7 billion, Dell EMC at $67 billion, AMD bought Xilinx for $50 billion, IBM bought
Red Hat for $37 billion, and several others. Obviously, all developments in the market have
some impact, but NVIDIA already announced an inference-focused accelerator that we covered
here, the Rubin CPX chip. And NVIDIA has a commanding lead in its ability to ship hardware
in substantial volume and in software and ecosystem. So I see this not as an urgent competitive
response, but a nice addition of IP and talent to the arsenal to be used where it fits
and good preparation for the future as AI evolves. Now, the Groq chip is called a language
processing unit, in part because it is optimized for LLMs. I expect we will hear more about
how NVIDIA will use some of the concepts in future generations of CPX, which was also built
as an LLM-focused chip. Riding the LLM surf makes sense, as does a bigger nod to the streaming-focused
dataflow concepts that show up in many accelerators now, including Groq's. But it's a double-down on
LLM-focused hardware rather than a qualitative change. While I expect NVIDIA to tell us more about
chips optimized for agentic AI, this deal does not address that either. Ultimately,
AI needs a few more big step functions, like deep neural nets for training and large language
models for generative AI. When that next step happens is unpredictable, and that unpredictability
is the real cause of the industry's worry about an AI winter,
and that next step may need a chip optimized all for itself.
There is, as usual, a lot of quantum news.
We'll mention a few focused on the integration of quantum into HPC
and building national infrastructures.
IonQ and KISTI plan to integrate a 100-qubit system
into a KISTI supercomputer to create the country's first
on-site hybrid quantum-classical environment for research into such areas as health care, finance,
and materials science. In Europe, CESGA in Spain is working with telco company Telefonica
and the European quantum computing vendor IQM to integrate a 54-qubit IQM system with the FinisTerrae
4 supercomputer. And on the defense side, Infleqtion in the U.S., formerly
called ColdQuanta, received a $2 million contract from the U.S. Army to develop quantum-inspired
machine learning for navigation in GPS-denied environments, using advanced sensor fusion and
CUDA-Q, NVIDIA's programming platform for quantum-classical applications.
The quantum computing industry has a ways to go, but it's sounding more confident,
bringing roadmaps forward and talking about quantum speed advantage in a real way before
2030, and maybe even as early as next year. To do that, it needs to make progress towards scale
and fault tolerance, which require fidelity and control in addition to error correction.
Singapore's quantum ecosystem, comprising A*STAR, NUS, and NTU, together with Keysight, announced
a five-year partnership to co-develop control technologies for scaling up. The goal is to
standardize design, testing, and cryogenic measurement so experimental systems can become
manufacturable platforms. Riverlane introduced what they call an adaptive hardware decoder
for real-time quantum error correction. Error correction needs to minimize the number of physical
qubits required to form a fault-tolerant logical one, and to do so in real time. The Riverlane technology
aims to adapt to correlated noise to improve the physical-to-logical qubit
ratio, and to do it all in under a microsecond. On applications, Xanadu published a quantum framework to
accelerate quantum chemistry, which can be used for drug discovery, and QubitSolve received an NSF grant
to advance quantum computational fluid dynamics. Speaking of NSF, earlier this month, reports emerged
that the Trump administration plans to dismantle the U.S. National Center for Atmospheric Research,
NCAR, a major climate and weather research center funded by the National Science Foundation.
The news was announced on December 16th on X, formerly Twitter, by Russell Vought,
director of the White House Office of Management and Budget. Vought reportedly described
NCAR as, quote, one of the largest sources of climate alarmism in the country.
Naturally, the folks who work at and run NCAR oppose the decision, pointing out the significant
climate work NCAR has done since its founding in 1960, including, for example, modeling and
predicting wind shear, which has been associated with lethal airliner crashes.
This news is the latest in a series of budget and staff cuts that show how federal government
funding of research has changed significantly this year. Proponents of the cut argue that
NCAR has become too much of a political activist, and that any vital weather activities can be moved to
other parts of the massive government bureaucracy, or can be added back later, and more efficiently.
Opposing the decision are critics who see the closure itself as politically motivated.
And then there are scientists and local leaders who have warned of consequences for public safety
because of reduced ability to predict and prepare for severe weather, loss of essential data,
and ultimately a setback for U.S. scientific leadership.
All right, that's it for this episode. Thank you all for being with us. Thank you for being with us
all year. Happy holidays, Happy New Year, and looking forward to another year with you next year.
All the best in 2026. HPC Newsbytes is a
production of OrionX in association with InsideHPC. Shaheen Khan and Doug Black host the show. Every
episode is featured on insidehpc.com and posted on Orionx.net. Thank you for listening.
Thank you.
