@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20260413
Episode Date: April 13, 2026 - Intel Tesla Terafab - Intel Google CPU IPU - Fujitsu U. Osaka early-FTQC - Caltech data sample streaming for quantum computing in AI - UALink 1.0 Specs vs NVLink [audio mp3="https://orionx.net/wp-content/uploads/2026/04/HPCNB_20260413.mp3"][/audio] The post HPC News Bytes – 20260413 appeared first on OrionX.net.
Transcript
Welcome to HPC Newsbytes, a weekly show about important news in the world of supercomputing,
AI, quantum computing, and other advanced technologies.
Hi, everyone. Welcome to HPC Newsbytes. I'm Doug Black, and with me is Shaheen Khan.
In the annals of the chip industry, Intel may be engaged in the biggest turnaround story
since its chief CPU rival, AMD, named Lisa Su CEO of that company in 2014.
There's irony here because, of course, AMD's turnaround
took place at Intel's expense to a great extent. One interesting aspect of Intel's revival
is that some credit must go to Pat Gelsinger, who was CEO for nearly five years until his resignation in
December 2024. He was succeeded three months later by Lip-Bu Tan, and at that time,
some industry observers wondered out loud why he took the helm of the struggling company.
But measured by Intel's stock price, he has presided over an impressive reversal. Shares were
priced at about $23 the day he was named Intel's chief. Now they're at $62.
Shaheen, is it reasonable to say Intel seems to have turned the corner when LBT, as he's called
at Intel, turned around his relationship with President Trump? We recall that last August,
soon after Trump demanded LBT resign from Intel because, the president said, he was
highly conflicted due to past business ties to China. LBT then met with Trump at the
White House, and soon enough, the administration announced a 10% equity position in the company.
The stock was worth $24 when the deal was announced, so that investment is paying off handsomely.
That news was followed by NVIDIA taking a 5% stake in the company, which also helped the Intel
cause.
Now there's significant new news coming from Intel.
Shaheen, would you give us a summary?
Well, regarding the stock price, if you took your eyes off of Intel stock for a minute,
you'd be surprised by how much it has risen in the past few weeks.
The reasons seem to be a one-two punch of sorts with announcements that you referenced.
Intel is at least two companies baked into one.
A chip manufacturing company with massive geopolitical interest,
and a chip selling company with massive market interest,
since right now it generates most of the revenue.
They covered both sides in these announcements.
First, there was an announcement complete with a photo of what looked like a meeting over the weekend
between Lip-Bu Tan and Elon Musk at Intel headquarters that Intel would design, manufacture,
and package high-end chips at scale for Tesla, SpaceX, and xAI.
It is a good move for the so-called Terafab to partner with Intel,
the only American company that can manufacture leading-edge chips,
and it boosts Intel's move towards manufacturing chips for other
companies. You've heard this here before: if you want high-end chip manufacturing in the US by an
American company, Intel is your only choice. If you want the most leading-edge chip manufacturing and
packaging in the US by any company, then Intel is your only choice. And then a few days later,
a separate announcement was made with Google that said Intel's CPUs will continue at Google Cloud
for all apps. A big message is what we covered a couple of weeks ago: the C in
CPU stands for central, and the CPU is retaining its central role. The announcement said the
companies would also work together on custom ASICs for infrastructure processing units, IPUs.
IPU is the label they put on specialized programmable chips that offload OS and infrastructure
tasks from the CPU so it can run more user applications. They're the same as, or at least in
the same category as, DPUs and NPUs and
perhaps other labels. The world is waiting for the next big leap forward for quantum computing
in the form of it establishing superiority over classical HPC and also performing useful work
handling research or business workloads. Now we're hearing vendor reports, some would call them
claims, that quantum is doing just that. In Japan, Fujitsu and the University of Osaka
announced late last month that they have combined Fujitsu's phase rotation gate quantum computing
architecture with a molecular model optimization technique in a way that cuts computational
resource requirements. This use of early fault-tolerant quantum computing or early FTQC is
enabling energy calculations for chemical materials such as catalyst molecules within a realistic
time frame, something that Fujitsu said can't be done using current classical systems.
Fujitsu gives the credit to the new version of its STAR (spelled in all caps) quantum architecture.
And they say it will address challenges like drug discovery, improving the efficiency of ammonia
synthesis processes, and advanced carbon recycling technologies.
We've also heard from D-Wave for over a year that a customer in Turkey, an auto
manufacturer affiliated with Ford, is doing useful work with D-Wave systems, but at least one quantum
scientist, Olivier Ezratti, with whom Shaheen and I will soon post a podcast interview, has expressed
broad doubts about these vendor reports. It's the usual refrain, a lot of impressive progress
and a lot that remains to be done, but quantum computers continue to chip away at it, and
occasional breakthroughs change the landscape. There was also an update from the quantum
lab at Caltech, California Institute of Technology, that is, on novel uses of quantum computers for
AI. Basically, they suggest that even a small quantum computer can perform certain AI tasks,
such as classification or dimensionality reduction, by learning from massive classical
data sets in a new way, by streaming samples rather than storing the full dataset. Data
flows through sequentially, one sample at a time, and each sample
incrementally updates a shared quantum state. That state represents a highly compressed,
high-dimensional structure, potentially using far fewer qubits than a classical system
would require for an equivalent representation. So the advantage is capturing more useful structure
per unit of memory. We can think of the quantum system as a kind of compression engine accelerator
for statistical patterns. But there are trade-offs, as usual. It does not reduce
data requirements. It may even require more sampling and multiple passes over the data because the
data is not stored and it just flows through. It assumes data can be accessed in a streaming
or random sampling fashion, which shifts but does not eliminate the I/O bottleneck, and it is best suited
for statistical learning tasks, not general purpose data processing or tightly coupled computations.
That trade-off can be worthwhile though if a quantum accelerator delivers significant
gains in representation efficiency. The paper points to that theoretical potential.
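The paper's quantum circuits can't be reproduced here, but the streaming idea itself, one pass over the data with each sample incrementally updating a small shared state instead of storing the dataset, has a familiar classical skeleton. A minimal sketch, assuming nothing from the Caltech work beyond that idea (the class name and a Welford-style mean/covariance accumulator are illustrative choices of ours):

```python
import numpy as np

class StreamingMoments:
    """One-pass (Welford-style) running mean and covariance.

    Classical analogue of the streaming scheme: samples flow through
    sequentially, each one updating a compact shared state; the full
    dataset is never held in memory.
    """

    def __init__(self, dim: int):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # accumulated outer-product deviations

    def update(self, x: np.ndarray) -> None:
        # Incremental update: state after n samples depends only on the
        # previous state and the new sample, never on stored history.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    @property
    def covariance(self) -> np.ndarray:
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)
```

The quantum claim is that a qubit register could hold a far more compressed representation of such statistical structure than the explicit d-by-d state this classical sketch maintains.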
Conceptually, this aligns very well with a key attribute of HPC, extracting maximum value from
constrained resources. We should report that the Ultra Accelerator Link Consortium has announced
that the UALink 200G 1.0 spec is now available. UALink is an open, scale-up
interconnect for AI workloads, and the consortium has more than 85 member companies.
The new spec defines a low-latency, high-bandwidth interconnect for communication between
accelerators and switches in AI computing pods. And the consortium says the spec enables 200G per
lane scale-up connection for up to 1,024 accelerators. Yeah, excellent progress by the community there.
Every interconnect is a battlefield, and at the GPU layer, NVLink filled the gap and ran away with it.
PCIE-based switches are out there and can fill some of the needs, and even Ethernet-based interconnects are getting faster and have big uses in AI.
In this arena enters UALink, trying to build an open multi-vendor ecosystem and eventually catch up, and it's been doing great work towards that.
UALink products are expected in the late-2026 to 2027 time frame, so we have to wait for proof.
And by then, NVLink is expected to be in its next generation.
So we'd be comparing two future products with all the risks associated with that.
But assuming projections are close to valid, NVLink next year would be about 2x faster than UALink next year:
a peak of 3.6 terabytes per second for NVLink, which would be using 400G technology,
compared to 1.8 terabytes per second for UALink based on 200G.
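Those peak figures follow from simple per-lane arithmetic. A back-of-envelope sketch; the 36-lane configuration here is our own assumption, chosen only to reproduce the quoted numbers, not a lane count taken from either spec:

```python
def link_bandwidth_tbps(lane_rate_gbps: float, lanes: int,
                        bidirectional: bool = True) -> float:
    """Peak aggregate bandwidth in TB/s for a multi-lane link.

    lane_rate_gbps is the per-lane signaling rate in Gb/s, per direction.
    """
    gbytes_per_lane = lane_rate_gbps / 8          # Gb/s -> GB/s, one direction
    total_gbps = gbytes_per_lane * lanes * (2 if bidirectional else 1)
    return total_gbps / 1000                      # GB/s -> TB/s

# 200G lanes, hypothetical 36-lane port: matches the 1.8 TB/s UALink figure.
print(link_bandwidth_tbps(200, 36))   # 1.8
# Doubling the lane rate to 400G yields the 3.6 TB/s NVLink figure.
print(link_bandwidth_tbps(400, 36))   # 3.6
```

The 2x gap in the episode is thus purely the 200G-versus-400G lane rate; lane counts and topology being equal, the spec decoupling UALink announced is what would let it chase that rate faster.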
With this announcement, UALink decouples the hardware layer specs from the rest of the stack,
and that should allow faster progress on speed.
And for a lot of GPUs, UALink will be just fine.
And the UALink technology aims to support more GPUs, 1,024, as you said,
compared to NVLink, which will be going from 72 now to
576 next year. So UALink could in principle build some seriously big systems. Another attribute is the whole
open versus closed aspect. NVIDIA's story is they build the full system, but you can use all of it or any
part of it that you want. That story works well, as long as they execute on all fronts, which they have
been. If they ever have a misstep, however, then the open technology alternatives can enter the
NVIDIA ecosystem. Until then, it's the rest of the market teaming up to catch up.
All right, that's it for this episode. Thank you all for being with us.
HPC Newsbytes is a production of OrionX. Shaheen Khan and Doug Black host the show.
Every episode is posted on Orionx.net. If you like the show, please rate and review it.
Thank you for listening.
