@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20260427
Episode Date: April 27, 2026 - Cisco Universal Quantum Switch - Co-Packaged Optics (CPO) enters real-world deployments - CPO: AMD, NVDA, TSMC, GlobalFoundries - Vox checks in on Los Alamos use of AI for nuclear simulations. [Audio: https://orionx.net/wp-content/uploads/2026/04/HPCNB_20260427.mp3]
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, quantum computing, and other advanced technologies.
Hi, everyone. Welcome to HPC News Bytes. I'm Doug Black, and with me, of course, is Shaheen Khan.
Cisco announced last week their universal quantum switch,
which is designed to route quantum information between systems while preserving that information
with a conversion engine that translates between all encoding and entanglement modalities at input and output.
Cisco said that in proof-of-concept experiments,
the switch preserved quantum information with an average degradation of 4% or less in encoding and entanglement fidelity.
Their complete findings are expected to be published
in an upcoming research paper on arXiv.
It's nice to see Cisco, the original networking giant,
get into quantum networking in a visible way. It looked for a bit like the company might have missed
the cloud and early AI waves, but it has emerged as a key provider of AI infrastructure and
security. It projected $3 billion in AI-related revenues for its 2026 fiscal year,
pointing to growing industry demand for high-performance 800G Ethernet, and AI integrated security
which it supplies via its Splunk and HyperShield brands.
It is also benefiting from a campus network refresh cycle
that supports its core networking segment.
Back to quantum computing and networking,
Cisco has been investing in quantum networking research for years
through its Outshift incubation group
and Cisco Quantum Labs in Santa Monica.
This includes the development of a quantum network entanglement chip
in collaboration with UC Santa Barbara,
and the universal quantum switch,
which they announced last week, as you mentioned.
So what the switch aims to do
is to connect multiple quantum computers.
For that to work,
the computers and the switches
all interface via photonic qubits.
They all need to speak the language of photonic qubits,
so to say.
To send or receive a qubit,
a quantum computer would need to map its internal qubit
to or from photonic qubits.
This is roughly similar to what a NIC, a network interface card, does in classical systems.
The switch then receives, routes, and delivers those photonic cubits while preserving their quantum state.
The switch is called universal because it is designed to manage different photonic encodings,
wavelengths and timing, or phase alignments, while maintaining fidelity as the photons get routed.
That means the source and the destination do not have to match each other's
exact photonic implementations. In this way, endpoints hide the differences in the physics of
their qubits, and the switch, the network layer, hides the complexity of encoding and entanglement
routing. In more advanced scenarios, the switch would also do entanglement swapping to stitch
multiple switches together or to re-encode the photonic signals. But what was announced is a prototype.
So all of this is the usual, promising, but forward-looking state of things.
Most systems right now would require exact matching at source and destination,
and entanglement swapping, especially at scale, is at an early stage.
So think of this as a credible direction for a class of quantum networking infrastructure
in the year 2030 or later.
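That routing model can be caricatured in a few lines of code. This is a purely illustrative classical sketch: the class names, the encoding labels, and the flat 4% fidelity cost are assumptions for illustration only, and a real switch operates on quantum states that cannot be simulated this way.

```python
# Toy model of a "universal" quantum switch: endpoints with different
# photonic encodings can still exchange qubits, at some fidelity cost.
# All names and numbers here are illustrative, not Cisco's design.

from dataclasses import dataclass

@dataclass
class PhotonicQubit:
    encoding: str      # e.g. "polarization", "time-bin", "frequency-bin"
    fidelity: float    # 1.0 = perfect preservation of the quantum state

class UniversalSwitch:
    """Routes photonic qubits, re-encoding when endpoints differ."""
    MAX_DEGRADATION = 0.04  # the reported <= 4% average degradation

    def route(self, q: PhotonicQubit, dest_encoding: str) -> PhotonicQubit:
        if q.encoding == dest_encoding:
            return q  # pass-through: no conversion needed
        # Conversion engine: translate the encoding, charge a fidelity cost.
        return PhotonicQubit(dest_encoding,
                             q.fidelity * (1 - self.MAX_DEGRADATION))

switch = UniversalSwitch()
out = switch.route(PhotonicQubit("time-bin", 1.0), "polarization")
print(out.encoding, round(out.fidelity, 2))  # polarization 0.96
```

The point the sketch makes is the one in the discussion: the source sends a time-bin qubit, the destination receives a polarization qubit, and neither endpoint had to know about the other's physics.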
Shaheen, I know we discuss silicon photonics on this podcast quite often,
but in my view it's warranted, because co-packaged optics, or CPO, has the potential to significantly boost chip performance while reducing heat and energy consumption.
CPO moves data using lasers rather than copper-based electronic interconnects, which are slower and produce more heat.
Now there's a published report that AMD will utilize CPO within its upcoming MI500 accelerators scheduled for 2027.
Reports in Game Resource GPU and Guru3D
state that AMD has partnered with GlobalFoundries
on CPO interconnects, and that packaging will be handled by ASE.
Nvidia, of course, has already said its upcoming Rubin GPU platform
will incorporate CPO, available later this year or in 2027.
All of this is in keeping with what our friend Keren Bergman of Columbia University
told us on our podcast last year, this was her second appearance on the @HPC Podcast,
that CPO technology is proven.
It's now a matter of getting the technology integrated within broadly commercial chips,
this after years of skepticism, false starts,
and a broad R&D effort by leading chipmakers and startups backed by enormous investments.
Shaheen, what do you make of it?
Yes, co-packaged optics, or CPO, has been talked about for so
long, and it's finally and firmly entering real-world deployment. The idea is to integrate optical
transceivers which convert electrical signals to light and back directly into the same physical
chip package as an electronic chip such as a switch ASIC, CPU or GPU. It is fundamentally a
chiplet kind of technology, relying heavily on chiplet architectures and 2.5D or 3D packaging to
achieve the high-speed low power that is required. It is transitioning from prototype to production
thanks to AI. As a large and expanding market, when AI needs something, the funds are usually there.
And AI needs ever larger and faster memory and ever faster interconnects and lower power
and cooling whenever it can get it. Companies like Broadcom have already shipped tens of thousands
of CPO-enabled switches, marking a clear shift from prototype to
mainstream deployment. Broadcom is the main player here. And as you mentioned, AMD and
NVIDIA are now driving it too, working with TSMC and GlobalFoundries. So in 2026, it is no longer
experimentation, but a rapid transition into volume production and becoming part of core AI
infrastructure. And the technology is advancing quickly as well. In 2025, 1.6 terabit solutions
represented a significant share of deployments.
By the second half of 2026, we expect to see volume ramps of next generation 6.4 terabit per package
systems.
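As a quick sanity check on those figures, the step from 1.6 to 6.4 terabits per package is a 4x generational jump. The per-lane breakdown below is an illustrative assumption, not a vendor specification.

```python
# Back-of-the-envelope on the per-package CPO bandwidth figures mentioned
# above. The 200G-per-lane assumption is illustrative, not a vendor spec.

def lanes_needed(total_tbps: float, lane_gbps: int) -> int:
    """Number of optical lanes needed for a given aggregate rate."""
    return round(total_tbps * 1000 / lane_gbps)

gen_2025 = 1.6   # Tb/s per package, shipping in volume in 2025
gen_2026 = 6.4   # Tb/s per package, expected volume ramp in 2H 2026

print(f"Generational jump: {gen_2026 / gen_2025:.0f}x")      # 4x
print(f"1.6T at 200G/lane: {lanes_needed(gen_2025, 200)}")   # 8 lanes
print(f"6.4T at 200G/lane: {lanes_needed(gen_2026, 200)}")   # 32 lanes
```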
There's a long layman's article out in the publication Vox about Los Alamos National Lab
using OpenAI's ChatGPT to analyze nuclear weapons testing data.
It's also the latest in the longstanding combination of advanced computing and nuclear
weapons testing simulation rather than testing in the real world.
One of the Los Alamos officials featured in the article is Gary Greider, a pioneer in
HPC storage and a guest on our podcast. At one point in the article, Gary is showing the
Vox reporter where ChatGPT is running on Venado, a 10-exaflops AI supercomputer. This is an HPE Cray
and Nvidia system. And Gary is quoted as saying, the world's nuclear interest
information is right in there. You're looking at it. A theme of the article is the combination of
AI and nuclear weapons, two entities that are regarded by many as threats to the future of mankind.
Quote, the researchers there were remarkably sanguine about the more existential risks that often
come up in conversations about AI, even as they worked on the production of the world's most
dangerous weapons, writes Joshua Keating. He then quotes Alex Schenker, a
Los Alamos physicist who said, quote, AI is just more math. I don't like to think about it like
it's magic. Shaheen, what are your impressions of the article? Well, combining AI and nuclear seems like
multiplying risks. But if anyone understands the sensitivity of this work, it's the scientists at Los Alamos
National Laboratory. And as the article says, the whole thing is pretty well contained. The current
use of AI appears focused on simulations and research workflows that are upstream from direct
physical applications. Access to these systems is tightly controlled in a highly secure environment,
which significantly reduces external risk. At the same time, it's important not to overstate
containment. While direct access is tightly controlled, which usually also means
that the system cannot easily reach outside of its sandbox,
downstream impact can happen in indirect ways.
Even in restricted settings, advanced AI can have dual-use implications,
and the knowledge the AI system holds and helps generate may carry broader consequences.
The researchers seem very aware of this,
and AI is being used as a tool to augment scientific work rather than to control sensitive
operations. Overall, this is a measured and deliberate way to explore how AI can accelerate research
while retaining safeguards. That said, the broader challenge of AI governance remains unresolved,
as we have covered in previous episodes of this podcast. Recent news, such as Anthropic limiting
access to its more capable models because of concern over their misuse, highlights how difficult
it is to balance innovation with security.
The question of how to enforce effective global AI governance is still very much open.
All right, that's it for this episode.
Thank you all for being with us.
HPC News Bytes is a production of OrionX.
Shaheen Khan and Doug Black host the show.
Every episode is posted on OrionX.net.
If you like the show, please rate and review it.
Thank you for listening.
