@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20250908
Episode Date: September 8, 2025
- Cooler AI: Cutting energy costs with Adiabatic Reversible Computing
- More Heat in Chip Wars: OpenAI joins the race to design custom AI silicon
- Ironwood Rising: Google’s next-gen chip may debut... in Neoclouds
- Made in India: Nation unveils its latest fully homegrown chip
- HPC User Forum 2025
- Europe’s Exascale: Jülich unveils its 64-bit powerhouse
- Quantum Cash Flow: Industry kicks off September with billion-dollar momentum

[audio mp3="https://orionx.net/wp-content/uploads/2025/09/HPCNB_20250908.mp3"][/audio]

The post HPC News Bytes – 20250908 appeared first on OrionX.net.
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone. Welcome to HPC News Bytes. I'm Doug Black of InsideHPC, and with me is Shaheen Khan
of OrionX.net. In what EE Times calls a world first, a British tech startup named Vaire
said they've demonstrated, quote, an adiabatic reversible computing system with net energy recovery, unquote.
Vaire hopes to commercialize chips that can compute with virtually no energy being used.
Shaheen, this is new to me, but according to EE Times, reversible computing dates back to 1961.
It has to do with undoing computations to gain back energy to prevent energy being lost as heat.
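Landauer's 1961 result puts a number on that lost heat: erasing one bit must dissipate at least k_B·T·ln 2 of energy, and reversible computing avoids the erasure. A quick back-of-the-envelope check, assuming room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0           # room temperature in kelvin (assumed)

# Landauer limit: minimum energy dissipated per irreversible bit erasure.
landauer_j = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j:.3e} J per bit")
# ~2.87e-21 J -- real CMOS gates dissipate many orders of magnitude more,
# which is the gap that reversible designs ultimately aim to close.
```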
Vaire's CEO, Rodolfo Rossini, came across reversible computing in 2018 while looking at
what are called orphan technologies, which were ideas abandoned in the 1980s and 90s because
Moore's Law meant they weren't needed. Well, when Moore's Law was cranking out new chips,
you could just wait a few months and a faster chip would show up. So yes, this is partly because
of Moore's Law. But this is interesting really because of AI. AI chips are running hotter
and hotter, you need more and more of them, and there's no end in sight. So everyone is looking
for ways to gain energy efficiency via new architectures like neuromorphic computing, new technologies
like optics, new more holistic energy management across the full data center, and now driving
that concept all the way down to the transistor. For reversible computing, picture this. Energy
flows in from a power supply to transistors to do the computation. When it's done,
the result is held, and then the flow of energy is reversed back to the power supply to be used again.
It's like pumping it back and forth instead of in one direction, and it can all be under the hood.
One analogy I have seen is regenerative brakes in hybrid cars, but this goes further.
Vaire has built the actual logic gates and made it work with standard CMOS technology,
which is not the ideal foundation, so they've solved a harder problem than required.
But other big steps remain as they work to show that reversible computing can be commercially viable.
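One standard way to see where the savings come from is adiabatic, slowly ramped charging of a capacitive load. Charging a capacitor C to voltage V abruptly through a resistor R dissipates CV²/2 in the resistor no matter how small R is; ramping the supply slowly over a time T much greater than RC dissipates only about (RC/T)·CV². A rough numeric sketch with illustrative gate-scale values, not Vaire's actual figures:

```python
# Energy dissipated charging a capacitive load (e.g., a gate's input capacitance).
# Illustrative values only -- not Vaire's actual process parameters.
R = 1e3     # series resistance, ohms (assumed)
C = 1e-15   # load capacitance, ~1 femtofarad (assumed)
V = 1.0     # supply voltage, volts (assumed)

abrupt = 0.5 * C * V**2             # step charging: CV^2/2 lost as heat, independent of R
T = 1e-9                            # ramp time of 1 ns, much longer than RC = 1 ps
adiabatic = (R * C / T) * C * V**2  # slow-ramp approximation, valid for T >> RC

print(f"abrupt:    {abrupt:.2e} J")
print(f"adiabatic: {adiabatic:.2e} J ({abrupt / adiabatic:.0f}x less heat)")
```

The energy not dissipated has to flow back into the supply, which is why these designs pump charge in and out rather than dumping it to ground.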
Now, this can sound like a perpetual motion machine, and we know that is not possible,
thanks to the first and second laws of thermodynamics.
Energy is conserved and cannot be created, and energy transformation increases disorder and loses efficiency.
As you mentioned, they call it, quote, adiabatic reversible,
which means a system that is thermodynamically at maximum efficiency,
but short of perpetual motion.
It is not an attainable objective anyway,
but getting even a little closer can be a big deal.
So you want to avoid dissipating heat,
and that leads to reversible,
which keeps the system at equilibrium at all times,
which allows you to take energy back and forth,
but it also means it can be slow.
And you want to recycle energy,
and that leads to adiabatic,
which means a fully sealed system
where zero heat goes in or out.
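Reversibility at the logic level just means no information is destroyed: every gate maps distinct inputs to distinct outputs, so the computation can be run backwards. A minimal sketch using the Toffoli (controlled-controlled-NOT) gate, which happens to be its own inverse:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target c iff both controls a and b are 1."""
    return a, b, c ^ (a & b)

inputs = list(product([0, 1], repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

# Reversible: the mapping is a bijection, a permutation of the 8 possible states...
assert sorted(outputs) == sorted(inputs)
# ...and Toffoli is self-inverse: applying it twice restores every input.
assert all(toffoli(*toffoli(*bits)) == bits for bits in inputs)
print("Toffoli is a bijection and self-inverse: no bits are ever erased.")
```

A plain AND gate, by contrast, maps four input pairs onto two outputs, so a bit is erased and the Landauer cost applies.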
Side note here,
quantum computing fundamentally relies on reversible logic, and some of the logic gates in
quantum computing, like the Toffoli gate, are based on the same research by the same
scientists in the 70s. Quantum logic operations or quantum gates must be reversible to preserve
the delicate quantum states of superposition and entanglement, which all becomes irreversible
the moment you measure the state. In other alternative chip news, the Financial Times
reports that OpenAI is set to begin production of its own AI chips with Broadcom, which
means OpenAI joins the list of companies trying to reduce their reliance on Nvidia. Their order
with Broadcom could total $10 billion, according to the FT, and the chips may ship as early as next
year. OpenAI began collaborating with Broadcom last year, and the chip will be for internal
use as of now rather than availability to outside customers. OpenAI reportedly wants to
double its aggregate compute capacity by the end of this year, so all of that will still be
Nvidia and AMD. But they've also increased their capacity by 15x since 2024, presumably had to
delay the release of GPT-4.5 because they didn't have enough capacity, said they will soon
have more than 1 million GPUs, and also say they will have to figure out how to do 100 times that.
Yeah, we also have to remember their vision last year for a $7 trillion, with a T, global initiative to reshape the semiconductor industry.
So it's not surprising that OpenAI would design its own chip.
All big users have to evaluate that option, and we know Meta, Google, Microsoft, Amazon, Alibaba, and perhaps others are doing it, even as they stock up with merchant chips.
Along these lines, last week we mentioned Google's latest TPU and its liquid-cooled
rack system called Ironwood, expected to ship later this year. It scales to 9,216 TPUs with 1.77 petabytes of
addressable memory, connected by an optical interconnect that uses AI to optimize speed.
Well, it was reported last week that Google is interested in shipping these racks to GPU-as-a-service
providers, the so-called neoclouds. This is an excellent next step for Google to expand outside
of its own walls. Also last week, India announced a milestone with
an indigenously designed and manufactured 32-bit processor called Vikram 3201. It is designed by the
Indian Space Research Organization, ISRO, and is a 32-bit follow-on to the 16-bit Vikram 1601. Both are
manufactured at ISRO's Semiconductor Laboratory's 180-nanometer fab, which was launched in 2016.
It's another milestone in the country's push for semiconductor self-reliance, according to a story in the Economic Times.
It's old manufacturing technology, but it seems to serve a very good series of use cases.
Very much. The processor is designed for space launch vehicles, so it is able to withstand a wide temperature range, from minus 55 to plus 125 degrees Celsius, and needs very little electricity,
we're talking one-half of one watt,
and uses a custom instruction set
that is optimized for the Ada language,
which carries on in many safety-critical systems,
especially in the government market.
The chip can be used in similar embedded environments
in defense, aerospace, and automotive.
It's an impressive little processor,
and it's already been validated in space
in one of their missions.
India's technology prowess is not in doubt.
So one should expect rapid progress.
The country reportedly
has more than 20 design startups receiving support from the Indian government.
Okay, I want to mention I was at last week's HPC User Forum, held near Washington, D.C.,
and hosted by Hyperion Research.
The conference drew more attendees than usual, and had its usual high-quality roster
of presenters discussing new developments in data centers, HPC-AI, and quantum.
Also notable were two major fundraising pieces of news in quantum computing.
IQM, based in Finland, received $300 million, and Quantinuum, based in the UK and the U.S., doubled that with a $600 million capital raise.
If you add a few others, quantum computing got over $1 billion just in September so far.
Not bad at all.
And thanks to AI's eye-popping numbers, it doesn't even sound that big.
I mean, OpenAI raised $40 billion in March, which brings its total to over $64 billion so far.
We should also mention that Europe unveiled its first exascale-class supercomputer in 64-bit performance
at the Jülich Supercomputing Centre in Germany.
This is the JUPITER system we've talked about in the past, built by Eviden,
and emerging as a showcase for a hybrid CPU, GPU, QPU complex.
We'll see where it is in the top 10 of the next Top 500 list in November.
All right, that's it for this episode.
Thank you all for being with us.
HPC News Bytes is a production of OrionX in association with InsideHPC.
Shaheen Khan and Doug Black host the show.
Every episode is featured on insideHPC.com and posted on OrionX.net.
Thank you for listening.
