@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20240701
Episode Date: July 1, 2024
Topics: Intel's Silicon Photonics Milestone; Intel's 144-core-now, 288-core-next-year Xeon Sierra Forest CPU; Quantum Advantage: Time vs. Space; Microsoft Concludes Undersea Datacenter Project
Audio: https://orionx.net/wp-content/uploads/2024/07/HPCNB_20240701.mp3
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone.
Welcome to HPC News Bytes.
I'm Doug Black.
Hi, Shaheen.
We start this week with what has been called an important milestone in silicon photonics.
We're talking about Intel's optical I/O chiplet integrated with a CPU. The prototype chip has a bandwidth of four terabits per second and consumes about a third of the energy of traditional copper-based interconnects. Shaheen, one of the most popular episodes of the @HPCpodcast is our conversation with Professor Keren Bergman of Columbia, an expert and entrepreneur in the field. She told me Intel's news is, quote, an important milestone. Yes,
that was episode 54, and you really should go listen to it. The integration of optical and electronic technologies is making great progress. Putting this in a chiplet makes it part of the menu of capabilities for large chips. They said it's compatible with PCIe 5, which avoids creating a new standard. Distance is a big limitation for high-end interconnects, and this can go up to 100 meters, which is plenty for many racks. And at 4 terabits per second, that's 500 gigabytes per second, and at a third of the energy, it is looking very good. There is a lot going on in this area, with lasers and LEDs and at various layers of a system, but we're still a year or two away from commercial availability. An important milestone indeed.
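As a quick back-of-envelope check on that conversion (a sketch added here, not from the episode):

```python
# Back-of-envelope: convert the chiplet's quoted 4 Tb/s to bytes per second.
bits_per_second = 4e12            # 4 terabits per second (decimal units)
bytes_per_second = bits_per_second / 8
print(bytes_per_second / 1e9)     # 500.0 -> 500 gigabytes per second
```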
AMD's success has put to rest claims that x86 performance was limited by its architecture. x86 vanquished its competition in the last decade,
only to be challenged by ARM and increasingly RISC-V.
But it continues to be the mainstream of computing.
So where is Intel in all of this, you might ask?
Well, in the early 2000s, in the transition from 32-bit to 64-bit,
AMD got the upper hand with its Opteron product,
but ended up losing the plot as Intel caught up
and led with Xeon. AMD got its innovation mojo back and is leading again, while Intel found
itself in comeback mode again, and its manufacturing fell behind TSMC. Earlier in June, Intel showed
more progress, with announcements that point to a stronger competitive position. Yes, Intel reiterated its plans for its Gaudi 2 and Gaudi 3 AI accelerators,
but the new news was about a 288 E-core Xeon Sierra Forest CPU coming early next year,
enabling them to tout more hardware threads than the latest AMD EPYCs.
There is a bit of a core count race going on between those two companies,
along with Ampere,
and this could be a big deal for Intel. But for now, Sierra Forest has 144 cores, which of course
is nothing to sneeze at, but 288 cores remain six to nine months away. With that many cores,
the line between CPUs and low-end accelerators starts to blur a bit. So we'll see if the chips have provision to use
all those cores in concert. Now, Shaheen, I'll be interested in your views on news coming out of a
research project by Sandia Labs and Boston University. This is all about quantum versus
classical computing, and the researchers say their work shows quantum delivering superior results
solving a particular type of advanced
math problem. But in this case, the quantum advantage isn't faster processing, but the use
of far less memory. Possible advantages of quantum computing are speed, accuracy, and energy. This is
excellent work that splits the speed advantage into time and space: just processing faster versus requiring fewer resources. They show an exponential quantum space advantage for a so-called natural streaming problem that includes discrete optimization, implying that the method can be used for many applications. Quantum advantage can be polynomial or exponential, and as it sounds, exponential advantage is what would make quantum computers leapfrog
existing technologies.
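To give a feel for what an exponential space advantage means, here is a hypothetical illustration (not the actual Sandia/Boston University construction): if a streaming problem of size n needs on the order of 2^n classical bits but only polynomially many qubits, the gap explodes quickly.

```python
# Hypothetical illustration of an exponential space advantage:
# classical memory growing like 2**n versus quantum memory growing like n**2.
# (Illustrative scaling only; not the Sandia/Boston University result.)
for n in (10, 20, 30, 40):
    classical_bits = 2 ** n
    quantum_qubits = n ** 2
    print(f"n={n:2d}: ~{classical_bits:>15,} classical bits vs ~{quantum_qubits:,} qubits")
```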
You might remember experiments with underwater data centers: containers of computers on the seafloor having no problem staying cool, and maybe even using waves to generate energy.
And of course, no worries about real estate.
But hold that thought.
Microsoft was one of the experimenters with its Project Natick. The project started in 2013, deployed a test system off the coast of Scotland in 2018, and has just concluded. It doesn't look like it will be continued, presumably because of issues like physical security and cybersecurity, including networking; challenges with upgrading or maintaining systems; and connectivity costs. It's notable that just as this
project is ending, China began a similar project last year. According to the publication Data Center Dynamics, Microsoft submerged 855 servers in the Scotland deployment and left them to run for 25 months, and only six of the servers broke down. Microsoft stopped using them despite the undersea servers having one-eighth the failure rate of on-land data centers.
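Those reliability figures check out with a quick back-of-envelope calculation (a sketch added here; the one-eighth comparison is Microsoft's, not derived from this arithmetic):

```python
# Back-of-envelope: failure rate of the submerged Natick servers.
servers, failures, months = 855, 6, 25
undersea_rate = failures / servers
print(f"undersea: {undersea_rate:.2%} over {months} months")   # ~0.70%
# Microsoft reported one-eighth the failure rate of land data centers,
# implying a comparable land-based rate of roughly:
print(f"implied land rate: {undersea_rate * 8:.2%}")           # ~5.61%
```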
Another factor is the maturing of liquid cooling, which was not available in 2013, when the notion of servers cooled with water or submerged data centers was radically new.
Microsoft said it will continue to use Project Natick as a research platform to test new concepts
around data center reliability and sustainability. All right, that's it for this episode. Thanks so
much for being with us. HPC News Bytes is a production of OrionX in association with Inside
HPC. Shaheen Khan and Doug Black host the show.
Every episode is featured on InsideHPC.com and posted on OrionX.net. Thank you for listening.