@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20240520
Episode Date: May 20, 2024
Topics: Who will join the Exascale club next? - Nvidia "paints it green" - Other AI chips impress - Student Cluster Competition winners
[audio mp3="https://orionx.net/wp-content/uploads/2024/05/HPCNB_20240...520.mp3"][/audio]
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone.
Welcome to HPC News Bytes.
I'm Doug Black.
I'm with Shahin, as always.
We're recording this while I'm still at ISC 2024 in Hamburg.
The conference has been an impressive gathering,
as always, with very good energy. The Top500 list, with the Intel/HPE Cray Aurora exascale system
coming in at an exaflop, was the biggest piece of news and the source of a lot
of conversation. We had a full episode on this early last week that we encourage you to listen to.
Some additional commentary on the Top500: the Intersect360 Research analyst firm highlighted
the importance of the science Aurora will support, the expectation that Aurora will need to exceed
the performance of Frontier on LINPACK, and a reminder of the challenges the system has had
to get fully up and running. Hyperion Research, another analyst firm, for its part predicted the system will come in at about 1.5 exaflops.
Yeah, we have a lot to look forward to for the next list in six months at SC24.
The next big entry to the Top500 list is expected to be the El Capitan system at Livermore,
the Cray system with AMD.
And we can also hope Microsoft's example
may compel other cloud providers to participate, perhaps all of them with their own homegrown chips.
The system at Jülich in Germany might get there too. While at ISC, we also noted that although
NVIDIA was not an exhibitor, they had a lot of their people here, and NVIDIA was co-branded with vendors all over the conference floor at Lenovo, Supermicro, and Microsoft, to name a few.
NVIDIA was in the air.
Few conversations did not include them.
There's talk that the company is establishing the environment in which everyone else will exist and compete in HPC AI.
They have so many resources to throw at the big challenges of the industry.
They have set such a hot pace for product development. It's pretty amazing.
That seems to be their strategy for trade shows, to reserve all that for their own
GTC events, and then to, quote, paint it green, end quote, at other trade shows. All of that
works when you're big, and it works even better when you're
the only game in town. But of course, the prize is too attractive for others to stop competing for
it, and many other players are very much in the game. Along those lines, there were some interesting
projects with other AI chips. Lots going on to better understand when you need a large, let's say,
10-trillion-parameter model, and when a smaller model might be more advantageous;
when you need a long context window and when not;
and where the industry momentum might make the whole thing a moot point.
SambaNova is working with the RIKEN Center in Japan on Fugaku-LLM, a large language model trained on Japan's fastest supercomputer.
Fugaku, of course, is the former number one system,
and as you all know, I think, just a beautiful machine, only rivaled by its predecessor,
the K computer, just so you know where my biases are. Cerebras, on the other hand,
is working with various national labs and showcased particularly impressive performance
for a molecular dynamics simulation, where it beat Frontier by a factor of,
wait for it, 179 times by dedicating a processor core for each simulated atom.
You can do that, of course, when you've got so many cores.
Yeah, their latest dinner plate-sized wafer has 900,000 cores. That's 4 trillion transistors and 125 petaflops of peak AI performance.
I met with Cerebras this week, and what they're doing continues to be really out of this world.
This time, it's a data flow algorithm for long timescale molecular dynamics simulation.
When I hear companies like SambaNova, Groq, and Cerebras tell their stories, I also think of what Matt Sieger of Oak Ridge told us as a guest on our podcast, talking about post-exascale supercomputing.
We asked if we may see surprises and newcomers among the vendors involved in those systems, and he confirmed that yes, we probably will.
If the result with Cerebras is any indication, AI chip companies especially are probably where we will continue
to see pleasant surprises. We should also highlight the annual student cluster competition at ISC,
where teams of undergraduate students from around the world, under the supervision and sponsorship
of a faculty member, work on problems with a fixed envelope of electricity and configuration
parameters. Some of the problems
can be expected like LINPACK and other problems they have to figure out in real time. It is great
to see the next generation of HPC talent do great work in a short period of time. I tell you, it's
also an excellent recruitment opportunity for anyone looking for rare talent. This year they had an
online part with about 20
competitors and an in-person part with eight teams from around the world. This year's winners
for the online part were all previous champions. SYSU is in first place. That's Sun Yat-sen
University in Guangzhou in China. Second place was Tsinghua University in Beijing, China, which also happens to be number
23 in the global ranking of universities by U.S. News. NTU, the Nanyang Technological University
in Singapore, came in third this time. The in-person part had Tsinghua University as number one.
They also had the highest LINPACK number. Second place was National Tsing Hua University in Taiwan,
similar name but different institution.
And ETH in Zurich, Switzerland came in at number three.
Like the Oscars in the movie industry,
just making it there is quite an achievement
and every one of them is a champion.
All right, that's it for this episode.
Thank you all for being with us.