@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20250421
Episode Date: April 21, 2025. This week: GPU geopolitics; Intel and Silver Lake in Altera FPGA deal; storage is what's next as investment soars; 2nm chips are coming to Taiwan and Arizona. Audio: https://orionx.net/wp-content/uploads/2025/04/HPCNB_20250421.mp3
Transcript
Welcome to HPC News Bytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone.
Welcome to HPC News Bytes.
I'm Doug Black of InsideHPC, and with me is Shaheen Khan of OrionX.net.
We return to technology geopolitics for our top news of the week, and this is
the shifting story of US trade sanctions on exports of advanced AI chips to China. It
was reported last week that the administration would allow Nvidia to sell less sophisticated
GPUs to China in return for a pledge from Nvidia to build a mammoth AI data center in Texas.
But then it emerged that the White House will ban sales not only of Nvidia's H20 GPUs but also AMD's
MI308 AI processor. Both companies are taking significant revenue hits as a result:
Nvidia said it would take a $5.5 billion charge due to the decision,
while AMD is reported to take an $800 million charge.
This reminds me of episode 99 of the HPC podcast with China trade expert Dr. Handel Jones and
his comments that Nvidia has become a significant factor in the geopolitical contest between
the US and China. Chips are one of the biggest US exports
to China. The bigger global dynamics of trade and tariffs are the subject of debate everywhere,
but when it comes to China specifically, we've seen growing bipartisan concern in the US about
China's ambitions and especially the role of AI in all aspects of geopolitics.
So it is not surprising to see even more tightening
of restrictions there.
Nvidia for its part has become more visible in geopolitics.
Governments recognize the importance of the company
in this AI era and want to build a relationship
with the company.
And some of them are thinking about so-called sovereign AI.
On the flip side, Nvidia has made itself more visible when it provides commentary
about government policies and when its leaders meet with heads of state
and other senior officials.
This is techno-politics in more or less full bloom.
AI data management and storage company Hammerspace announced last week
that it raised $100 million in a new venture round,
which is impressive by any standard but also not unusual in the evolving storage
space.
DDN raised $300 million in an investment round from Blackstone last year, while Weka announced
a Series E round of $140 million last May. VAST Data had a $118 million round in December of 2023,
and Pure Storage has raised a total of $530 million
over the last 10 plus years.
Shaheen, it's occurred to me over the last few years,
as data in the AI era has become the new gold or oil,
that the position of storage in the industry has changed.
It's become cool.
I recall about eight years ago talking to an old IBM storage hand who told me that in his 20 plus
years of selling storage, he'd only come across two or three CIOs who came up through storage.
But now perhaps that's changing. Yeah, storage sure looks like what's next.
It's a unique product in the digital world in the sense
that data occupies it and tends never to leave,
and it keeps growing.
Also, new storage technology increasingly
moves storage into the higher-end realms of data science and data
management.
And then computational storage sets off
a domino effect that can reach HPC and AI types of workloads.
The growth is in all dimensions.
The sources of data have expanded from organizations to customers to people and increasingly to things.
The number of data items from each source has grown as we instrument processes and collect more categories of data.
The frequency of data collection has grown, and so has the
richness of each item of data.
And then there are the complexities and value of unstructured data.
The uses of data show its benefits, while not knowing when you might need it again prohibits
its deletion.
So we see a lot of interest in storage for AI and storage that actually uses AI.
That's the part to
really watch.
Back in 2015, Intel paid $16.7 billion to acquire Altera, a market leader in the
field-programmable gate array (FPGA) sector. Altera had revenues of $1.9 billion in 2014
and was projected to grow in an expanding market. 2014 was, of
course, two or three years before AI and its catapulting impact on GPUs was visible. So
while the FPGA market has grown, it has clearly yielded the spotlight to GPUs. Under Pat Gelsinger,
Intel's previous CEO, Altera's books were separated from the rest of Intel
in October of 2023, and it was set up as its own company
with its own CEO in February of 2024.
In the continuing transformation of Intel,
their new CEO, Lip-Bu Tan, has continued the process
and sold 51% of Altera to Silver Lake,
a private equity firm that was
also a key player in the Dell-EMC merger and privatization deal. Intel will receive $4.46
billion from that 51%, which values Altera at about half of what Intel paid for it.
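As a quick back-of-the-envelope sanity check on that valuation (a sketch using only the figures mentioned above):

```python
# Back-of-the-envelope check of the implied Altera valuation,
# using the figures from the discussion above.

stake_sold = 0.51               # fraction of Altera sold to Silver Lake
proceeds_billion = 4.46         # what Intel receives, in $B
purchase_price_billion = 16.7   # what Intel paid for Altera in 2015, in $B

implied_valuation = proceeds_billion / stake_sold
print(f"Implied Altera valuation: ${implied_valuation:.2f}B")  # ≈ $8.75B
print(f"Fraction of 2015 price:   {implied_valuation / purchase_price_billion:.0%}")  # ≈ 52%
```

That works out to roughly $8.75 billion, or about 52% of the 2015 purchase price, consistent with "about half of what Intel paid."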
Intel still owns a substantial portion, so they're incented to carry on supporting Altera's
existing business links to Intel, while Silver Lake has control of running the company and can
pursue directions that may not include Intel. The market seems to like the deal. The sale was set
up nicely by Pat Gelsinger, and it is a good move, given the realities of the market and the company
and the cash that is needed to push ahead with high-end manufacturing. I see it as a continuation of, rather than a departure
from, previous strategy. I think you're right that AI and then GPUs stole the show. Despite
some good efforts that showcased FPGAs for AI in the cloud, big cloud providers ended
up building their own AI accelerators. As the name implies, FPGAs are chips that can be programmed
after they're manufactured.
So they can be customized for different applications.
This is in contrast to chips that are ASICs,
application specific integrated circuits,
which cannot be reprogrammed.
Many chips, of course, are a hybrid
and have a small area on the chip that can be programmed for specific customers or uses.
So FPGAs provide a lot of flexibility
at the expense of optimization and speed.
Programming them is about defining the logic,
placing it on the chip, and routing data across the chip.
That uses so-called hardware description languages, or HDLs;
Verilog and VHDL are the two primary examples.
This is harder and more time-consuming than programming GPUs, which itself is much harder than traditional CPU programming.
Broadly speaking, FPGAs are very good at super-long pipelines,
which makes them suitable for packet processing,
streaming applications, and really any application that moves a lot of data.
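A rough software analogy may help picture a pipeline (a conceptual sketch only: real FPGA pipelines are fixed hardware stages clocked in lockstep, not Python generators, and the stage names here are made up for illustration):

```python
# Conceptual software analogy of a hardware pipeline: each stage
# transforms items as they stream through, one after another.
# (Illustrative only; hypothetical stages, not real FPGA tooling.)

def parse(packets):
    # Stage 1: normalize each incoming item.
    for p in packets:
        yield p.strip()

def filter_nonempty(items):
    # Stage 2: drop items that carry no payload.
    for i in items:
        if i:
            yield i

def tag(items):
    # Stage 3: attach a sequence number to each surviving item.
    for n, i in enumerate(items):
        yield (n, i)

# Compose the stages; data streams through without being materialized.
stream = tag(filter_nonempty(parse(["  a ", "", " b", "c  "])))
print(list(stream))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```

The point of the analogy is throughput: once the pipeline is full, every stage works on a different item at the same time, which is exactly what a hardware pipeline does on every clock cycle.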
Let's close with a brief mention that AMD announced
that it would use TSMC's two nanometer fab in Arizona
when that facility goes online in a few years.
TSMC's Arizona facility is currently in production with the N4 process
and expects to have N3 in production in 2028,
so this one will come after that.
TSMC's N2 fab in Taiwan and Intel's 18A fab in Arizona
both expect to be in volume production
in the second half of this year.
It sure looks like Intel is, in fact, closing the distance
and is catching up with the leading edge.
We have to add that while the two fabs are both in the 2nm zone and use the so-called
gate all around technology, they are a bit different.
TSMC is expected to be a tad denser and Intel a tad faster, but we shall see how they do
in practice.
All right, that's it for this episode.
Thank you all for being with us.
HPC News Bytes is a production of OrionX in association with Inside HPC. Shaheen Khan and Doug Black host the show. Every episode is featured on insidehpc.com and posted on orionx.net.
Thank you for listening.