In The Arena by TechArena - AI Connectivity & Chiplet Innovation at Alphawave Semi Unveiled
Episode Date: September 13, 2024
Letizia Giuliano of Alphawave Semi discusses advancements in AI connectivity, chiplet designs, and the path toward open standards at the AI Hardware Summit with host Allyson Klein.
Transcript
Welcome to the Tech Arena, featuring authentic discussions between tech's leading innovators
and our host, Allyson Klein.
Now let's step into the arena.
Welcome in the arena. My name is Allyson Klein. I'm coming to you from the AI
Hardware Summit in the Bay Area. And I am so delighted to be joined once again by Letizia
Giuliano from Alphawave Semi. Welcome to the program, Letizia. How are you doing?
I'm good. Hi, Allyson. Thanks for inviting me again.
It's always a delight to have you on the show. And I'm so glad that we caught up at AI Hardware.
Why don't we just start, for those who haven't heard the previous episodes that you've been on,
with just a brief introduction about AlphaWave Semi and your role at the company.
Yes. At Alphawave, we deliver solutions for powering high-performance connectivity and compute.
We do that starting from leading-edge connectivity silicon IP.
So we are a leader in high-speed SerDes, including 100 gig and 200 gig,
as well as PCI Express Gen 7 and below.
But we also deliver a custom silicon business that is powered by our winning IP portfolio,
our partnership with Arm,
and our foundry ecosystem, like TSMC for 2.5D and 3D packaging.
So we have all the ingredients needed to build these big AI chips and systems.
At Alphawave, I am responsible for product marketing and management,
so I see these products really coming to life and powering all our customers' systems.
So I'm really excited.
Letizia, you've been on the show so many times before, and we've always talked about the
innovation in chips, and you've got such great purview being involved in so many industry
standards.
We've talked about chiplets before, and I was thinking about chiplets a lot when I was at AI Hardware.
Tell me about where we are with chiplets, and with so many different silicon suppliers out there,
how do you see the industry shaping up in terms of that open chiplet ecosystem we've talked about?
I think 18 months ago, we were talking about when the chiplets were going to come and how we
were going to do that.
Now there is no more talking about it.
We are designing, we are executing, and we are powering this current
generation of products with chiplets.
The thing that we're still working on,
and still have a roadmap for, is the open part.
So chiplets designed today
are mainly for closed systems.
And that is mainly due to the rapid time to market
that is required:
we use infrastructure and standards,
but we're using them in a closed system.
Now we're talking about how, in the coming years,
we're going to make that an open ecosystem.
In our view, and from our customers' and partners' point of view,
it's still going to take a few years before we can talk about open chiplets.
But at Alphawave, we're committed to accelerating that.
One important thing that we did this year is to execute and deliver to our customers an I/O chiplet
with all the goodness of the silicon-proven IP portfolio that we have for Ethernet, PCI
Express, and CXL, and connect it to an NPU or other main SoC die through UCIe.
So that is really exciting for us.
We have a lot of experience with chiplets that can really speed up our customers' cycle time to market. So it's been really exciting to see that coming to life.
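To picture such an I/O chiplet, here is a minimal editor's sketch in Python. The class, lane counts, and rates are invented for illustration; the structure simply mirrors the description above: standard protocols on the front side, a UCIe die-to-die link back to the main SoC.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    protocol: str          # e.g. "Ethernet", "PCIe Gen6", "CXL 3.0"
    lanes: int             # number of lanes
    gbps_per_lane: float   # per-lane data rate in Gb/s

    def bandwidth_gbps(self) -> float:
        return self.lanes * self.gbps_per_lane

# Hypothetical I/O chiplet: standard protocols face the host/network,
# while a UCIe link carries the traffic die-to-die to the main SoC (NPU/GPU).
front_side = [
    Interface("Ethernet", lanes=8, gbps_per_lane=112.0),   # 8 x 112G SerDes
    Interface("PCIe Gen6", lanes=16, gbps_per_lane=64.0),  # x16 at 64 GT/s
]
die_to_die = Interface("UCIe", lanes=64, gbps_per_lane=16.0)

front_bw = sum(i.bandwidth_gbps() for i in front_side)
print(f"Front-side bandwidth: {front_bw:.0f} Gb/s")
print(f"UCIe die-to-die bandwidth: {die_to_die.bandwidth_gbps():.0f} Gb/s")
```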
When you're working with customers on chiplet designs, I know that there's a lot of
specificity in how they want you to land that chiplet. And you want to deliver repeatable designs
that can be applied to multiple configurations.
How do you build in the flexibility
for form factors and other considerations
to be able to meet each customer's demand?
Yeah, that is a really good point
and a good problem to solve.
There is so much customization happening in the world of AI, right?
We heard it here at the AI Hardware Summit:
there is no one solution, no one piece of hardware that fits all.
So all the systems we are building today for our customers need to be
tailored to the particular workload, or to the particular place where they sit in the
data center, right?
That transfers to the hardware we are designing.
It can call for a very complex system with advanced packaging, for example,
or it can be tailored down for a lower-scale application
where we can stay at a lower cost point with a standard package.
And all this creates different form factors and different price points that
we want to hit for the particular system.
And at Alphawave, what we have done is create a reference platform for chiplets
where you start from the physical layer.
That is where UCIe provides a lot of standardization across form factors.
But the platform also goes further, to the chiplet design and the data path that we have,
from Ethernet to PCI Express to UCIe, which can be scaled up or scaled down
to satisfy different form factors, bandwidth requirements,
or package types.
So it's like building blocks for designing your own chiplet, but it comes along with a
full suite of subsystem verification, package design, and guidance on how to build a complete system.
It also comes with many years of experience on what customers really
need in terms of connectivity, and what an AI system needs in terms
of connectivity, and now we make that tailored to multiple types of applications.
For example, our SerDes and our connectivity solutions have
always been multi-protocol.
So with the same connectivity, we can satisfy Ethernet or PCI Express,
or we can satisfy any custom protocol
that needs to connect to the front-end network.
So it's a good fit in this world, where we try to accelerate things and want to reuse a standard,
but you also want the option to have a custom protocol in it.
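To illustrate the scale-up/scale-down idea in this answer, here is a rough editor's sketch. The function, thresholds, and package names are invented; the point is only that one connectivity building block can be instantiated at different bandwidth, cost, and packaging points.

```python
def configure_datapath(protocol: str, lanes: int, gbps_per_lane: float) -> dict:
    """Instantiate one scalable connectivity building block.

    Thresholds and package names are illustrative only: the same design
    scales up or down to hit different bandwidth, cost, and form-factor
    targets, as described above.
    """
    bandwidth = lanes * gbps_per_lane
    # High-bandwidth configurations justify advanced (2.5D/3D) packaging;
    # lower-scale applications can stay on a cheaper standard package.
    package = "2.5D advanced" if bandwidth > 4000 else "standard organic"
    return {"protocol": protocol, "lanes": lanes,
            "bandwidth_gbps": bandwidth, "package": package}

# Same building block, two very different design points:
print(configure_datapath("Ethernet", lanes=64, gbps_per_lane=112.0))
print(configure_datapath("PCIe Gen6", lanes=8, gbps_per_lane=64.0))
```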
When you think about where we are, and you said that an open standard chiplet industry is still a ways off, what do you think it's going to take to get there, Letizia? You mentioned time-to-market factors. Are there other things that come into play? And is it really the dynamics of the market,
and folks not wanting an open industry, that are gating the development of that vision on the hill,
if you will?
Yeah, it's a very important question. At the Hardware Summit over these couple of days,
I've seen a lot of discussion, and I've discussed it myself with some key folks. There is definitely a push from some of the industry:
we want to do everything fast,
we're not going to wait for standards.
And then on the other side, you have people who want to foster collaboration.
And I think, for me, the important part is that we need to foster collaboration in a
new way that can also be fast.
So I see more people now coming together, specifically, for example, in UCIe, to reuse the UCIe building blocks and bring other protocols on top of them.
So we have the Arm ecosystem today proposing CHI chip-to-chip, a protocol for a die-to-die standard
that can reuse the die-to-die UCIe interface as its physical layer stack.
So that is a really good example of how we can collaborate and reuse what is already there.
We don't need to reinvent it; we can just attach to another ecosystem, like the Arm ecosystem. We need more examples like this, where we recognize the
value of other people's work in the industry and reuse it quickly in other
ecosystems.
The same is happening, for example, with UALink, the Ultra Accelerator Link,
or with the Ultra Ethernet Consortium, where we're trying to
reuse all the infrastructure we already built for Ethernet or PCI Express and build a new
protocol on top that can accelerate AI-specific workloads. So that's pretty exciting to see.
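The reuse pattern described here, where a new upper-layer protocol rides an existing lower-layer stack, can be pictured as simple composition. The sketch below is an editor's illustration with simplified layer names; the UALink/Ethernet pairing in particular is an assumption based on the discussion above.

```python
# Each stack is an ordered list of layers, top to bottom.
# The same lower layers are reused under different upper protocols,
# which is the collaboration pattern described above.
UCIE_PHY = ["UCIe adapter", "UCIe physical layer"]
ETHERNET_PHY = ["Ethernet PCS", "Ethernet SerDes/PHY"]

stacks = {
    "UCIe native":    ["PCIe/CXL protocol layer"] + UCIE_PHY,
    "Arm CHI C2C":    ["CHI chip-to-chip protocol"] + UCIE_PHY,
    "Ultra Ethernet": ["UEC transport"] + ETHERNET_PHY,
    # Assumption: UALink rides Ethernet-style SerDes infrastructure.
    "UALink":         ["UALink protocol"] + ETHERNET_PHY,
}

for name, layers in stacks.items():
    print(f"{name}: " + " -> ".join(layers))
```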
That's fantastic. I want to turn the table for a second to connectivity. There was a real
broadening trend at AI Hardware around conversations about connectivity at every
layer you want to think about, from cabling on a motherboard to fabric technologies that connect
an AI training cluster. Connectivity was a focus at the summit, and you have a ton of IP in
connectivity. What are you seeing as the trend here? And can you break it out between what is
needed for AI and what is needed for general-purpose compute? And how do you see industry
standards affecting this space?
Yes, there is a suite of several connectivity technologies
that are enabling AI today, from compute
to what we need for CPU-to-GPU
and GPU-to-GPU connections.
I think we can divide that.
If you look at the classic data center,
we used to have Ethernet, right?
So the front-end network still remains the purview of Ethernet,
and we need to reuse all that experience to build on top of it.
But then if you look at the way we need to scale AI compute inside a rack,
we need to build something that can connect all the GPUs together.
And, fortunately or unfortunately, PCI Express is not evolving as fast as we want, right?
Compute for AI and machine learning is evolving roughly 4x every year, while PCI Express rates move one generation every three or four years. That now paves the road to build custom interfaces, or to
reuse some of the packaging and connection platforms that we already know, but with new protocols. New protocols that are more
tailored for AI, with more power efficiency and lower latencies, in this case really speeding up
the connection between GPU and GPU.
the connection between GPU and GPU. So you see NVLink is a big example, right? There was a good discussion yesterday.
NVLink maybe is the de facto standard
and that is now a new way to talk about standard.
It's a de facto one.
So I think I would like to branch in a way to reuse
what is available there in terms of standard
and then build on top something custom for a custom workload.
So that is the way that I think the trend is going right now.
And that will help to speed it up where we have this bottleneck of network connectivity
that is needed for AI workloads.
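To put rough numbers on that mismatch, here is a back-of-the-envelope editor's sketch using the growth rates quoted above: about 4x per year for AI compute demand, and about one PCIe generation, i.e. a doubling, every three to four years. The figures are illustrative, not forecasts.

```python
# Compare AI compute demand growth against PCIe bandwidth growth.
# Rates are the rough figures quoted above, not precise forecasts.
for t in range(7):
    compute = 4 ** t         # compute demand: ~4x per year
    pcie = 2 ** (t / 3.5)    # PCIe bandwidth: ~2x every 3.5 years
    print(f"year {t}: compute x{compute:>6}, PCIe bandwidth x{pcie:4.1f}, "
          f"gap x{compute / pcie:8.1f}")
```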
When you think about the industry's response to these standards,
what do you think it's going to take to coalesce around them?
And is it a fait accompli
or is there still work left to do?
There is still work to do,
and we are engineers;
we like to solve problems.
I think having a better way
to create platforms for standardization
and interoperability,
with these new consortiums that are coming out,
is really what we need right now.
So the UEC is doing that,
and UALink is coming up as well.
And all this emphasis on interoperability between different solutions out there
is where we really have to put our focus.
And at Alphawave, we are really promoting all of that.
We are always at the forefront of interoperability activity for any type
of connectivity that we and our customers use.
And we also create platforms and test vehicles for doing that.
So all our products, including our chiplets and our proofs of concept, are being used today
to create an interoperability platform on the Ethernet side, the PCI Express side, and UCIe.
So we're pretty excited about that.
And I think this is what I see in all the new industry forums coming up:
more and more emphasis on interoperability and ecosystem.
Now, you have released some products in this space in 2024.
Can you share the new entrants?
Yes.
This year, in June, we launched our multi-standard I/O product,
which is the chiplet
I was talking about before.
It's our AlphaChip 1600.
And that is a good example
of our IP implemented in a chiplet
specifically for connectivity,
like PCI Express Gen 6, CXL 3.0, and multi-lane 100 gig Ethernet.
So that is a perfect ingredient for all those AI systems where we need to connect to an NPU,
or, for example, NPU or GPU to CPU.
Another important ingredient is HBM.
So we have launched our HBM3 running at 9.6 gig
in partnership with all our memory vendor partners,
and moving forward there is HBM4 coming up,
which is really exciting.
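For reference, the quoted 9.6 gig per pin translates to per-stack bandwidth as in this editor's sketch; the 1024-bit interface width is the standard HBM3 figure, assumed here for illustration.

```python
# HBM3 per-stack bandwidth at the quoted 9.6 Gb/s pin rate.
pin_rate_gbps = 9.6     # per-pin data rate quoted above
interface_bits = 1024   # standard HBM3 interface width per stack (assumed)

bandwidth_gbps = pin_rate_gbps * interface_bits  # gigabits per second
bandwidth_tbytes = bandwidth_gbps / 8 / 1000     # terabytes per second
print(f"Per-stack bandwidth: {bandwidth_gbps:.0f} Gb/s "
      f"= {bandwidth_tbytes:.2f} TB/s")
```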
Letizia, it's always lovely to talk to you.
I only have one more question for you.
Where can folks engage with you
and find out more about the solutions
we talked about today,
as well as connect with your team
to discuss potential implementations?
Definitely, our website is full of resources,
awavesemi.com,
and you can find us on LinkedIn.
Please follow us.
We also have good informational webinars that are always useful.
And feel free to ping us on LinkedIn.
Fantastic.
Thanks so much for your time today.
It's always a pleasure.
Thank you.
Thank you so much.
Thanks for joining the Tech Arena.
Subscribe and engage at our website, thetecharena.net.
All content is copyright by The Tech Arena.